Q-function
Statistics function

In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, Q ( x ) {\displaystyle Q(x)} is the probability that a normal (Gaussian) random variable will take a value more than x {\displaystyle x} standard deviations above its mean. Equivalently, Q ( x ) {\displaystyle Q(x)} is the probability that a standard normal random variable takes a value larger than x {\displaystyle x} .

If Y {\displaystyle Y} is a Gaussian random variable with mean μ {\displaystyle \mu } and variance σ 2 {\displaystyle \sigma ^{2}} , then X = Y − μ σ {\displaystyle X={\frac {Y-\mu }{\sigma }}} is standard normal and

P ( Y > y ) = P ( X > x ) = Q ( x ) {\displaystyle P(Y>y)=P(X>x)=Q(x)}

where x = y − μ σ {\displaystyle x={\frac {y-\mu }{\sigma }}} .
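For example, this standardization can be checked numerically in Python with the standard library's statistics.NormalDist (the values of μ, σ, and y below are illustrative):

```python
from statistics import NormalDist

def Q(x):
    """Tail probability of the standard normal distribution."""
    return 1.0 - NormalDist().cdf(x)

# Illustrative values: Y ~ N(mu, sigma^2) with mu = 3, sigma = 2
mu, sigma = 3.0, 2.0
y = 5.0

p_direct = 1.0 - NormalDist(mu, sigma).cdf(y)  # P(Y > y)
p_standardized = Q((y - mu) / sigma)           # Q((y - mu) / sigma)
print(p_direct, p_standardized)                # both equal Q(1) ≈ 0.158655
```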

Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.

Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.


Definition and basic properties

Formally, the Q-function is defined as

Q ( x ) = 1 2 π ∫ x ∞ exp ⁡ ( − u 2 2 ) d u . {\displaystyle Q(x)={\frac {1}{\sqrt {2\pi }}}\int _{x}^{\infty }\exp \left(-{\frac {u^{2}}{2}}\right)\,du.}

Thus,

Q ( x ) = 1 − Q ( − x ) = 1 − Φ ( x ) , {\displaystyle Q(x)=1-Q(-x)=1-\Phi (x)\,\!,}

where Φ ( x ) {\displaystyle \Phi (x)} is the cumulative distribution function of the standard normal distribution.

The Q-function can be expressed in terms of the error function, or the complementary error function, as4

Q ( x ) = 1 2 ( 2 π ∫ x / 2 ∞ exp ⁡ ( − t 2 ) d t ) = 1 2 − 1 2 erf ⁡ ( x 2 )      -or- = 1 2 erfc ⁡ ( x 2 ) . {\displaystyle {\begin{aligned}Q(x)&={\frac {1}{2}}\left({\frac {2}{\sqrt {\pi }}}\int _{x/{\sqrt {2}}}^{\infty }\exp \left(-t^{2}\right)\,dt\right)\\&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)~~{\text{ -or-}}\\&={\frac {1}{2}}\operatorname {erfc} \left({\frac {x}{\sqrt {2}}}\right).\end{aligned}}}
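Since erfc is available in most standard math libraries, this relation gives a direct way to evaluate the Q-function; a minimal Python sketch:

```python
from math import erf, erfc, sqrt

def Q(x):
    """Q(x) = (1/2) erfc(x / sqrt(2)) = 1/2 - (1/2) erf(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

print(Q(0.0))  # 0.5
print(Q(1.0))  # ≈ 0.158655254
```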

An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as:5

Q ( x ) = 1 π ∫ 0 π 2 exp ⁡ ( − x 2 2 sin 2 ⁡ θ ) d θ . {\displaystyle Q(x)={\frac {1}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{2\sin ^{2}\theta }}\right)d\theta .}

This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
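As a sketch, Craig's formula can be checked against the erfc form with a simple composite trapezoidal rule (the step count n = 2000 is an arbitrary choice):

```python
from math import exp, sin, pi, erfc, sqrt

def q_craig(x, n=2000):
    """Q(x) for x > 0 via Craig's finite-range integral.
    The integrand tends to 0 as theta -> 0+, so that endpoint is set to 0."""
    h = (pi / 2.0) / n
    def f(theta):
        s = sin(theta)
        return exp(-x * x / (2.0 * s * s)) if s > 0.0 else 0.0
    total = 0.5 * (f(0.0) + f(pi / 2.0))
    for k in range(1, n):
        total += f(k * h)
    return total * h / pi

def q_erfc(x):
    return 0.5 * erfc(x / sqrt(2.0))

print(q_craig(1.0), q_erfc(1.0))  # both ≈ 0.158655
```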

Craig's formula was later extended by Behnad (2020)6 for the Q-function of the sum of two non-negative variables, as follows:

Q ( x + y ) = 1 π ∫ 0 π 2 exp ⁡ ( − x 2 2 sin 2 ⁡ θ − y 2 2 cos 2 ⁡ θ ) d θ , x , y ⩾ 0. {\displaystyle Q(x+y)={\frac {1}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{2\sin ^{2}\theta }}-{\frac {y^{2}}{2\cos ^{2}\theta }}\right)d\theta ,\quad x,y\geqslant 0.}

Bounds and approximations

  • The Q-function can be bounded as

( x 1 + x 2 ) ϕ ( x ) < Q ( x ) < ϕ ( x ) x , x > 0 , {\displaystyle \left({\frac {x}{1+x^{2}}}\right)\phi (x)<Q(x)<{\frac {\phi (x)}{x}},\qquad x>0,}

where ϕ ( x ) {\displaystyle \phi (x)} is the density function of the standard normal distribution; the bounds become increasingly tight for large x.

Using the substitution v = u 2 / 2 {\displaystyle v=u^{2}/2} , the upper bound is derived as follows:

Q ( x ) = ∫ x ∞ ϕ ( u ) d u < ∫ x ∞ u x ϕ ( u ) d u = ∫ x 2 2 ∞ e − v x 2 π d v = − e − v x 2 π | x 2 2 ∞ = ϕ ( x ) x . {\displaystyle Q(x)=\int _{x}^{\infty }\phi (u)\,du<\int _{x}^{\infty }{\frac {u}{x}}\phi (u)\,du=\int _{\frac {x^{2}}{2}}^{\infty }{\frac {e^{-v}}{x{\sqrt {2\pi }}}}\,dv=-{\biggl .}{\frac {e^{-v}}{x{\sqrt {2\pi }}}}{\biggr |}_{\frac {x^{2}}{2}}^{\infty }={\frac {\phi (x)}{x}}.}

Similarly, using ϕ ′ ( u ) = − u ϕ ( u ) {\displaystyle \phi '(u)=-u\phi (u)} and the quotient rule,

( 1 + 1 x 2 ) Q ( x ) = ∫ x ∞ ( 1 + 1 x 2 ) ϕ ( u ) d u > ∫ x ∞ ( 1 + 1 u 2 ) ϕ ( u ) d u = − ϕ ( u ) u | x ∞ = ϕ ( x ) x . {\displaystyle \left(1+{\frac {1}{x^{2}}}\right)Q(x)=\int _{x}^{\infty }\left(1+{\frac {1}{x^{2}}}\right)\phi (u)\,du>\int _{x}^{\infty }\left(1+{\frac {1}{u^{2}}}\right)\phi (u)\,du=-{\biggl .}{\frac {\phi (u)}{u}}{\biggr |}_{x}^{\infty }={\frac {\phi (x)}{x}}.}

Solving for Q(x) provides the lower bound. The geometric mean of the upper and lower bounds gives a suitable approximation for Q ( x ) {\displaystyle Q(x)} :

Q ( x ) ≈ ϕ ( x ) 1 + x 2 , x ≥ 0. {\displaystyle Q(x)\approx {\frac {\phi (x)}{\sqrt {1+x^{2}}}},\qquad x\geq 0.}
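These bounds and the geometric-mean approximation are straightforward to verify numerically; a Python sketch:

```python
from math import exp, pi, sqrt, erfc

def phi(x):  # standard normal density
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

for x in (0.5, 1.0, 2.0, 4.0):
    lower = x / (1.0 + x * x) * phi(x)
    upper = phi(x) / x
    approx = phi(x) / sqrt(1.0 + x * x)  # geometric mean of the two bounds
    assert lower < Q(x) < upper
    print(f"x={x}: {lower:.6g} < {Q(x):.6g} < {upper:.6g}, approx {approx:.6g}")
```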
  • Tighter bounds and approximations of Q ( x ) {\displaystyle Q(x)} can also be obtained by optimizing the following expression 9
Q ~ ( x ) = ϕ ( x ) ( 1 − a ) x + a x 2 + b . {\displaystyle {\tilde {Q}}(x)={\frac {\phi (x)}{(1-a)x+a{\sqrt {x^{2}+b}}}}.} For x ≥ 0 {\displaystyle x\geq 0} , the best upper bound is given by a = 0.344 {\displaystyle a=0.344} and b = 5.334 {\displaystyle b=5.334} with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by a = 0.339 {\displaystyle a=0.339} and b = 5.510 {\displaystyle b=5.510} with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by a = 1 / π {\displaystyle a=1/\pi } and b = 2 π {\displaystyle b=2\pi } with maximum absolute relative error of 1.17%.
  • The Chernoff bound of the Q-function is Q ( x ) ≤ e − x 2 2 , x > 0 {\displaystyle Q(x)\leq e^{-{\frac {x^{2}}{2}}},\qquad x>0}
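A Python sketch of the approximation Q̃ with the best-approximation coefficients a = 0.339 and b = 5.510, together with the simple exponential bound above:

```python
from math import exp, pi, sqrt, erfc

def phi(x):  # standard normal density
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def Q_tilde(x, a=0.339, b=5.510):  # best-approximation coefficients
    return phi(x) / ((1.0 - a) * x + a * sqrt(x * x + b))

for x in (0.0, 1.0, 2.0, 3.0, 5.0):
    rel_err = abs(Q_tilde(x) - Q(x)) / Q(x)
    assert rel_err < 0.0035                  # stated maximum error is 0.27%
    if x > 0:
        assert Q(x) <= exp(-x * x / 2.0)     # simple exponential bound
    print(x, Q_tilde(x), Q(x), rel_err)
```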
  • Improved exponential bounds and a pure exponential approximation are 10
Q ( x ) ≤ 1 4 e − x 2 + 1 4 e − x 2 2 ≤ 1 2 e − x 2 2 , x > 0 {\displaystyle Q(x)\leq {\tfrac {1}{4}}e^{-x^{2}}+{\tfrac {1}{4}}e^{-{\frac {x^{2}}{2}}}\leq {\tfrac {1}{2}}e^{-{\frac {x^{2}}{2}}},\qquad x>0} Q ( x ) ≈ 1 12 e − x 2 2 + 1 4 e − 2 3 x 2 , x > 0 {\displaystyle Q(x)\approx {\frac {1}{12}}e^{-{\frac {x^{2}}{2}}}+{\frac {1}{4}}e^{-{\frac {2}{3}}x^{2}},\qquad x>0}
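The bound chain can be verified numerically (Python sketch; the pure exponential approximation is printed for comparison rather than asserted, since its relative accuracy varies with x):

```python
from math import exp, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

for x in (0.1, 0.5, 1.0, 2.0, 4.0):
    two_term = 0.25 * exp(-x * x) + 0.25 * exp(-x * x / 2.0)
    single_term = 0.5 * exp(-x * x / 2.0)
    approx = exp(-x * x / 2.0) / 12.0 + exp(-2.0 * x * x / 3.0) / 4.0
    assert Q(x) <= two_term <= single_term   # the improved exponential bounds
    print(x, Q(x), two_term, single_term, approx)
```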
  • The above were generalized by Tanash & Riihonen (2020),11 who showed that Q ( x ) {\displaystyle Q(x)} can be accurately approximated or bounded by
Q ~ ( x ) = ∑ n = 1 N a n e − b n x 2 . {\displaystyle {\tilde {Q}}(x)=\sum _{n=1}^{N}a_{n}e^{-b_{n}x^{2}}.} In particular, they presented a systematic methodology to solve the numerical coefficients { ( a n , b n ) } n = 1 N {\displaystyle \{(a_{n},b_{n})\}_{n=1}^{N}} that yield a minimax approximation or bound: Q ( x ) ≈ Q ~ ( x ) {\displaystyle Q(x)\approx {\tilde {Q}}(x)} , Q ( x ) ≤ Q ~ ( x ) {\displaystyle Q(x)\leq {\tilde {Q}}(x)} , or Q ( x ) ≥ Q ~ ( x ) {\displaystyle Q(x)\geq {\tilde {Q}}(x)} for x ≥ 0 {\displaystyle x\geq 0} . With the example coefficients tabulated in the paper for N = 20 {\displaystyle N=20} , the relative and absolute approximation errors are less than 2.831 ⋅ 10 − 6 {\displaystyle 2.831\cdot 10^{-6}} and 1.416 ⋅ 10 − 6 {\displaystyle 1.416\cdot 10^{-6}} , respectively. The coefficients { ( a n , b n ) } n = 1 N {\displaystyle \{(a_{n},b_{n})\}_{n=1}^{N}} for many variations of the exponential approximations and bounds up to N = 25 {\displaystyle N=25} have been released to open access as a comprehensive dataset.12
  • Another approximation of Q ( x ) {\displaystyle Q(x)} for x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} is given by Karagiannidis & Lioumpas (2007)13 who showed for the appropriate choice of parameters { A , B } {\displaystyle \{A,B\}} that
f ( x ; A , B ) = ( 1 − e − A x ) e − x 2 B π x ≈ erfc ⁡ ( x ) . {\displaystyle f(x;A,B)={\frac {\left(1-e^{-Ax}\right)e^{-x^{2}}}{B{\sqrt {\pi }}x}}\approx \operatorname {erfc} \left(x\right).} The absolute error between f ( x ; A , B ) {\displaystyle f(x;A,B)} and erfc ⁡ ( x ) {\displaystyle \operatorname {erfc} (x)} over the range [ 0 , R ] {\displaystyle [0,R]} is minimized by evaluating { A , B } = arg ⁡ min { A , B } 1 R ∫ 0 R | f ( x ; A , B ) − erfc ⁡ ( x ) | d x . {\displaystyle \{A,B\}={\underset {\{A,B\}}{\arg \min }}{\frac {1}{R}}\int _{0}^{R}|f(x;A,B)-\operatorname {erfc} (x)|dx.} Using R = 20 {\displaystyle R=20} and numerically integrating, they found the minimum error occurred when { A , B } = { 1.98 , 1.135 } , {\displaystyle \{A,B\}=\{1.98,1.135\},} which gave a good approximation for ∀ x ≥ 0. {\displaystyle \forall x\geq 0.} Substituting these values and using the relationship between Q ( x ) {\displaystyle Q(x)} and erfc ⁡ ( x ) {\displaystyle \operatorname {erfc} (x)} from above gives Q ( x ) ≈ ( 1 − e − 1.98 x 2 ) e − x 2 2 1.135 2 π x , x ≥ 0. {\displaystyle Q(x)\approx {\frac {\left(1-e^{\frac {-1.98x}{\sqrt {2}}}\right)e^{-{\frac {x^{2}}{2}}}}{1.135{\sqrt {2\pi }}x}},x\geq 0.} Alternative coefficients are also available for the above 'Karagiannidis–Lioumpas approximation' for tailoring accuracy for a specific application or transforming it into a tight bound.14
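A direct Python transcription of the Karagiannidis–Lioumpas approximation (valid for x > 0; its relative error grows to a few percent in the tail):

```python
from math import exp, pi, sqrt, erfc

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def q_kl(x, A=1.98, B=1.135):
    """Karagiannidis-Lioumpas approximation of Q(x), x > 0."""
    return (1.0 - exp(-A * x / sqrt(2.0))) * exp(-x * x / 2.0) / (B * sqrt(2.0 * pi) * x)

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, q_kl(x), Q(x))
```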
  • A tighter and more tractable approximation of Q ( x ) {\displaystyle Q(x)} for positive arguments x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} is given by López-Benítez & Casadevall (2011)15 based on a second-order exponential function:
Q ( x ) ≈ e − a x 2 − b x − c , x ≥ 0. {\displaystyle Q(x)\approx e^{-ax^{2}-bx-c},\qquad x\geq 0.} The fitting coefficients ( a , b , c ) {\displaystyle (a,b,c)} can be optimized over any desired range of arguments in order to minimize the sum of square errors ( a = 0.3842 {\displaystyle a=0.3842} , b = 0.7640 {\displaystyle b=0.7640} , c = 0.6964 {\displaystyle c=0.6964} for x ∈ [ 0 , 20 ] {\displaystyle x\in [0,20]} ) or minimize the maximum absolute error ( a = 0.4920 {\displaystyle a=0.4920} , b = 0.2887 {\displaystyle b=0.2887} , c = 1.1893 {\displaystyle c=1.1893} for x ∈ [ 0 , 20 ] {\displaystyle x\in [0,20]} ). This approximation offers some benefits such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of Q ( x ) {\displaystyle Q(x)} is trivial and does not alter the algebraic form of the approximation).
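A Python sketch using the sum-of-squared-errors coefficients for x ∈ [0, 20]; note that the absolute error stays small even where the relative error grows in the tail:

```python
from math import exp, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def q_lbc(x, a=0.3842, b=0.7640, c=0.6964):
    """Second-order exponential approximation of Q(x), x >= 0."""
    return exp(-a * x * x - b * x - c)

for x in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(x, q_lbc(x), Q(x), abs(q_lbc(x) - Q(x)))
```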
  • A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} was introduced by Abreu (2012)16 based on a simple algebraic expression with only two exponential terms:
Q ( x ) ≥ 1 12 e − x 2 + 1 2 π ( x + 1 ) e − x 2 / 2 , x ≥ 0 , {\displaystyle Q(x)\geq {\frac {1}{12}}e^{-x^{2}}+{\frac {1}{{\sqrt {2\pi }}(x+1)}}e^{-x^{2}/2},\qquad x\geq 0,} Q ( x ) ≤ 1 50 e − x 2 + 1 2 ( x + 1 ) e − x 2 / 2 , x ≥ 0. {\displaystyle Q(x)\leq {\frac {1}{50}}e^{-x^{2}}+{\frac {1}{2(x+1)}}e^{-x^{2}/2},\qquad x\geq 0.}

These bounds are derived from a unified form Q B ( x ; a , b ) = exp ⁡ ( − x 2 ) a + exp ⁡ ( − x 2 / 2 ) b ( x + 1 ) {\displaystyle Q_{\mathrm {B} }(x;a,b)={\frac {\exp(-x^{2})}{a}}+{\frac {\exp(-x^{2}/2)}{b(x+1)}}} , where the parameters a {\displaystyle a} and b {\displaystyle b} are chosen to satisfy specific conditions ensuring the lower ( a L = 12 {\displaystyle a_{\mathrm {L} }=12} , b L = 2 π {\displaystyle b_{\mathrm {L} }={\sqrt {2\pi }}} ) and upper ( a U = 50 {\displaystyle a_{\mathrm {U} }=50} , b U = 2 {\displaystyle b_{\mathrm {U} }=2} ) bounding properties. The resulting expressions are notable for their simplicity and tightness, offering a favorable trade-off between accuracy and mathematical tractability. These bounds are particularly useful in theoretical analysis, such as in communication theory over fading channels. Additionally, they can be extended to bound Q n ( x ) {\displaystyle Q^{n}(x)} for positive integers n {\displaystyle n} using the binomial theorem, maintaining their simplicity and effectiveness.
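Both bounds are easy to check numerically; a Python sketch:

```python
from math import exp, pi, sqrt, erfc

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def abreu_lower(x):
    return exp(-x * x) / 12.0 + exp(-x * x / 2.0) / (sqrt(2.0 * pi) * (x + 1.0))

def abreu_upper(x):
    return exp(-x * x) / 50.0 + exp(-x * x / 2.0) / (2.0 * (x + 1.0))

for x in (0.0, 0.5, 1.0, 2.0, 4.0):
    assert abreu_lower(x) <= Q(x) <= abreu_upper(x)
    print(x, abreu_lower(x), Q(x), abreu_upper(x))
```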

Inverse Q

The inverse Q-function can be related to the inverse error functions:

Q − 1 ( y ) = 2   e r f − 1 ( 1 − 2 y ) = 2   e r f c − 1 ( 2 y ) {\displaystyle Q^{-1}(y)={\sqrt {2}}\ \mathrm {erf} ^{-1}(1-2y)={\sqrt {2}}\ \mathrm {erfc} ^{-1}(2y)}

The function Q − 1 ( y ) {\displaystyle Q^{-1}(y)} finds application in digital communications. It is usually expressed in dB and generally called Q-factor:

Q - f a c t o r = 20 log 10 ( Q − 1 ( y ) )   d B {\displaystyle \mathrm {Q{\text{-}}factor} =20\log _{10}\!\left(Q^{-1}(y)\right)\!~\mathrm {dB} }

where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal to noise ratio that yields a bit error rate equal to y.
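In Python, Q⁻¹ and the Q-factor can be computed from the standard normal inverse CDF in the statistics module (the BER of 10⁻⁹ is only an example value):

```python
from math import log10
from statistics import NormalDist

def Q_inv(y):
    """Inverse Q-function: Q^{-1}(y) = Phi^{-1}(1 - y), for 0 < y < 1."""
    return NormalDist().inv_cdf(1.0 - y)

def q_factor_db(ber):
    """Q-factor in dB for a given bit-error rate."""
    return 20.0 * log10(Q_inv(ber))

print(Q_inv(0.158655254))  # ≈ 1.0
print(q_factor_db(1e-9))   # ≈ 15.6 dB for a BER of 1e-9
```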

Values

The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R, Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.

Q(0.0) = 0.500000000 ≈ 1/2.0000
Q(0.1) = 0.460172163 ≈ 1/2.1731
Q(0.2) = 0.420740291 ≈ 1/2.3768
Q(0.3) = 0.382088578 ≈ 1/2.6172
Q(0.4) = 0.344578258 ≈ 1/2.9021
Q(0.5) = 0.308537539 ≈ 1/3.2411
Q(0.6) = 0.274253118 ≈ 1/3.6463
Q(0.7) = 0.241963652 ≈ 1/4.1329
Q(0.8) = 0.211855399 ≈ 1/4.7202
Q(0.9) = 0.184060125 ≈ 1/5.4330
Q(1.0) = 0.158655254 ≈ 1/6.3030
Q(1.1) = 0.135666061 ≈ 1/7.3710
Q(1.2) = 0.115069670 ≈ 1/8.6904
Q(1.3) = 0.096800485 ≈ 1/10.3305
Q(1.4) = 0.080756659 ≈ 1/12.3829
Q(1.5) = 0.066807201 ≈ 1/14.9684
Q(1.6) = 0.054799292 ≈ 1/18.2484
Q(1.7) = 0.044565463 ≈ 1/22.4389
Q(1.8) = 0.035930319 ≈ 1/27.8316
Q(1.9) = 0.028716560 ≈ 1/34.8231
Q(2.0) = 0.022750132 ≈ 1/43.9558
Q(2.1) = 0.017864421 ≈ 1/55.9772
Q(2.2) = 0.013903448 ≈ 1/71.9246
Q(2.3) = 0.010724110 ≈ 1/93.2478
Q(2.4) = 0.008197536 ≈ 1/121.9879
Q(2.5) = 0.006209665 ≈ 1/161.0393
Q(2.6) = 0.004661188 ≈ 1/214.5376
Q(2.7) = 0.003466974 ≈ 1/288.4360
Q(2.8) = 0.002555130 ≈ 1/391.3695
Q(2.9) = 0.001865813 ≈ 1/535.9593
Q(3.0) = 0.001349898 ≈ 1/740.7967
Q(3.1) = 0.000967603 ≈ 1/1033.4815
Q(3.2) = 0.000687138 ≈ 1/1455.3119
Q(3.3) = 0.000483424 ≈ 1/2068.5769
Q(3.4) = 0.000336929 ≈ 1/2967.9820
Q(3.5) = 0.000232629 ≈ 1/4298.6887
Q(3.6) = 0.000159109 ≈ 1/6285.0158
Q(3.7) = 0.000107800 ≈ 1/9276.4608
Q(3.8) = 0.000072348 ≈ 1/13822.0738
Q(3.9) = 0.000048096 ≈ 1/20791.6011
Q(4.0) = 0.000031671 ≈ 1/31574.3855
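The tabulated values can be regenerated in a few lines of Python:

```python
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

# Reproduce a few of the rows above
for i in range(0, 41, 10):
    x = i / 10.0
    q = Q(x)
    print(f"Q({x:.1f}) = {q:.9f} = 1/{1.0 / q:.4f}")
```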

Generalization to high dimensions

The Q-function can be generalized to higher dimensions:17

Q ( x ) = P ( X ≥ x ) , {\displaystyle Q(\mathbf {x} )=\mathbb {P} (\mathbf {X} \geq \mathbf {x} ),}

where X ∼ N ( 0 , Σ ) {\displaystyle \mathbf {X} \sim {\mathcal {N}}(\mathbf {0} ,\,\Sigma )} follows the multivariate normal distribution with covariance Σ {\displaystyle \Sigma } and the threshold is of the form x = γ Σ l ∗ {\displaystyle \mathbf {x} =\gamma \Sigma \mathbf {l} ^{*}} for some positive vector l ∗ > 0 {\displaystyle \mathbf {l} ^{*}>\mathbf {0} } and positive constant γ > 0 {\displaystyle \gamma >0} . As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be approximated arbitrarily well as γ {\displaystyle \gamma } becomes large.1819
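As an illustration, for small dimensions the multivariate Q-function can be estimated by plain Monte Carlo; the 2-D Python sketch below uses a hand-rolled Cholesky factor (the function name and sample count are illustrative). Plain Monte Carlo degrades rapidly in the deep tail, which is what motivates the tilted estimators developed in the cited references.

```python
import random
from math import sqrt

def mvn_q_mc(x, cov, n=200_000, seed=1):
    """Monte Carlo estimate of P(X >= x componentwise) for X ~ N(0, cov), 2-D case."""
    random.seed(seed)
    # Cholesky factor of the 2x2 covariance: cov = L L^T
    a = sqrt(cov[0][0])
    b = cov[0][1] / a
    c = sqrt(cov[1][1] - b * b)
    hits = 0
    for _ in range(n):
        z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
        if a * z1 >= x[0] and b * z1 + c * z2 >= x[1]:
            hits += 1
    return hits / n

# Independent components with threshold 0: P = Q(0)^2 = 0.25
print(mvn_q_mc([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```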

References

  1. "The Q-function". cnx.org. Archived from the original on 2012-02-29. https://web.archive.org/web/20120229030808/http://cnx.org/content/m11537/latest/

  2. "Basic properties of the Q-function" (PDF). 2009-03-05. Archived from the original (PDF) on 2009-03-25. https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf

  3. Normal Distribution Function – from Wolfram MathWorld http://mathworld.wolfram.com/NormalDistributionFunction.html

  4. "Basic properties of the Q-function" (PDF). 2009-03-05. Archived from the original (PDF) on 2009-03-25. https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf

  5. Craig, J.W. (1991). "A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations" (PDF). MILCOM 91 - Conference record. pp. 571–575. doi:10.1109/MILCOM.1991.258319. ISBN 0-87942-691-8. S2CID 16034807.

  6. Behnad, Aydin (2020). "A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis". IEEE Transactions on Communications. 68 (7): 4117–4125. doi:10.1109/TCOMM.2020.2986209. S2CID 216500014.

  7. Gordon, R.D. (1941). "Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument". Ann. Math. Stat. 12 (3): 364–366. doi:10.1214/aoms/1177731721.

  8. Borjesson, P.; Sundberg, C.-E. (1979). "Simple Approximations of the Error Function Q(x) for Communications Applications". IEEE Transactions on Communications. 27 (3): 639–643. doi:10.1109/TCOM.1979.1094433.

  9. Borjesson, P.; Sundberg, C.-E. (1979). "Simple Approximations of the Error Function Q(x) for Communications Applications". IEEE Transactions on Communications. 27 (3): 639–643. doi:10.1109/TCOM.1979.1094433.

  10. Chiani, M.; Dardari, D.; Simon, M.K. (2003). "New exponential bounds and approximations for the computation of error probability in fading channels" (PDF). IEEE Transactions on Wireless Communications. 24 (5): 840–845. doi:10.1109/TWC.2003.814350. http://campus.unibo.it/85943/1/mcddmsTranWIR2003.pdf

  11. Tanash, I.M.; Riihonen, T. (2020). "Global minimax approximations and bounds for the Gaussian Q-function by sums of exponentials". IEEE Transactions on Communications. 68 (10): 6514–6524. arXiv:2007.06939. doi:10.1109/TCOMM.2020.3006902. S2CID 220514754.

  12. Tanash, I.M.; Riihonen, T. (2020). "Coefficients for Global Minimax Approximations and Bounds for the Gaussian Q-Function by Sums of Exponentials [Data set]". Zenodo. doi:10.5281/zenodo.4112978. https://zenodo.org/record/4112978

  13. Karagiannidis, George; Lioumpas, Athanasios (2007). "An Improved Approximation for the Gaussian Q-Function" (PDF). IEEE Communications Letters. 11 (8): 644–646. doi:10.1109/LCOMM.2007.070470. S2CID 4043576. http://users.auth.gr/users/9/3/028239/public_html/pdf/Q_Approxim.pdf

  14. Tanash, I.M.; Riihonen, T. (2021). "Improved coefficients for the Karagiannidis–Lioumpas approximations and bounds to the Gaussian Q-function". IEEE Communications Letters. 25 (5): 1468–1471. arXiv:2101.07631. doi:10.1109/LCOMM.2021.3052257. S2CID 231639206.

  15. Lopez-Benitez, Miguel; Casadevall, Fernando (2011). "Versatile, Accurate, and Analytically Tractable Approximation for the Gaussian Q-Function" (PDF). IEEE Transactions on Communications. 59 (4): 917–922. doi:10.1109/TCOMM.2011.012711.100105. S2CID 1145101. http://www.lopezbenitez.es/journals/IEEE_TCOM_2011.pdf

  16. Abreu, Giuseppe (2012). "Very Simple Tight Bounds on the Q-Function". IEEE Transactions on Communications. 60 (9): 2415–2420. doi:10.1109/TCOMM.2012.080612.110075.

  17. Savage, I. R. (1962). "Mills ratio for multivariate normal distributions". Journal of Research of the National Bureau of Standards Section B. 66 (3): 93–96. doi:10.6028/jres.066B.011. Zbl 0105.12601. https://doi.org/10.6028%2Fjres.066B.011

  18. Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax tilting". Journal of the Royal Statistical Society, Series B. 79: 125–148. arXiv:1603.04166. Bibcode:2016arXiv160304166B. doi:10.1111/rssb.12162. S2CID 88515228.

  19. Botev, Z. I.; Mackinlay, D.; Chen, Y.-L. (2017). "Logarithmically efficient estimation of the tail of the multivariate normal distribution". 2017 Winter Simulation Conference (WSC). IEEE. pp. 1903–191. doi:10.1109/WSC.2017.8247926. ISBN 978-1-5386-3428-8. S2CID 4626481.