In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, Q ( x ) {\displaystyle Q(x)} is the probability that a normal (Gaussian) random variable will exceed its mean by more than x {\displaystyle x} standard deviations. Equivalently, Q ( x ) {\displaystyle Q(x)} is the probability that a standard normal random variable takes a value larger than x {\displaystyle x} .
If Y {\displaystyle Y} is a Gaussian random variable with mean μ {\displaystyle \mu } and variance σ 2 {\displaystyle \sigma ^{2}} , then X = Y − μ σ {\displaystyle X={\frac {Y-\mu }{\sigma }}} is standard normal and
P ( Y > y ) = P ( X > x ) = Q ( x ) {\displaystyle P(Y>y)=P(X>x)=Q(x)}where x = y − μ σ {\displaystyle x={\frac {y-\mu }{\sigma }}} .
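As a numerical illustration of this standardization (the values of μ, σ and y below are arbitrary examples), the tail probability of a general Gaussian can be computed from the Q-function, here evaluated with the standard-library complementary error function:

```python
import math

def Q(x: float) -> float:
    """Standard normal tail probability, computed via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Illustrative parameters: Y ~ N(mu, sigma^2), tail threshold y.
mu, sigma, y = 3.0, 2.0, 5.0
x = (y - mu) / sigma          # standardized threshold, here x = 1.0
p = Q(x)                      # P(Y > y) = Q(x) ≈ 0.1587
```
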
Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.
Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.
Definition and basic properties
Formally, the Q-function is defined as
Q ( x ) = 1 2 π ∫ x ∞ exp ( − u 2 2 ) d u . {\displaystyle Q(x)={\frac {1}{\sqrt {2\pi }}}\int _{x}^{\infty }\exp \left(-{\frac {u^{2}}{2}}\right)\,du.}Thus,
Q ( x ) = 1 − Q ( − x ) = 1 − Φ ( x ) , {\displaystyle Q(x)=1-Q(-x)=1-\Phi (x)\,\!,}where Φ ( x ) {\displaystyle \Phi (x)} is the cumulative distribution function of the standard normal Gaussian distribution.
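The identity Q(x) = 1 − Φ(x) can be checked numerically: a simple trapezoidal evaluation of the defining tail integral (truncated at a large upper limit, which introduces only negligible error) agrees with the standard normal CDF from Python's standard library:

```python
import math
from statistics import NormalDist

def Q_integral(x: float, upper: float = 40.0, n: int = 200_000) -> float:
    """Trapezoidal approximation of (1/sqrt(2*pi)) * integral_x^inf exp(-u^2/2) du,
    truncated at `upper` (the remaining tail mass is negligible)."""
    h = (upper - x) / n
    f = lambda u: math.exp(-u * u / 2.0)
    s = 0.5 * (f(x) + f(upper)) + sum(f(x + i * h) for i in range(1, n))
    return s * h / math.sqrt(2.0 * math.pi)

for t in (-1.0, 0.0, 0.5, 2.0):
    assert abs(Q_integral(t) - (1.0 - NormalDist().cdf(t))) < 1e-6
```
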
The Q-function can be expressed in terms of the error function, or the complementary error function, as4
Q ( x ) = 1 2 ( 2 π ∫ x / 2 ∞ exp ( − t 2 ) d t ) = 1 2 − 1 2 erf ( x 2 ) -or- = 1 2 erfc ( x 2 ) . {\displaystyle {\begin{aligned}Q(x)&={\frac {1}{2}}\left({\frac {2}{\sqrt {\pi }}}\int _{x/{\sqrt {2}}}^{\infty }\exp \left(-t^{2}\right)\,dt\right)\\&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)~~{\text{ -or-}}\\&={\frac {1}{2}}\operatorname {erfc} \left({\frac {x}{\sqrt {2}}}\right).\end{aligned}}}An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as:5
Q ( x ) = 1 π ∫ 0 π 2 exp ( − x 2 2 sin 2 θ ) d θ . {\displaystyle Q(x)={\frac {1}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{2\sin ^{2}\theta }}\right)d\theta .}This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
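Because the range of integration is finite, Craig's formula lends itself to simple numerical quadrature. The following sketch (midpoint rule; the number of subintervals is an arbitrary choice) reproduces the erfc expression for Q(x):

```python
import math

def Q(x: float) -> float:
    """Reference value via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_craig(x: float, n: int = 10_000) -> float:
    """Midpoint-rule evaluation of (1/pi) * integral_0^{pi/2} exp(-x^2 / (2 sin^2 t)) dt."""
    h = (math.pi / 2.0) / n
    s = sum(math.exp(-x * x / (2.0 * math.sin((i + 0.5) * h) ** 2))
            for i in range(n))
    return s * h / math.pi

for x in (0.5, 1.0, 2.0, 4.0):
    assert math.isclose(Q_craig(x), Q(x), rel_tol=1e-6)
```
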
Craig's formula was later extended by Behnad (2020)6 for the Q-function of the sum of two non-negative variables, as follows:
Q ( x + y ) = 1 π ∫ 0 π 2 exp ( − x 2 2 sin 2 θ − y 2 2 cos 2 θ ) d θ , x , y ⩾ 0. {\displaystyle Q(x+y)={\frac {1}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{2\sin ^{2}\theta }}-{\frac {y^{2}}{2\cos ^{2}\theta }}\right)d\theta ,\quad x,y\geqslant 0.}
Bounds and approximations
- The Q-function is not an elementary function. However, for x > 0 {\displaystyle x>0} it can be upper and lower bounded as78
x 1 + x 2 ϕ ( x ) < Q ( x ) < ϕ ( x ) x , {\displaystyle {\frac {x}{1+x^{2}}}\phi (x)<Q(x)<{\frac {\phi (x)}{x}},}where ϕ ( x ) = 1 2 π e − x 2 / 2 {\displaystyle \phi (x)={\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}} is the density of the standard normal distribution; both bounds become increasingly tight as x {\displaystyle x} grows.
- Tighter bounds and approximations of Q ( x ) {\displaystyle Q(x)} can also be obtained by optimizing the following expression over the parameters a {\displaystyle a} and b {\displaystyle b} :9
Q ~ ( x ) = e − x 2 / 2 ( ( 1 − a ) x + a x 2 + b ) 2 π {\displaystyle {\tilde {Q}}(x)={\frac {e^{-x^{2}/2}}{\left((1-a)x+a{\sqrt {x^{2}+b}}\right){\sqrt {2\pi }}}}}
- The Chernoff bound of the Q-function is
Q ( x ) ≤ e − x 2 2 , x > 0. {\displaystyle Q(x)\leq e^{-{\frac {x^{2}}{2}}},\qquad x>0.}
- Improved exponential bounds and a pure exponential approximation are10
Q ( x ) ≤ 1 12 e − x 2 2 + 1 4 e − 2 3 x 2 , x > 0 , {\displaystyle Q(x)\leq {\tfrac {1}{12}}e^{-{\frac {x^{2}}{2}}}+{\tfrac {1}{4}}e^{-{\frac {2}{3}}x^{2}},\qquad x>0,}where the right-hand side also serves as a tight approximation of Q ( x ) {\displaystyle Q(x)} .
- The above were generalized by Tanash & Riihonen (2020),11 who showed that Q ( x ) {\displaystyle Q(x)} can be accurately approximated or bounded by a sum of exponential functions,
Q ~ ( x ) = ∑ n = 1 N a n e − b n x 2 , {\displaystyle {\tilde {Q}}(x)=\sum _{n=1}^{N}a_{n}e^{-b_{n}x^{2}},}with the coefficients { a n , b n } {\displaystyle \{a_{n},b_{n}\}} optimized in the minimax sense; tabulated coefficients are available.12
- Another approximation of Q ( x ) {\displaystyle Q(x)} for x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} is given by Karagiannidis & Lioumpas (2007),13 who showed that for an appropriate choice of the parameters { A , B } {\displaystyle \{A,B\}}
Q ( x ) ≈ ( 1 − e − A x ) e − x 2 2 B 2 π x . {\displaystyle Q(x)\approx {\frac {\left(1-e^{-Ax}\right)e^{-{\frac {x^{2}}{2}}}}{B{\sqrt {2\pi }}\,x}}.}Improved coefficients for this approximation were later provided by Tanash & Riihonen (2021).14
- A tighter and more tractable approximation of Q ( x ) {\displaystyle Q(x)} for positive arguments x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} is given by López-Benítez & Casadevall (2011)15 based on a second-order exponential function,
Q ( x ) ≈ e − a x 2 − b x − c , {\displaystyle Q(x)\approx e^{-ax^{2}-bx-c},}where a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} are fitting parameters.
- A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} was introduced by Abreu (2012)16 based on a simple algebraic expression with only two exponential terms:
These bounds are derived from a unified form Q B ( x ; a , b ) = exp ( − x 2 ) a + exp ( − x 2 / 2 ) b ( x + 1 ) {\displaystyle Q_{\mathrm {B} }(x;a,b)={\frac {\exp(-x^{2})}{a}}+{\frac {\exp(-x^{2}/2)}{b(x+1)}}} , where the parameters a {\displaystyle a} and b {\displaystyle b} are chosen to satisfy specific conditions ensuring the lower ( a L = 12 {\displaystyle a_{\mathrm {L} }=12} , b L = 2 π {\displaystyle b_{\mathrm {L} }={\sqrt {2\pi }}} ) and upper ( a U = 50 {\displaystyle a_{\mathrm {U} }=50} , b U = 2 {\displaystyle b_{\mathrm {U} }=2} ) bounding properties. The resulting expressions are notable for their simplicity and tightness, offering a favorable trade-off between accuracy and mathematical tractability. These bounds are particularly useful in theoretical analysis, such as in communication theory over fading channels. Additionally, they can be extended to bound Q n ( x ) {\displaystyle Q^{n}(x)} for positive integers n {\displaystyle n} using the binomial theorem, maintaining their simplicity and effectiveness.
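The bounding property of the two-exponential expressions is easy to verify numerically. The sketch below checks them with the parameter values quoted above, together with the Chernoff bound exp(−x²/2), on a grid of positive arguments:

```python
import math

def Q(x: float) -> float:
    """Exact Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_B(x: float, a: float, b: float) -> float:
    """Unified two-exponential form exp(-x^2)/a + exp(-x^2/2) / (b (x + 1))."""
    return math.exp(-x * x) / a + math.exp(-x * x / 2.0) / (b * (x + 1.0))

a_L, b_L = 12.0, math.sqrt(2.0 * math.pi)   # lower-bound parameters
a_U, b_U = 50.0, 2.0                        # upper-bound parameters

for i in range(601):                        # x = 0.00, 0.01, ..., 6.00
    x = i / 100.0
    q = Q(x)
    assert Q_B(x, a_L, b_L) <= q <= Q_B(x, a_U, b_U)
    assert q <= math.exp(-x * x / 2.0)      # Chernoff bound
```
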
Inverse Q
The inverse Q-function can be related to the inverse error functions:
Q − 1 ( y ) = 2 e r f − 1 ( 1 − 2 y ) = 2 e r f c − 1 ( 2 y ) {\displaystyle Q^{-1}(y)={\sqrt {2}}\ \mathrm {erf} ^{-1}(1-2y)={\sqrt {2}}\ \mathrm {erfc} ^{-1}(2y)}The function Q − 1 ( y ) {\displaystyle Q^{-1}(y)} finds application in digital communications. It is usually expressed in dB and generally called Q-factor:
Q - f a c t o r = 20 log 10 ( Q − 1 ( y ) ) d B {\displaystyle \mathrm {Q{\text{-}}factor} =20\log _{10}\!\left(Q^{-1}(y)\right)\!~\mathrm {dB} }where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal-to-noise ratio that yields a bit error rate equal to y.
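A numerical sketch of this computation (the BER value 10⁻³ is an arbitrary example), using the inverse normal CDF from Python's standard library:

```python
import math
from statistics import NormalDist

def Q_inv(y: float) -> float:
    """Inverse Q-function: Q^{-1}(y) = Phi^{-1}(1 - y) = -Phi^{-1}(y)."""
    return -NormalDist().inv_cdf(y)

def q_factor_db(ber: float) -> float:
    """Q-factor in dB for a given bit-error rate."""
    return 20.0 * math.log10(Q_inv(ber))

ber = 1e-3
print(Q_inv(ber))        # ≈ 3.0902
print(q_factor_db(ber))  # ≈ 9.80 dB
```
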
Values
The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R, as well as in Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.
| x   | Q(x)     |
| 0.0 | 0.500000 |
| 0.5 | 0.308538 |
| 1.0 | 0.158655 |
| 1.5 | 0.066807 |
| 2.0 | 0.022750 |
| 2.5 | 0.006210 |
| 3.0 | 0.001350 |
| 3.5 | 0.000233 |
| 4.0 | 0.000032 |
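The tabulated values can be reproduced with a few lines of standard-library Python (shown as a cross-check; for extreme tails a dedicated routine such as erfc itself is preferable to tables):

```python
import math

def Q(x: float) -> float:
    """Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for i in range(9):            # x = 0.0, 0.5, ..., 4.0
    x = i * 0.5
    print(f"Q({x:.1f}) = {Q(x):.6f}")
```
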
Generalization to high dimensions
The Q-function can be generalized to higher dimensions:17
Q ( x ) = P ( X ≥ x ) , {\displaystyle Q(\mathbf {x} )=\mathbb {P} (\mathbf {X} \geq \mathbf {x} ),}where X ∼ N ( 0 , Σ ) {\displaystyle \mathbf {X} \sim {\mathcal {N}}(\mathbf {0} ,\,\Sigma )} follows the multivariate normal distribution with covariance Σ {\displaystyle \Sigma } and the threshold is of the form x = γ Σ l ∗ {\displaystyle \mathbf {x} =\gamma \Sigma \mathbf {l} ^{*}} for some positive vector l ∗ > 0 {\displaystyle \mathbf {l} ^{*}>\mathbf {0} } and positive constant γ > 0 {\displaystyle \gamma >0} . As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be approximated arbitrarily well as γ {\displaystyle \gamma } becomes large.1819
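In the absence of a closed form, the probability can be estimated by plain Monte Carlo simulation. The sketch below treats a bivariate example; the covariance Σ, direction l* and scale γ are illustrative choices, not values from the text, and the cited works use far more efficient estimators for large γ:

```python
import math
import random

random.seed(1)

rho = 0.5                         # Sigma = [[1, rho], [rho, 1]]
gamma, l_star = 0.5, (1.0, 1.0)
# Threshold x = gamma * Sigma * l_star (componentwise inequality X >= x).
x1_thr = gamma * (1.0 * l_star[0] + rho * l_star[1])   # = 0.75
x2_thr = gamma * (rho * l_star[0] + 1.0 * l_star[1])   # = 0.75

n = 200_000
c = math.sqrt(1.0 - rho * rho)    # from the Cholesky factor of Sigma
hits = 0
for _ in range(n):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    u1, u2 = z1, rho * z1 + c * z2    # (u1, u2) ~ N(0, Sigma)
    if u1 >= x1_thr and u2 >= x2_thr:
        hits += 1
est = hits / n
print(est)    # crude estimate of Q(x); roughly 0.10 for this configuration
```
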
References
"The Q-function". cnx.org. Archived from the original on 2012-02-29. https://web.archive.org/web/20120229030808/http://cnx.org/content/m11537/latest/ ↩
"Basic properties of the Q-function" (PDF). 2009-03-05. Archived from the original (PDF) on 2009-03-25. https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf ↩
Normal Distribution Function – from Wolfram MathWorld http://mathworld.wolfram.com/NormalDistributionFunction.html ↩
"Basic properties of the Q-function" (PDF). 2009-03-05. Archived from the original (PDF) on 2009-03-25. https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf ↩
Craig, J.W. (1991). "A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations" (PDF). MILCOM 91 - Conference record. pp. 571–575. doi:10.1109/MILCOM.1991.258319. ISBN 0-87942-691-8. S2CID 16034807. 0-87942-691-8 ↩
Behnad, Aydin (2020). "A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis". IEEE Transactions on Communications. 68 (7): 4117–4125. doi:10.1109/TCOMM.2020.2986209. S2CID 216500014. /wiki/Doi_(identifier) ↩
Gordon, R.D. (1941). "Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument". Ann. Math. Stat. 12 (3): 364–366. doi:10.1214/aoms/1177731721. /wiki/Doi_(identifier) ↩
Borjesson, P.; Sundberg, C.-E. (1979). "Simple Approximations of the Error Function Q(x) for Communications Applications". IEEE Transactions on Communications. 27 (3): 639–643. doi:10.1109/TCOM.1979.1094433. /wiki/Doi_(identifier) ↩
Borjesson, P.; Sundberg, C.-E. (1979). "Simple Approximations of the Error Function Q(x) for Communications Applications". IEEE Transactions on Communications. 27 (3): 639–643. doi:10.1109/TCOM.1979.1094433. /wiki/Doi_(identifier) ↩
Chiani, M.; Dardari, D.; Simon, M.K. (2003). "New exponential bounds and approximations for the computation of error probability in fading channels" (PDF). IEEE Transactions on Wireless Communications. 24 (5): 840–845. doi:10.1109/TWC.2003.814350. http://campus.unibo.it/85943/1/mcddmsTranWIR2003.pdf ↩
Tanash, I.M.; Riihonen, T. (2020). "Global minimax approximations and bounds for the Gaussian Q-function by sums of exponentials". IEEE Transactions on Communications. 68 (10): 6514–6524. arXiv:2007.06939. doi:10.1109/TCOMM.2020.3006902. S2CID 220514754. /wiki/ArXiv_(identifier) ↩
Tanash, I.M.; Riihonen, T. (2020). "Coefficients for Global Minimax Approximations and Bounds for the Gaussian Q-Function by Sums of Exponentials [Data set]". Zenodo. doi:10.5281/zenodo.4112978. https://zenodo.org/record/4112978 ↩
Karagiannidis, George; Lioumpas, Athanasios (2007). "An Improved Approximation for the Gaussian Q-Function" (PDF). IEEE Communications Letters. 11 (8): 644–646. doi:10.1109/LCOMM.2007.070470. S2CID 4043576. http://users.auth.gr/users/9/3/028239/public_html/pdf/Q_Approxim.pdf ↩
Tanash, I.M.; Riihonen, T. (2021). "Improved coefficients for the Karagiannidis–Lioumpas approximations and bounds to the Gaussian Q-function". IEEE Communications Letters. 25 (5): 1468–1471. arXiv:2101.07631. doi:10.1109/LCOMM.2021.3052257. S2CID 231639206. /wiki/ArXiv_(identifier) ↩
Lopez-Benitez, Miguel; Casadevall, Fernando (2011). "Versatile, Accurate, and Analytically Tractable Approximation for the Gaussian Q-Function" (PDF). IEEE Transactions on Communications. 59 (4): 917–922. doi:10.1109/TCOMM.2011.012711.100105. S2CID 1145101. http://www.lopezbenitez.es/journals/IEEE_TCOM_2011.pdf ↩
Abreu, Giuseppe (2012). "Very Simple Tight Bounds on the Q-Function". IEEE Transactions on Communications. 60 (9): 2415–2420. doi:10.1109/TCOMM.2012.080612.110075. /wiki/Doi_(identifier) ↩
Savage, I. R. (1962). "Mills ratio for multivariate normal distributions". Journal of Research of the National Bureau of Standards Section B. 66 (3): 93–96. doi:10.6028/jres.066B.011. Zbl 0105.12601. https://doi.org/10.6028%2Fjres.066B.011 ↩
Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax tilting". Journal of the Royal Statistical Society, Series B. 79: 125–148. arXiv:1603.04166. Bibcode:2016arXiv160304166B. doi:10.1111/rssb.12162. S2CID 88515228. /wiki/ArXiv_(identifier) ↩
Botev, Z. I.; Mackinlay, D.; Chen, Y.-L. (2017). "Logarithmically efficient estimation of the tail of the multivariate normal distribution". 2017 Winter Simulation Conference (WSC). IEEE. pp. 1903–191. doi:10.1109/WSC.2017.8247926. ISBN 978-1-5386-3428-8. S2CID 4626481. 978-1-5386-3428-8 ↩