The characteristic function provides an alternative way to describe a random variable X. As a function of t, it completely determines the behavior and properties of the probability distribution of X. It is equivalent to a probability density function or cumulative distribution function, since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.
If a random variable admits a density function, then the characteristic function is its Fourier dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function $M_X(t)$, then the domain of the characteristic function can be extended to the complex plane, and
$$\varphi_X(-it) = M_X(t).$$
Note, however, that the characteristic function of a distribution is well defined for all real values of t, even when the moment-generating function is not.
The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the Central Limit Theorem uses characteristic functions and Lévy's continuity theorem. Another important application is to the theory of the decomposability of random variables.
For a scalar random variable X the characteristic function is defined as the expected value of $e^{itX}$, where i is the imaginary unit and t ∈ R is the argument of the characteristic function:
$$\varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{\mathbf{R}} e^{itx}\,dF_X(x) = \int_{\mathbf{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{itQ_X(p)}\,dp.$$
Here $F_X$ is the cumulative distribution function of X, $f_X$ is the corresponding probability density function, $Q_X(p)$ is the corresponding inverse cumulative distribution function, also called the quantile function,[2] and the integrals are of the Riemann–Stieltjes kind. If a random variable X has a probability density function, then the characteristic function is its Fourier transform with sign reversal in the complex exponential.[3][4] This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform.[5] For example, some authors[6] define $\varphi_X(t) = \operatorname{E}[e^{-2\pi itX}]$, which is essentially a change of parameter. Other notation may be encountered in the literature: $\hat{p}$ as the characteristic function for a probability measure p, or $\hat{f}$ as the characteristic function corresponding to a density f.
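As a concrete illustration of this definition, the following minimal sketch (assuming NumPy is available; the integration grid and truncation are illustrative choices) approximates $\varphi_X(t) = \operatorname{E}[e^{itX}]$ for a standard normal variable by numerically integrating $e^{itx}f_X(x)$ and compares it with the known closed form $e^{-t^2/2}$.

```python
import numpy as np

def cf_from_density(t, density, grid):
    """Approximate phi_X(t) = E[exp(itX)] = integral of exp(itx) * f_X(x) dx
    by trapezoidal quadrature of the density over a finite grid."""
    integrand = np.exp(1j * t * grid) * density(grid)
    return np.trapz(integrand, grid)

# Standard normal density; its characteristic function is exp(-t^2 / 2).
normal_pdf = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
grid = np.linspace(-10, 10, 20001)   # truncation is adequate because the Gaussian tails are negligible

for t in (0.0, 0.5, 1.0, 2.0):
    approx = cf_from_density(t, normal_pdf, grid)
    print(t, approx.real, np.exp(-t**2 / 2))   # the two columns agree to high accuracy
```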
The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable X takes its values. For common cases such definitions are listed below:
Oberhettinger (1973) provides extensive tables of characteristic functions.
The bijection stated above between probability distributions and characteristic functions is sequentially continuous. That is, whenever a sequence of distribution functions $F_j(x)$ converges (weakly) to some distribution $F(x)$, the corresponding sequence of characteristic functions $\varphi_j(t)$ will also converge, and the limit $\varphi(t)$ will correspond to the characteristic function of the law $F$. More formally, this is stated as
$$X_n \xrightarrow{\ d\ } X \quad\Longleftrightarrow\quad \varphi_{X_n}(t) \to \varphi_X(t)\ \text{for all } t \in \mathbf{R}.$$
This theorem can be used to prove the law of large numbers and the central limit theorem.
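As a numerical sketch of how the continuity theorem underlies the central limit theorem (assuming NumPy; the Uniform(−1, 1) summands are an arbitrary illustrative choice), the characteristic function of the standardized sum of n i.i.d. Uniform(−1, 1) variables, $[\sin(s)/s]^n$ with $s = t/\sqrt{n/3}$, converges pointwise to $e^{-t^2/2}$, the characteristic function of N(0, 1):

```python
import numpy as np

def cf_standardized_uniform_sum(t, n):
    # Uniform(-1, 1) has characteristic function sin(t)/t and variance 1/3, so the
    # standardized sum S_n / sqrt(n/3) has CF [sin(s)/s]^n with s = t / sqrt(n/3).
    s = t / np.sqrt(n / 3.0)
    return np.sinc(s / np.pi) ** n   # np.sinc(x) = sin(pi x) / (pi x)

t = 1.5
for n in (1, 2, 5, 20, 100):
    print(n, cf_standardized_uniform_sum(t, n), np.exp(-t**2 / 2))   # last column is the limit
```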
There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to compute φ when we know the distribution function F (or density f). If, on the other hand, we know the characteristic function φ and want to find the corresponding distribution function, then one of the following inversion theorems can be used.
Theorem. If the characteristic function $\varphi_X$ of a random variable X is integrable, then $F_X$ is absolutely continuous, and therefore X has a probability density function. In the univariate case (i.e. when X is scalar-valued) the density function is given by
$$f_X(x) = F_X'(x) = \frac{1}{2\pi}\int_{\mathbf{R}} e^{-itx}\varphi_X(t)\,dt.$$
In the multivariate case it is
$$f_X(x) = \frac{1}{(2\pi)^n}\int_{\mathbf{R}^n} e^{-i(t\cdot x)}\varphi_X(t)\,\lambda(dt),$$
where $t \cdot x$ is the dot product.
The density function is the Radon–Nikodym derivative of the distribution $\mu_X$ with respect to the Lebesgue measure $\lambda$:
$$f_X(x) = \frac{d\mu_X}{d\lambda}(x).$$
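The inversion formula above lends itself to direct numerical evaluation. The sketch below (assuming NumPy; truncation of the integral at a finite t_max and the grid resolution are illustrative choices) recovers density values from the characteristic functions of the standard normal and standard Laplace distributions.

```python
import numpy as np

def density_from_cf(x, cf, t_max=50.0, n_points=200001):
    """Approximate f_X(x) = (1 / 2pi) * integral over R of exp(-itx) * phi_X(t) dt
    by trapezoidal quadrature over a truncated, finely spaced grid in t."""
    t = np.linspace(-t_max, t_max, n_points)
    integrand = np.exp(-1j * t * x) * cf(t)
    return np.trapz(integrand, t).real / (2 * np.pi)

cf_normal = lambda t: np.exp(-t**2 / 2)      # N(0, 1)
cf_laplace = lambda t: 1.0 / (1.0 + t**2)    # standard Laplace, density (1/2) exp(-|x|)

print(density_from_cf(0.0, cf_normal), 1 / np.sqrt(2 * np.pi))   # both approximately 0.3989
print(density_from_cf(1.0, cf_laplace), 0.5 * np.exp(-1.0))      # both approximately 0.1839
```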
Theorem (Lévy).[14] If $\varphi_X$ is the characteristic function of a distribution function $F_X$, and two points a < b are such that {x | a < x < b} is a continuity set of $\mu_X$ (in the univariate case this condition is equivalent to continuity of $F_X$ at the points a and b), then
$$F_X(b) - F_X(a) = \frac{1}{2\pi}\lim_{T\to\infty}\int_{-T}^{T}\frac{e^{-ita} - e^{-itb}}{it}\,\varphi_X(t)\,dt.$$
Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of $F_X$), then
$$\operatorname{P}[X = a] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} e^{-ita}\,\varphi_X(t)\,dt.$$
Theorem (Gil-Pelaez).[18] For a univariate random variable X, if x is a continuity point of $F_X$, then
$$F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^{\infty}\frac{\operatorname{Im}\!\left[e^{-itx}\varphi_X(t)\right]}{t}\,dt,$$
where the imaginary part of a complex number $z$ is given by $\operatorname{Im}(z) = (z - z^*)/2i$.
And its density function is
$$f_X(x) = \frac{1}{\pi}\int_0^{\infty}\operatorname{Re}\!\left[e^{-itx}\varphi_X(t)\right]dt.$$
The integral may not be Lebesgue-integrable; for example, when X is the discrete random variable that is always 0, it becomes the Dirichlet integral.
Inversion formulas for multivariate distributions are available.[19][20]
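In the univariate case, the Gil-Pelaez formula above can likewise be evaluated numerically. The following sketch (assuming NumPy; the lower cutoff near t = 0 and the truncation at t_max are illustrative choices) computes values of the standard normal distribution function from its characteristic function.

```python
import numpy as np

def cdf_gil_pelaez(x, cf, t_max=100.0, n_points=200001):
    """Approximate F_X(x) = 1/2 - (1/pi) * integral from 0 to infinity of
    Im[exp(-itx) * phi_X(t)] / t dt by trapezoidal quadrature."""
    t = np.linspace(1e-8, t_max, n_points)   # start just above 0; the integrand has a finite limit there
    integrand = np.imag(np.exp(-1j * t * x) * cf(t)) / t
    return 0.5 - np.trapz(integrand, t) / np.pi

cf_normal = lambda t: np.exp(-t**2 / 2)
print(cdf_gil_pelaez(0.0, cf_normal))   # approximately 0.5
print(cdf_gil_pelaez(1.0, cf_normal))   # approximately 0.8413
```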
The set of all characteristic functions is closed under certain operations: for example, the product of finitely many characteristic functions is again a characteristic function (corresponding to the sum of independent random variables with those characteristic functions), as is any convex combination of characteristic functions (corresponding to a mixture of the underlying distributions).
It is well known that any non-decreasing càdlàg function F with limits F(−∞) = 0, F(+∞) = 1 corresponds to a cumulative distribution function of some random variable. There is also interest in finding similar simple criteria for when a given function φ could be the characteristic function of some random variable. The central result here is Bochner's theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine's, Mathias's, or Cramér's, although their application is just as difficult. Pólya's theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.[21]
Bochner's theorem. An arbitrary function φ : R^n → C is the characteristic function of some random variable if and only if φ is positive definite, continuous at the origin, and φ(0) = 1.
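Positive definiteness means that every matrix of the form $[\varphi(t_i - t_j)]_{i,j}$ is positive semi-definite. The sketch below (assuming NumPy; the random grid, its size, and the two test functions are illustrative choices) checks the smallest eigenvalue of such a matrix: for a genuine characteristic function it is non-negative up to rounding error, while for $e^{-t^4}$, which is not a characteristic function, a negative eigenvalue typically appears.

```python
import numpy as np

def min_eigenvalue(phi, n=50, scale=5.0, seed=0):
    """Smallest eigenvalue of the matrix [phi(t_i - t_j)] built on a random grid of points."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(-scale, scale, n)
    m = phi(t[:, None] - t[None, :])
    return np.linalg.eigvalsh(m).min()

print(min_eigenvalue(lambda t: np.exp(-t**2 / 2)))   # non-negative up to rounding: a valid CF
print(min_eigenvalue(lambda t: np.exp(-t**4)))       # typically negative: exp(-t^4) is not a CF
```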
Khinchine’s criterion. A complex-valued, absolutely continuous function φ, with φ(0) = 1, is a characteristic function if and only if it admits the representation
Mathias’ theorem. A real-valued, even, continuous, absolutely integrable function φ, with φ(0) = 1, is a characteristic function if and only if
for n = 0, 1, 2, ..., and all p > 0. Here $H_{2n}$ denotes the Hermite polynomial of degree 2n.
Pólya's theorem. If $\varphi$ is a real-valued, even, continuous function which satisfies the conditions $\varphi(0) = 1$, $\varphi$ is convex for $t > 0$, and $\varphi(t) \to 0$ as $t \to \infty$,
then φ(t) is the characteristic function of an absolutely continuous distribution symmetric about 0.
Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.
Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if $X_1, X_2, \ldots, X_n$ is a sequence of independent (and not necessarily identically distributed) random variables, and
$$S_n = \sum_{i=1}^{n} a_i X_i,$$
where the $a_i$ are constants, then the characteristic function for $S_n$ is given by
$$\varphi_{S_n}(t) = \varphi_{X_1}(a_1 t)\,\varphi_{X_2}(a_2 t)\cdots\varphi_{X_n}(a_n t).$$
In particular, $\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t)$. To see this, write out the definition of characteristic function:
$$\varphi_{X+Y}(t) = \operatorname{E}\left[e^{it(X+Y)}\right] = \operatorname{E}\left[e^{itX}e^{itY}\right] = \operatorname{E}\left[e^{itX}\right]\operatorname{E}\left[e^{itY}\right] = \varphi_X(t)\varphi_Y(t).$$
The independence of X and Y is required to establish the equality of the third and fourth expressions.
Another special case of interest for identically distributed random variables is when $a_i = 1/n$; then $S_n$ is the sample mean. In this case, writing $\bar{X}$ for the mean,
$$\varphi_{\bar{X}}(t) = \left[\varphi_X\!\left(\tfrac{t}{n}\right)\right]^n.$$
Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times:
$$\operatorname{E}\left[X^n\right] = i^{-n}\left[\frac{d^n}{dt^n}\varphi_X(t)\right]_{t=0} = i^{-n}\varphi_X^{(n)}(0).$$
This can be formally written using the derivatives of the Dirac delta function:
$$f_X(x) = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\delta^{(n)}(x)\operatorname{E}[X^n],$$
which allows a formal solution to the moment problem. For example, suppose X has a standard Cauchy distribution. Then $\varphi_X(t) = e^{-|t|}$. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean $\bar{X}$ of n independent observations has characteristic function $\varphi_{\bar{X}}(t) = (e^{-|t|/n})^n = e^{-|t|}$, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
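A quick Monte Carlo check of this conclusion (assuming NumPy; the sample sizes are arbitrary) compares quantiles of the sample mean of standard Cauchy observations against the standard Cauchy quantile function:

```python
import numpy as np

# phi_Xbar(t) = (exp(-|t|/n))^n = exp(-|t|): the sample mean of i.i.d. standard Cauchy
# variables is again standard Cauchy, so its quantiles match the population quantiles.
rng = np.random.default_rng(42)
n, reps = 100, 100_000
means = rng.standard_cauchy((reps, n)).mean(axis=1)

for q in (0.25, 0.5, 0.75, 0.9):
    empirical = np.quantile(means, q)
    exact = np.tan(np.pi * (q - 0.5))   # standard Cauchy quantile function
    print(q, round(empirical, 3), round(exact, 3))
```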
As a further example, suppose X follows a Gaussian distribution, i.e. $X \sim \mathcal{N}(\mu, \sigma^2)$. Then $\varphi_X(t) = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}$ and
$$\operatorname{E}[X] = i^{-1}\varphi_X'(0) = \mu.$$
A similar calculation shows $\operatorname{E}\left[X^2\right] = \mu^2 + \sigma^2$ and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate $\operatorname{E}\left[X^2\right]$.
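Both moments can be checked symbolically by differentiating the characteristic function, as in the following sketch (assuming SymPy is available):

```python
import sympy as sp

# E[X^n] = i^{-n} * d^n/dt^n phi_X(t) at t = 0, for the N(mu, sigma^2) characteristic function.
t, mu, sigma = sp.symbols('t mu sigma', real=True)
phi = sp.exp(sp.I * mu * t - sp.Rational(1, 2) * sigma**2 * t**2)

first = sp.simplify(sp.diff(phi, t).subs(t, 0) / sp.I)         # E[X]   -> mu
second = sp.simplify(sp.diff(phi, t, 2).subs(t, 0) / sp.I**2)  # E[X^2] -> mu**2 + sigma**2
print(first, second)
```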
The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; some authors instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.
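For instance, writing $K_X(t) = \log \varphi_X(t)$, the cumulants are obtained as
$$\kappa_n = i^{-n} K_X^{(n)}(0),$$
and for the Gaussian example above $K_X(t) = i\mu t - \tfrac{1}{2}\sigma^2 t^2$, so that $\kappa_1 = \mu$, $\kappa_2 = \sigma^2$, and all higher cumulants vanish.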
Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. This is a practicable option compared to other approaches, for example, when fitting the stable distribution, for which no closed-form expression for the density is available, making maximum likelihood estimation difficult. Estimation procedures are available that match the theoretical characteristic function to the empirical characteristic function calculated from the data. Paulson et al. (1975)[22] and Heathcote (1977)[23] provide some theoretical background for such an estimation procedure. In addition, Yu (2004)[24] describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020)[25] and Li et al. (2020)[26] for training generative adversarial networks.
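A minimal sketch of such a matching procedure is given below (assuming NumPy and SciPy; this is an illustrative least-squares fit in the spirit of these methods, not the exact procedure of any of the cited papers). It fits a symmetric stable characteristic function $\varphi(t) = \exp(-|ct|^{\alpha})$ to the empirical characteristic function $\hat{\varphi}(t) = \frac{1}{n}\sum_j e^{itx_j}$ of simulated Cauchy data, for which the true parameters are $\alpha = 1$, $c = 1$.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(t, data):
    """Empirical characteristic function (1/n) * sum_j exp(i t x_j) on a grid of t values."""
    return np.exp(1j * np.outer(t, data)).mean(axis=1)

def fit_symmetric_stable(data, t_grid=np.linspace(0.1, 2.0, 20)):
    ecf = empirical_cf(t_grid, data)
    def loss(params):
        alpha, c = params
        model = np.exp(-np.abs(c * t_grid) ** alpha)   # symmetric alpha-stable CF, zero location
        return np.sum(np.abs(ecf - model) ** 2)
    res = minimize(loss, x0=[1.5, 1.0], bounds=[(0.1, 2.0), (1e-3, 10.0)])
    return res.x

rng = np.random.default_rng(0)
data = rng.standard_cauchy(5000)    # Cauchy = symmetric stable with alpha = 1, c = 1
print(fit_symmetric_stable(data))   # roughly [1.0, 1.0]
```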
The gamma distribution with scale parameter θ and shape parameter k has the characteristic function
$$\varphi(t) = (1 - \theta it)^{-k}.$$
Now suppose that we have
$$X \sim \Gamma(k_1, \theta) \quad\text{and}\quad Y \sim \Gamma(k_2, \theta),$$
with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are
$$\varphi_X(t) = (1 - \theta it)^{-k_1}, \qquad \varphi_Y(t) = (1 - \theta it)^{-k_2},$$
which by independence and the basic properties of characteristic functions leads to
$$\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t) = (1 - \theta it)^{-k_1}(1 - \theta it)^{-k_2} = (1 - \theta it)^{-(k_1 + k_2)}.$$
This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k1 + k2, and we therefore conclude
$$X + Y \sim \Gamma(k_1 + k_2, \theta).$$
The result can be extended to n independent gamma distributed random variables with the same scale parameter, and we get
$$\forall i \in \{1, \ldots, n\} : X_i \sim \Gamma(k_i, \theta) \quad\Longrightarrow\quad \sum_{i=1}^{n} X_i \sim \Gamma\!\left(\sum_{i=1}^{n} k_i,\, \theta\right).$$
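A Monte Carlo check of this addition rule (assuming NumPy and SciPy; the parameter values are arbitrary) compares the simulated sum against the claimed Gamma(k1 + k2, θ) distribution with a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k1, k2, theta = 2.0, 3.5, 1.7
x = rng.gamma(shape=k1, scale=theta, size=200_000)
y = rng.gamma(shape=k2, scale=theta, size=200_000)

# A large p-value is consistent with X + Y ~ Gamma(k1 + k2, theta).
print(stats.kstest(x + y, 'gamma', args=(k1 + k2, 0, theta)))
```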
As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.[27]
Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.
The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions).
$$\varphi_X(t) = \int_{\mathbf{R}} e^{itx} p(x)\,dx = \overline{\int_{\mathbf{R}} e^{-itx} p(x)\,dx} = \overline{P(t)},$$
where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from $\varphi_X(t)$ through the inverse Fourier transform:
$$p(x) = \frac{1}{2\pi}\int_{\mathbf{R}} e^{itx} P(t)\,dt = \frac{1}{2\pi}\int_{\mathbf{R}} e^{itx}\,\overline{\varphi_X(t)}\,dt.$$
Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
Another related concept is the representation of probability distributions as elements of a reproducing kernel Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of the kernel function.
Lukacs (1970), p. 196. - Lukacs, E. (1970). Characteristic functions. London: Griffin.
Shaw, W. T.; McCabe, J. (2009). "Monte Carlo sampling given a Characteristic Function: Quantile Mechanics in Momentum Space". arXiv:0903.1592 [q-fin.CP].
Statistical and Adaptive Signal Processing (2005), p. 79. - Manolakis, Dimitris G.; Ingle, Vinay K.; Kogon, Stephen M. (2005). Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing. Artech House. ISBN 978-1-58053-610-3. https://books.google.com/books?id=3RQfAQAAIAAJ
Billingsley (1995), p. 345. - Billingsley, Patrick (1995). Probability and measure (3rd ed.). John Wiley & Sons. ISBN 978-0-471-00710-4.
Pinsky (2002). - Pinsky, Mark (2002). Introduction to Fourier analysis and wavelets. Brooks/Cole. ISBN 978-0-534-37660-4.
Bochner (1955). - Bochner, Salomon (1955). Harmonic analysis and the theory of probability. University of California Press.
Andersen et al. (1995), Definition 1.10. - Andersen, H.H.; Højbjerre, M.; Sørensen, D.; Eriksen, P.S. (1995). Linear and graphical models for the multivariate complex normal distribution. Lecture Notes in Statistics 101. New York: Springer-Verlag. ISBN 978-0-387-94521-7.
Andersen et al. (1995), Definition 1.20. - Andersen, H.H.; Højbjerre, M.; Sørensen, D.; Eriksen, P.S. (1995). Linear and graphical models for the multivariate complex normal distribution. Lecture Notes in Statistics 101. New York: Springer-Verlag. ISBN 978-0-387-94521-7.
Sobczyk (2001), p. 20. - Sobczyk, Kazimierz (2001). Stochastic differential equations. Kluwer Academic Publishers. ISBN 978-1-4020-0345-5.
Kotz & Nadarajah (2004), p. 37, using 1 as the number of degrees of freedom to recover the Cauchy distribution. - Kotz, Samuel; Nadarajah, Saralees (2004). Multivariate T Distributions and Their Applications. Cambridge University Press.
Lukacs (1970), Corollary 1 to Theorem 2.3.1. - Lukacs, E. (1970). Characteristic functions. London: Griffin.
"Joint characteristic function". www.statlect.com. Retrieved 7 April 2018. https://www.statlect.com/fundamentals-of-probability/joint-characteristic-function
Cuppens (1975), Theorem 2.6.9. - Cuppens, R. (1975). Decomposition of multivariate probabilities. Academic Press. ISBN 9780121994501. https://archive.org/details/decompositionofm00cupp
Named after the French mathematician Paul Lévy.
Shephard (1991a). - Shephard, N. G. (1991a). "From characteristic function to distribution function: A simple framework for the theory". Econometric Theory. 7 (4): 519–529. doi:10.1017/s0266466600004746. https://ora.ox.ac.uk/objects/uuid:a4c3ad11-74fe-458c-8d58-6f74511a476c
Cuppens (1975), Theorem 2.3.2. - Cuppens, R. (1975). Decomposition of multivariate probabilities. Academic Press. ISBN 9780121994501. https://archive.org/details/decompositionofm00cupp
Wendel (1961). - Wendel, J.G. (1961). "The non-absolute convergence of Gil-Pelaez' inversion integral". The Annals of Mathematical Statistics. 32 (1): 338–339. doi:10.1214/aoms/1177705164.
Shephard (1991b). - Shephard, N. G. (1991b). "Numerical integration rules for multivariate inversions". Journal of Statistical Computation and Simulation. 39 (1–2): 37–46. doi:10.1080/00949659108811337. https://ora.ox.ac.uk/objects/uuid:da00666a-4790-4666-a54c-b81fc6fc49cb
Lukacs (1970), p. 84. - Lukacs, E. (1970). Characteristic functions. London: Griffin.
Paulson, Holcomb & Leitch (1975). - Paulson, A.S.; Holcomb, E.W.; Leitch, R.A. (1975). "The estimation of the parameters of the stable laws". Biometrika. 62 (1): 163–170. doi:10.1093/biomet/62.1.163.
Heathcote (1977). - Heathcote, C.R. (1977). "The integrated squared error estimation of parameters". Biometrika. 64 (2): 255–264. doi:10.1093/biomet/64.2.255.
Yu (2004). - Yu, J. (2004). "Empirical characteristic function estimation and its applications" (PDF). Econometric Reviews. 23 (2): 93–123. doi:10.1081/ETC-120039605. https://ink.library.smu.edu.sg/context/soe_research/article/1357/viewcontent/SSRN_id553701.pdf
Ansari, Scarlett & Soh (2020). - Ansari, Abdul Fatir; Scarlett, Jonathan; Soh, Harold (2020). "A Characteristic Function Approach to Deep Implicit Generative Modeling". Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. pp. 7478–7487. https://openaccess.thecvf.com/content_CVPR_2020/html/Ansari_A_Characteristic_Function_Approach_to_Deep_Implicit_Generative_Modeling_CVPR_2020_paper.html
Li et al. (2020). - Li, Shengxi; Yu, Zeyang; Xiang, Min; Mandic, Danilo (2020). "Reciprocal Adversarial Learning via Characteristic Functions". Advances in Neural Information Processing Systems 33 (NeurIPS 2020). https://proceedings.neurips.cc/paper/2020/hash/021f6dd88a11ca489936ae770e4634ad-Abstract.html
Lukacs (1970), Chapter 7. - Lukacs, E. (1970). Characteristic functions. London: Griffin.