Let (X1, …, Xn) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as[2]

$$\widehat{F}_n(t) = \frac{\text{number of elements in the sample} \le t}{n} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{X_i\le t},$$
where $\mathbf{1}_{A}$ is the indicator of the event $A$. For a fixed $t$, the indicator $\mathbf{1}_{X_i\le t}$ is a Bernoulli random variable with parameter $p = F(t)$; hence $n\widehat{F}_n(t)$ is a binomial random variable with mean $nF(t)$ and variance $nF(t)(1-F(t))$. This implies that $\widehat{F}_n(t)$ is an unbiased estimator for $F(t)$.
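To make the definition concrete, here is a minimal sketch in Python (NumPy is assumed; the function name `ecdf` and the simulated sample are illustrative, not from the source):

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF at t: the fraction of sample points <= t."""
    return np.mean(np.asarray(sample) <= t)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)   # simulated N(0, 1) data
print(ecdf(x, 0.0))         # should be close to F(0) = 0.5
```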
However, in some textbooks, the definition is given as

$$\widehat{F}_n(t) = \frac{1}{n+1}\sum_{i=1}^{n}\mathbf{1}_{X_i\le t}.$$

Since the ratio (n + 1)/n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions given above are the same.
By the strong law of large numbers, the estimator $\widehat{F}_n(t)$ converges to F(t) as n → ∞ almost surely, for every value of t:[5]

$$\widehat{F}_n(t)\ \xrightarrow{\text{a.s.}}\ F(t);$$
thus the estimator $\widehat{F}_n(t)$ is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t:[6]

$$\|\widehat{F}_n - F\|_\infty = \sup_{t\in\mathbb{R}}\big|\widehat{F}_n(t) - F(t)\big|\ \xrightarrow{\text{a.s.}}\ 0.$$
The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution $\widehat{F}_n(t)$ and the assumed true cumulative distribution function F. Other norm functions may reasonably be used here instead of the sup-norm. For example, the L2-norm gives rise to the Cramér–von Mises statistic.
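In practice the Kolmogorov–Smirnov statistic can be computed with SciPy's `scipy.stats.kstest`; a brief sketch with simulated data (the sample and the hypothesized distribution are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=500)

# sup-norm distance between the empirical CDF of x and the standard
# normal CDF, together with the associated p-value.
result = stats.kstest(x, "norm")
print(result.statistic, result.pvalue)
```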
The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that pointwise, $\widehat{F}_n(t)$ has an asymptotically normal distribution with the standard $\sqrt{n}$ rate of convergence:[7]

$$\sqrt{n}\big(\widehat{F}_n(t) - F(t)\big)\ \xrightarrow{d}\ \mathcal{N}\big(0,\,F(t)(1 - F(t))\big).$$
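A small Monte Carlo sketch (the sample size, replication count, and evaluation point t = 0 are arbitrary choices) checking that the variance matches F(t)(1 − F(t)) for standard normal data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, t = 1000, 5000, 0.0
F_t = 0.5                              # F(0) = 0.5 for N(0, 1)

# Replicated values of sqrt(n) * (F_n(t) - F(t)) at the fixed point t.
samples = rng.normal(size=(reps, n))
Fn_t = np.mean(samples <= t, axis=1)
z = np.sqrt(n) * (Fn_t - F_t)

print(np.var(z), F_t * (1 - F_t))      # both should be near 0.25
```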
This result is extended by Donsker's theorem, which asserts that the empirical process $\sqrt{n}(\widehat{F}_n - F)$, viewed as a function indexed by $t\in\mathbb{R}$, converges in distribution in the Skorokhod space $D[-\infty,+\infty]$ to the mean-zero Gaussian process $G_F = B\circ F$, where B is the standard Brownian bridge.[8] The covariance structure of this Gaussian process is

$$\operatorname{E}\big[G_F(t_1)\,G_F(t_2)\big] = F(t_1\wedge t_2) - F(t_1)\,F(t_2).$$
The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding:[9]

$$\limsup_{n\to\infty}\frac{\sqrt{n}}{\ln^2 n}\,\big\|\sqrt{n}(\widehat{F}_n - F) - G_{F,n}\big\|_\infty < \infty \quad \text{almost surely},$$

where the $G_{F,n}$ are versions of the Gaussian process $G_F = B\circ F$ constructed on the same probability space as $\widehat{F}_n$.
Alternatively, the rate of convergence of $\sqrt{n}(\widehat{F}_n - F)$ can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of $\sqrt{n}\,\|\widehat{F}_n - F\|_\infty$:[10]

$$\Pr\!\Big(\sqrt{n}\,\|\widehat{F}_n - F\|_\infty > z\Big) \le 2e^{-2z^2} \quad \text{for every } z > 0.$$
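A Monte Carlo sketch of this bound for uniform samples, where the sup-norm reduces to a maximum over the order statistics (all parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, z = 200, 10_000, 1.0

# For Uniform(0, 1) data the true CDF is F(t) = t, and the sup-norm
# distance is max_i max(i/n - u_(i), u_(i) - (i-1)/n).
u = np.sort(rng.uniform(size=(reps, n)), axis=1)
i = np.arange(1, n + 1)
sup = np.maximum(i / n - u, u - (i - 1) / n).max(axis=1)

print(np.mean(np.sqrt(n) * sup > z))   # empirical tail probability
print(2 * np.exp(-2 * z**2))           # DKW bound, about 0.2707
```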
In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression $\sqrt{n}\,\|\widehat{F}_n - F\|_\infty$ converges in distribution to $\|B\|_\infty$, which has the Kolmogorov distribution that does not depend on the form of F.
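SciPy exposes this limiting law as `scipy.stats.kstwobign`; for example (the probability levels below are illustrative):

```python
from scipy import stats

# Limiting distribution of sqrt(n) * ||F_n - F||_inf for continuous F.
print(stats.kstwobign.sf(1.0))     # P(||B||_inf > 1), about 0.27
print(stats.kstwobign.ppf(0.95))   # 95% critical value, about 1.358
```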
Another result, which follows from the law of the iterated logarithm, is that[11]

$$\limsup_{n\to\infty}\frac{\sqrt{n}\,\|\widehat{F}_n - F\|_\infty}{\sqrt{2\ln\ln n}} \le \frac{1}{2} \quad \text{almost surely},$$

and

$$\liminf_{n\to\infty}\sqrt{2n\ln\ln n}\;\|\widehat{F}_n - F\|_\infty = \frac{\pi}{2} \quad \text{almost surely}.$$
By the Dvoretzky–Kiefer–Wolfowitz inequality, the interval that contains the true CDF, F(x), with probability 1 − α is specified as

$$\widehat{F}_n(x) - \varepsilon \le F(x) \le \widehat{F}_n(x) + \varepsilon, \quad \text{where } \varepsilon = \sqrt{\frac{\ln\frac{2}{\alpha}}{2n}}.$$
Using the above bounds, one can plot the empirical CDF, the true CDF, and the corresponding confidence band for different distributions with any of the statistical implementations listed below.
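For instance, a minimal sketch with NumPy, SciPy, and Matplotlib for simulated standard normal data (the sample size and α are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(4)
n, alpha = 100, 0.05
x = np.sort(rng.normal(size=n))

# Empirical CDF evaluated at the order statistics.
Fn = np.arange(1, n + 1) / n

# DKW band half-width: eps = sqrt(ln(2/alpha) / (2n)).
eps = np.sqrt(np.log(2 / alpha) / (2 * n))

plt.step(x, Fn, where="post", label="empirical CDF")
plt.plot(x, stats.norm.cdf(x), label="true CDF")
plt.fill_between(x, np.clip(Fn - eps, 0, 1), np.clip(Fn + eps, 0, 1),
                 step="post", alpha=0.3, label="95% DKW band")
plt.legend()
plt.show()
```

The band is clipped to [0, 1], since a CDF cannot leave that range.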
A non-exhaustive list of software implementations of the empirical distribution function includes:

- In R, the ecdf function (in the base stats package) returns the empirical CDF of a numeric vector as a step function.
- In MATLAB, the ecdf function in the Statistics and Machine Learning Toolbox computes and plots empirical CDFs.
- In Python, statsmodels provides statsmodels.distributions.empirical_distribution.ECDF.
- Matplotlib, since version 3.8.0, provides Axes.ecdf for plotting empirical cumulative distributions.
Dekking, Michel; et al. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London: Springer. p. 219. ISBN 978-1-85233-896-1. OCLC 262680588.
van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 265. ISBN 0-521-78450-6.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. Springer. p. 36, Definition 2.4. ISBN 978-1-4471-3675-0.
Madsen, H.O.; Krenk, S.; Lind, S.C. (2006). Methods of Structural Safety. Dover Publications. pp. 148–149. ISBN 0-486-44597-6.
van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 266. ISBN 0-521-78450-6.
van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 268. ISBN 0-521-78450-6.
"What's new in Matplotlib 3.8.0 (Sept 13, 2023)". Matplotlib documentation. https://matplotlib.org/stable/users/prev_whats_new/whats_new_3.8.0.html#axes-ecdf