Regular estimators are a class of statistical estimators that satisfy certain regularity conditions which make them amenable to asymptotic analysis. The convergence in distribution of a regular estimator is, in a sense, locally uniform in the parameter: a small change in the parameter does not dramatically change the distribution of the estimator, which is generally considered a desirable property.
Definition
An estimator $\hat{\theta}_n$ of $\psi(\theta)$ based on a sample of size $n$ is said to be regular if, for every $h$,[2]

$$\sqrt{n}\left(\hat{\theta}_n - \psi\!\left(\theta + h/\sqrt{n}\right)\right)\ \xrightarrow{\;\theta + h/\sqrt{n}\;}\ L_\theta,$$

where the convergence is in distribution under the law corresponding to the parameter $\theta + h/\sqrt{n}$, and $L_\theta$ is some limiting distribution that does not depend on $h$ (usually a normal distribution with mean zero and a variance that may depend on $\theta$).
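As a concrete illustration (a standard example, not spelled out in the cited text), the sample mean of i.i.d. $N(\theta, 1)$ observations is regular for $\psi(\theta) = \theta$: under the local parameter $\theta + h/\sqrt{n}$,

$$\sqrt{n}\left(\bar{X}_n - \left(\theta + h/\sqrt{n}\right)\right) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(X_i - \theta - h/\sqrt{n}\right) \sim N(0, 1)$$

exactly, for every $n$ and every $h$, so the limit $L_\theta = N(0, 1)$ is the same for all $h$.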
Examples of non-regular estimators
Both Hodges' estimator[3] and the James–Stein estimator[4] are non-regular estimators when the population parameter $\theta$ is exactly 0.
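The following Monte Carlo sketch in Python with NumPy (an illustration, not taken from the cited sources; the sample size, number of replications, choice of h values, and the helper name hodges are all assumptions chosen only to make the effect visible) shows the non-regularity of Hodges' estimator at $\theta = 0$: under the local parameter $h/\sqrt{n}$, the rescaled estimation error concentrates near $-h$, so its limit depends on $h$.

# Monte Carlo sketch of the non-regularity of Hodges' estimator at theta = 0.
# n, reps, and the h values below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10_000, 20_000

def hodges(xbar, n):
    # Hodges' estimator: keep the sample mean unless it falls below the
    # threshold n**(-1/4), in which case shrink it all the way to 0.
    return np.where(np.abs(xbar) >= n ** -0.25, xbar, 0.0)

for h in (0.0, 2.0):
    theta_n = h / np.sqrt(n)   # local parameter theta + h/sqrt(n), with theta = 0
    # The sample mean of n i.i.d. N(theta_n, 1) draws is N(theta_n, 1/n),
    # so it can be simulated directly without generating the raw sample.
    xbar = rng.normal(theta_n, 1.0 / np.sqrt(n), size=reps)
    z = np.sqrt(n) * (hodges(xbar, n) - theta_n)   # rescaled estimation error
    print(f"h = {h}: mean = {z.mean():+.3f}, sd = {z.std():.3f}")

With these settings the rescaled error piles up near 0 when h = 0 but near −2 when h = 2, so the limiting distribution changes with h; a regular estimator such as the sample mean would produce roughly the same N(0, 1) spread for every h.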
References
1. van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press.
2. van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press.
3. van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press.
4. Beran, R. (1995). "The Role of Hájek's Convolution Theorem in Statistical Theory".