In statistics, approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. For example, consider two series of data:

Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, ...), which alternates 0 and 1.
Series B: a series of 0s and 1s in which each value is chosen at random, with equal probability of being 0 or 1.
Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series A is perfectly regular: knowing a term has the value of 1 enables one to predict with certainty that the next term will have the value of 0. In contrast, series B is randomly valued: knowing a term has the value of 1 gives no insight into what value the next term will have.
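To make the distinction concrete, the following is a minimal NumPy sketch of the standard ApEn(m, r) computation, where m is the embedding dimension and r the tolerance. The function name apen, the choices m = 2 and r = 0.2, and the 64-point example series are illustrative assumptions, not part of the original text; since the example series are binary, any fixed r below 1 makes window matching equivalent to exact equality.

```python
import numpy as np

def apen(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus's formulation)."""
    u = np.asarray(u, dtype=float)
    N = len(u)

    def phi(m):
        # All overlapping length-m windows of the series.
        x = np.array([u[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of windows; self-matches
        # are included, which keeps every count strictly positive.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # C_i: fraction of windows within tolerance r of window i.
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
series_a = np.tile([0, 1], 32)            # perfectly regular: 0, 1, 0, 1, ...
series_b = rng.integers(0, 2, size=64)    # random 0s and 1s

print(apen(series_a))  # close to 0: the series is completely predictable
print(apen(series_b))  # substantially larger: fluctuations are unpredictable
```

On series A, the relative frequency of each length-2 window is essentially unchanged when the windows are extended to length 3, so the two averages cancel and ApEn is near zero; on series B, extending a window makes its continuation far less predictable, yielding a larger value.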
Regularity was originally measured by exact regularity statistics, which have mainly centered on various entropy measures. However, accurate entropy calculation requires vast amounts of data, and the results are greatly influenced by system noise, so it is not practical to apply these methods to experimental data. ApEn was first proposed (under a different name) by Aviad Cohen and Itamar Procaccia as an approximate algorithm to compute an exact regularity statistic, Kolmogorov–Sinai entropy, and was later popularized by Steve M. Pincus. ApEn was initially used to analyze chaotic dynamics and medical data, such as heart rate, and its applications later spread to finance, physiology, human factors engineering, and climate sciences.