The Gilbert–Elliott model is a simple channel model introduced by Edgar Gilbert[2] and E. O. Elliott[3] that is widely used to describe burst error patterns in transmission channels and to simulate the digital error performance of communications links.

It is based on a Markov chain with two states, G (for good or gap) and B (for bad or burst). In state G the probability of transmitting a bit correctly is k, and in state B it is h. Usually,[4] it is assumed that k = 1. Gilbert provided equations for deriving the other three parameters (the G and B state transition probabilities and h) from a given success/failure sequence. In his example, the sequence was too short to estimate h reliably (the calculation yielded a negative probability), so Gilbert assumed h = 0.5.
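A minimal simulation sketch may make the two-state behaviour concrete. The code below is illustrative rather than taken from the cited sources: the parameter names `p_gb` and `p_bg` (the G→B and B→G transition probabilities) are assumptions chosen here, and the defaults k = 1 and h = 0.5 simply follow the description above.

```python
import random

def gilbert_elliott_errors(n_bits, p_gb, p_bg, k=1.0, h=0.5, seed=None):
    """Simulate a Gilbert-Elliott channel; return a list of per-bit error flags.

    p_gb : probability of moving from the good state G to the bad state B
    p_bg : probability of moving from the bad state B back to G
    k    : probability of correct transmission in state G (often taken as 1)
    h    : probability of correct transmission in state B
    """
    rng = random.Random(seed)
    state = "G"                      # start in the good state
    errors = []
    for _ in range(n_bits):
        correct_prob = k if state == "G" else h
        errors.append(rng.random() >= correct_prob)   # True marks a bit error
        # Markov transition for the next bit
        if state == "G":
            state = "B" if rng.random() < p_gb else "G"
        else:
            state = "G" if rng.random() < p_bg else "B"
    return errors

# Example: long error-free runs interrupted by short bursts of errors
flags = gilbert_elliott_errors(100_000, p_gb=0.01, p_bg=0.3, seed=1)
print(f"overall bit error rate: {sum(flags) / len(flags):.4f}")
```

With k = 1 every error falls inside a visit to state B, so errors arrive in bursts whose typical length is governed by p_bg, while p_gb controls how often bursts occur.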
1. Federal Standard 1037C. http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm
2. Gilbert, E. N. (1960), "Capacity of a burst-noise channel", Bell System Technical Journal, 39 (5): 1253–1265, doi:10.1002/j.1538-7305.1960.tb03959.x.
3. Elliott, E. O. (1963), "Estimates of error rates for codes on burst-noise channels", Bell System Technical Journal, 42 (5): 1977–1997, doi:10.1002/j.1538-7305.1963.tb00955.x.
4. Lemmon, J. J. (2002), "Wireless link statistical bit error model", US National Telecommunications and Information Administration (NTIA) Report 02-394.