The channel coding theorem states that for any $\varepsilon > 0$ and any rate less than the channel capacity, there is an encoding and decoding scheme such that the probability of block error is less than $\varepsilon$ for a sufficiently large block length $n$. Conversely, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.
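As a concrete illustration (not part of the theorem statement itself), the following sketch computes the capacity of a binary symmetric channel and checks that a chosen rate lies below it; the crossover probability 0.1 and the rate 0.4 are arbitrary assumptions.

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Capacity of a binary symmetric channel with crossover probability p:
# C = 1 - H_b(p). The value p = 0.1 is assumed purely for illustration.
p = 0.1
C = 1 - binary_entropy(p)

R = 0.4  # an example rate below capacity
print(f"capacity C = {C:.4f} bits/use, rate R = {R} -> achievable: {R < C}")
```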
Assume a channel coding setup as follows: the channel can transmit any of $M = 2^{nR}$ messages by transmitting the corresponding codeword (which is of length $n$). Each component of the codebook is drawn i.i.d. according to some probability distribution with probability mass function $Q$. At the decoder, maximum likelihood decoding is performed.
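A minimal sketch of this random-coding setup, assuming a binary symmetric channel and a uniform input distribution $Q$ (both choices, and all numerical parameters, are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n, R = 16, 0.25                # block length and rate (illustrative choices)
M = int(2 ** (n * R))          # number of messages, M = 2^{nR}
p = 0.1                        # BSC crossover probability (assumed)

# Random codebook: each symbol drawn i.i.d. from Q = Bernoulli(1/2).
codebook = rng.integers(0, 2, size=(M, n))

def transmit(codeword):
    """Pass a codeword through the binary symmetric channel."""
    flips = rng.random(n) < p
    return np.bitwise_xor(codeword, flips.astype(int))

def ml_decode(y):
    """Maximum likelihood decoding; for a BSC this is minimum Hamming distance."""
    distances = np.sum(codebook != y, axis=1)
    return int(np.argmin(distances))

# Send the first message and check whether it is decoded correctly.
y = transmit(codebook[0])
print("decoded message index:", ml_decode(y))
```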
Let $X_i^n$ be the $i$th random codeword in the codebook, where $i$ goes from $1$ to $M$. Suppose the first message is selected, so codeword $X_1^n$ is transmitted. Given that $y_1^n$ is received, the probability that the codeword is incorrectly detected as $X_2^n$ is:
The function $1\!\left(p(y_1^n \mid x_2^n) > p(y_1^n \mid x_1^n)\right)$ has upper bound
for $s > 0$. Thus,
Since there are a total of $M$ messages, and the entries in the codebook are i.i.d., the probability that $X_1^n$ is confused with any other message is at most $M$ times the above expression. Using the union bound, the probability of confusing $X_1^n$ with any message is bounded by:
for any $0 < \rho < 1$. Averaging over all combinations of $x_1^n, y_1^n$:
Choosing $s = 1 - s\rho$ and combining the two sums over $x_1^n$ in the above formula:
Using the independence of the codeword elements and the discrete memoryless nature of the channel:
Using the fact that each element of the codeword is identically distributed and thus stationary:
Replacing $M$ by $2^{nR}$ and defining
the probability of error becomes
$Q$ and $\rho$ should be chosen so that the bound is tightest. Thus, the error exponent can be defined as
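A numerical sketch of this optimization, using Gallager's random-coding form $E_r(R) = \max_{0 \le \rho \le 1}\bigl[E_0(\rho, Q) - \rho R\bigr]$ with $E_0(\rho, Q) = -\log_2 \sum_y \bigl(\sum_x Q(x)\, p(y \mid x)^{1/(1+\rho)}\bigr)^{1+\rho}$; the binary symmetric channel, the fixed uniform $Q$ (rather than maximizing over $Q$), and the grid search over $\rho$ are assumptions made purely for illustration:

```python
import numpy as np

def E0(rho: float, Q: np.ndarray, W: np.ndarray) -> float:
    """Gallager function E_0(rho, Q) in bits (base-2 logs), for a fixed input
    distribution Q and channel transition matrix W[x, y] = p(y | x)."""
    inner = (Q[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)  # sum over x
    return -np.log2((inner ** (1.0 + rho)).sum())                # sum over y

def random_coding_exponent(R: float, Q: np.ndarray, W: np.ndarray) -> float:
    """Maximize E_0(rho, Q) - rho * R over rho in [0, 1] by grid search."""
    rhos = np.linspace(0.0, 1.0, 1001)
    return max(E0(rho, Q, W) - rho * R for rho in rhos)

# Binary symmetric channel with crossover 0.1 and uniform Q (illustrative).
p = 0.1
W = np.array([[1 - p, p],
              [p, 1 - p]])
Q = np.array([0.5, 0.5])

print(random_coding_exponent(0.2, Q, W))  # positive for rates below capacity
```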
The source coding theorem states that for any $\varepsilon > 0$ and any discrete-time i.i.d. source $X$, there is a large enough $n$ and an encoder that takes $n$ i.i.d. repetitions of the source, $X^{1:n}$, and maps them to $n(H(X)+\varepsilon)$ binary bits such that the source symbols $X^{1:n}$ are recoverable from the binary bits with probability at least $1 - \varepsilon$.
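As a small numerical check of the bit budget $n(H(X)+\varepsilon)$ in this statement (the Bernoulli(0.3) source, the block length, and $\varepsilon$ below are arbitrary illustrative choices):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Bernoulli(0.3) source; block length and epsilon chosen for illustration.
H = entropy([0.3, 0.7])
n, eps = 1000, 0.05
bits = n * (H + eps)
print(f"H(X) = {H:.4f} bits/symbol; encode {n} symbols into about {bits:.0f} bits")
```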
Let $M = e^{nR}$ be the total number of possible messages. Next, map each of the possible source output sequences to one of the messages randomly, using a uniform distribution and independently of everything else. When a source sequence is generated, the corresponding message $M = m$ is transmitted to the destination. The message is decoded to one of the possible source strings. In order to minimize the probability of error, the decoder decodes to the source sequence $X_1^n$ that maximizes $P(X_1^n \mid A_m)$, where $A_m$ denotes the event that message $m$ was transmitted. This rule is equivalent to finding the source sequence $X_1^n$, among the set of source sequences that map to message $m$, that maximizes $P(X_1^n)$. This reduction follows from the fact that the messages were assigned randomly and independently of everything else.
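A minimal sketch of this random-binning scheme for a small i.i.d. binary source; the source distribution, block length, and number of messages are illustrative assumptions, and $M$ is rounded to a power of 2 for convenience:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

n = 8                     # source block length (illustrative)
M = 2 ** 6                # number of messages (stands in for M = e^{nR})
p = 0.3                   # P(X = 1) for the i.i.d. binary source (assumed)

# Enumerate all source sequences and assign each one a message uniformly at random.
sequences = list(itertools.product([0, 1], repeat=n))
bins = rng.integers(0, M, size=len(sequences))

def prob(seq):
    """Probability of an i.i.d. Bernoulli(p) sequence."""
    ones = sum(seq)
    return p ** ones * (1 - p) ** (n - ones)

def decode(m):
    """Decode message m to the most probable source sequence mapped to it."""
    candidates = [s for s, b in zip(sequences, bins) if b == m]
    return max(candidates, key=prob)

# Encode one source realization and check whether decoding recovers it.
x = tuple(int(rng.random() < p) for _ in range(n))
m = bins[sequences.index(x)]
print("decoded correctly:", decode(m) == x)
```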
Thus, as an example of when an error occurs, suppose that the source sequence $X_1^n(1)$ was mapped to message $1$, as was the source sequence $X_1^n(2)$. If $X_1^n(1)$ was generated at the source, but $P(X_1^n(2)) > P(X_1^n(1))$, then an error occurs.
Let $S_i$ denote the event that the source sequence $X_1^n(i)$ was generated at the source, so that $P(S_i) = P(X_1^n(i))$. Then the probability of error can be broken down as $P(E) = \sum_i P(E \mid S_i) P(S_i)$. Thus, attention can be focused on finding an upper bound for $P(E \mid S_i)$.
Let $A_{i'}$ denote the event that the source sequence $X_1^n(i')$ was mapped to the same message as the source sequence $X_1^n(i)$ and that $P(X_1^n(i')) \geq P(X_1^n(i))$. Thus, letting $X_{i,i'}$ denote the event that the two source sequences $i$ and $i'$ map to the same message, we have that
and, using the fact that $P(X_{i,i'}) = \frac{1}{M}$ and is independent of everything else, we have that
A simple upper bound for the term on the left can be established as
for some arbitrary real number $s > 0$. This upper bound can be verified by noting that $P\!\left(P(X_1^n(i')) > P(X_1^n(i))\right)$ either equals $1$ or $0$, because the probabilities of a given input sequence are completely deterministic. Thus, if $P(X_1^n(i')) \geq P(X_1^n(i))$, then $\frac{P(X_1^n(i'))}{P(X_1^n(i))} \geq 1$, so the inequality holds in that case. The inequality holds in the other case as well because
for all possible source strings. Thus, combining everything and introducing some $\rho \in [0,1]$, we have that
where the inequalities follow from a variation on the union bound. Finally, applying this upper bound to the summation for $P(E)$, we have that:
where the sum can now be taken over all $i'$, because that only increases the bound. This ultimately yields
Now, for simplicity, let $1 - s\rho = s$, so that $s = \frac{1}{1+\rho}$. Substituting this value of $s$ into the above bound on the probability of error, and using the fact that $i'$ is just a dummy variable in the sum, gives the following upper bound on the probability of error:
The term in the exponent should be maximized over $\rho$ in order to obtain the tightest upper bound on the probability of error.
Letting $E_0(\rho) = (1+\rho)\ln\!\left(\sum_{x_i} P(x_i)^{\frac{1}{1+\rho}}\right)$, we see that the error exponent for the source coding case is:
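For reference, a worked form of this exponent consistent with the bound derived above (with $M = e^{nR}$, $s = \frac{1}{1+\rho}$, and the i.i.d. structure of the source), written as a sketch rather than as the article's own display:

```latex
% The bound derived above has the form P(E) <= exp{ -n [ rho R - E_0(rho) ] },
% so the exponent is optimized over rho in [0, 1]:
\[
  E(R) \;=\; \max_{0 \le \rho \le 1} \bigl[\, \rho R - E_0(\rho) \,\bigr],
  \qquad
  E_0(\rho) \;=\; (1+\rho)\,\ln\!\Bigl(\sum_{x_i} P(x_i)^{\frac{1}{1+\rho}}\Bigr).
\]
```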