Binary symmetric channel

A binary symmetric channel (BSCp) is a fundamental model in coding theory and information theory where a transmitted bit may be flipped with a crossover probability p, simulating errors in communication over channels like telephone lines or disk drives. The noisy-channel coding theorem states that reliable transmission is possible at rates up to the channel capacity, defined as 1 minus the binary entropy function of p. Practical codes, such as Forney's code, achieve efficient communication near this theoretical limit, enabling robust data transfer despite noise in the channel.


Definition

A binary symmetric channel with crossover probability $p$, denoted BSC$_p$, is a channel with binary input and binary output and probability of error $p$. That is, if $X$ is the transmitted random variable and $Y$ the received variable, then the channel is characterized by the conditional probabilities:[1]

$$
\begin{aligned}
\Pr[Y=0\mid X=0] &= 1-p \\
\Pr[Y=0\mid X=1] &= p \\
\Pr[Y=1\mid X=0] &= p \\
\Pr[Y=1\mid X=1] &= 1-p
\end{aligned}
$$

It is assumed that $0 \leq p \leq 1/2$. If $p > 1/2$, then the receiver can swap the output (interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel with crossover probability $1 - p \leq 1/2$.
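The channel behaviour is easy to simulate. Below is a minimal sketch in Python, assuming NumPy is available; the function name `bsc` and the example parameters are illustrative, not part of any standard library.

```python
import numpy as np

def bsc(bits, p, rng=None):
    """Pass an array of 0/1 values through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random(len(bits)) < p            # True where the channel flips a bit
    return np.bitwise_xor(np.asarray(bits), flips.astype(int))

# Example: send 10 bits through a BSC with crossover probability 0.1.
x = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y = bsc(x, p=0.1, rng=np.random.default_rng(0))
print("sent     :", x)
print("received :", y)
print("flipped  :", int(np.sum(x != y)), "bit(s)")
```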

Capacity

The channel capacity of the binary symmetric channel, in bits, is:[2]

$$C_{\text{BSC}} = 1 - \operatorname{H}_{\text{b}}(p),$$

where $\operatorname{H}_{\text{b}}(p)$ is the binary entropy function, defined by:[3]

$$\operatorname{H}_{\text{b}}(x) = x \log_2 \frac{1}{x} + (1-x) \log_2 \frac{1}{1-x}$$
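As a concrete check, the following sketch (plain Python, no external dependencies; the function names are illustrative) evaluates the binary entropy function and the resulting capacity for a few crossover probabilities.

```python
from math import log2

def binary_entropy(x):
    """H_b(x) = x*log2(1/x) + (1-x)*log2(1/(1-x)), with H_b(0) = H_b(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return x * log2(1 / x) + (1 - x) * log2(1 / (1 - x))

def bsc_capacity(p):
    """Capacity of BSC_p in bits per channel use: 1 - H_b(p)."""
    return 1 - binary_entropy(p)

for p in (0.0, 0.01, 0.11, 0.5):
    print(f"p = {p:<4}  H_b(p) = {binary_entropy(p):.4f}  capacity = {bsc_capacity(p):.4f}")
```

For example, a crossover probability of $p = 0.11$ gives $\operatorname{H}_{\text{b}}(p) \approx 0.5$, so the capacity is roughly half a bit per channel use; at $p = 0.5$ the capacity drops to zero.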

Noisy-channel coding theorem

Shannon's noisy-channel coding theorem gives a result about the rate of information that can be transmitted through a communication channel with arbitrarily low error. We study the particular case of BSC$_p$.

The noise $e$ that characterizes BSC$_p$ is a random variable consisting of $n$ independent random bits ($n$ is defined below), where each random bit is a $1$ with probability $p$ and a $0$ with probability $1-p$. We indicate this by writing "$e \in \text{BSC}_p$".

Theorem. For all $p < \tfrac{1}{2}$, all $0 < \epsilon < \tfrac{1}{2} - p$, all sufficiently large $n$ (depending on $p$ and $\epsilon$), and all $k \leq \lfloor (1 - H(p+\epsilon))n \rfloor$, there exists a pair of encoding and decoding functions $E : \{0,1\}^k \to \{0,1\}^n$ and $D : \{0,1\}^n \to \{0,1\}^k$ respectively, such that every message $m \in \{0,1\}^k$ has the following property:

$$\Pr_{e \in \text{BSC}_p}[D(E(m)+e) \neq m] \leq 2^{-\delta n}.$$

In effect, the theorem says that when a message is picked from $\{0,1\}^k$, encoded with a random encoding function $E$, and sent across a noisy BSC$_p$, there is a very high probability of recovering the original message by decoding, provided $k$, and hence the rate of the code, is bounded by the quantity stated in the theorem. The decoding error probability is exponentially small in $n$.
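For intuition about the achievable rates, the sketch below (plain Python; the names and parameter values are illustrative) computes the largest message length $k$ permitted by the theorem for a few block lengths.

```python
from math import floor, log2

def binary_entropy(x):
    return 0.0 if x in (0.0, 1.0) else x * log2(1 / x) + (1 - x) * log2(1 / (1 - x))

def max_dimension(n, p, eps):
    """Largest k permitted by the theorem: k <= floor((1 - H(p + eps)) * n)."""
    return floor((1 - binary_entropy(p + eps)) * n)

p, eps = 0.1, 0.05
for n in (100, 1000, 10000):
    k = max_dimension(n, p, eps)
    print(f"n = {n:<6} k <= {k:<6} rate <= {k / n:.3f}  (capacity 1 - H(0.1) = {1 - binary_entropy(0.1):.3f})")
```

As $\epsilon$ shrinks and $n$ grows, the permitted rate $k/n$ approaches the capacity $1 - H(p)$.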

Proof

The theorem can be proved directly with a probabilistic method. Consider an encoding function $E : \{0,1\}^k \to \{0,1\}^n$ that is selected at random. This means that for each message $m \in \{0,1\}^k$, the value $E(m) \in \{0,1\}^n$ is selected at random (with equal probabilities). For a given encoding function $E$, the decoding function $D : \{0,1\}^n \to \{0,1\}^k$ is specified as follows: given any received codeword $y \in \{0,1\}^n$, we find the message $m \in \{0,1\}^k$ such that the Hamming distance $\Delta(y, E(m))$ is as small as possible (with ties broken arbitrarily). ($D$ is called a maximum likelihood decoding function.)
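A minimal simulation sketch of this construction, assuming NumPy; the random codebook and the brute-force minimum-distance decoder below are illustrative and only feasible for very small $k$ and $n$.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, p = 4, 20, 0.05            # toy parameters; rate k/n = 0.2 is well below 1 - H_b(0.05)

# Random encoding function: each of the 2^k messages maps to a uniformly random n-bit codeword.
codebook = rng.integers(0, 2, size=(2 ** k, n))

def decode(y):
    """Minimum Hamming distance decoding (maximum likelihood for p < 1/2)."""
    distances = np.sum(codebook != y, axis=1)
    return int(np.argmin(distances))

trials, errors = 2000, 0
for _ in range(trials):
    m = rng.integers(0, 2 ** k)                      # pick a message
    noise = (rng.random(n) < p).astype(int)          # BSC_p noise pattern
    y = np.bitwise_xor(codebook[m], noise)           # received word
    errors += decode(y) != m
print(f"empirical decoding error rate: {errors / trials:.4f}")
```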

The proof continues by showing that at least one such choice $(E, D)$ satisfies the conclusion of the theorem, by averaging over the probabilities. Suppose $p$ and $\epsilon$ are fixed. First we show that, for a fixed $m \in \{0,1\}^k$ and $E$ chosen randomly, the probability of failure over BSC$_p$ noise is exponentially small in $n$. At this point the proof works only for a fixed message $m$. Next we extend this result to work for all messages $m$. We achieve this by eliminating half of the codewords from the code, with the argument that the proof of the decoding error probability bound holds for at least half of the codewords. The latter method is called expurgation. This gives the total process the name random coding with expurgation.

Converse of Shannon's capacity theorem

The converse of the capacity theorem essentially states that $1 - H(p)$ is the best rate one can achieve over a binary symmetric channel. Formally the theorem states:

Theorem. If $k \geq \lceil (1 - H(p+\epsilon))n \rceil$, then the following is true for every encoding and decoding function $E : \{0,1\}^k \to \{0,1\}^n$ and $D : \{0,1\}^n \to \{0,1\}^k$ respectively: $\Pr_{e \in \text{BSC}_p}[D(E(m)+e) \neq m] \geq \tfrac{1}{2}$.

The intuition behind the proof is that the number of errors grows rapidly as the rate grows beyond the channel capacity. The idea is that the sender generates messages of dimension $k$, while the channel BSC$_p$ introduces transmission errors. The capacity of the channel is $1 - H(p)$, and the number of typical error patterns is about $2^{H(p+\epsilon)n}$ for a code of block length $n$. The maximum number of messages is $2^k$. The output of the channel, on the other hand, has only $2^n$ possible values. Confusion between two messages becomes likely once $2^k 2^{H(p+\epsilon)n} \geq 2^n$. Hence we would have $k \geq \lceil (1 - H(p+\epsilon))n \rceil$, a case we would like to avoid in order to keep the decoding error probability exponentially small.
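This counting argument is easy to check numerically; the sketch below (plain Python; the helper names are illustrative) works in $\log_2$ units to compare $2^k \cdot 2^{H(p+\epsilon)n}$ against the $2^n$ possible channel outputs.

```python
from math import ceil, log2

def binary_entropy(x):
    return 0.0 if x in (0.0, 1.0) else x * log2(1 / x) + (1 - x) * log2(1 / (1 - x))

n, p, eps = 1000, 0.1, 0.01
threshold = ceil((1 - binary_entropy(p + eps)) * n)   # dimension at which the converse applies

for k in (threshold - 50, threshold, threshold + 50):
    lhs = k + binary_entropy(p + eps) * n             # log2 of (messages x typical error patterns)
    verdict = "overlap likely" if lhs >= n else "no forced overlap"
    print(f"k = {k:<4}  2^{lhs:.1f} vs 2^{n}  -> {verdict}")
```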

Codes

A great deal of work has been done, and continues to be done, on designing explicit error-correcting codes that achieve the capacities of several standard communication channels. The motivation behind designing such codes is to relate the rate of the code to the fraction of errors which it can correct.

The approach behind the design of codes which meet the channel capacity of the BSC or the binary erasure channel (BEC) has been to correct a smaller number of errors with high probability while achieving the highest possible rate. Shannon's theorem gives us the best rate which could be achieved over a BSC$_p$, but it does not give us any explicit codes which achieve that rate. In fact, such codes are typically constructed to correct only a small fraction of errors with high probability, but achieve a very good rate. The first such code was due to George D. Forney in 1966. The code is a concatenated code obtained by composing two different kinds of codes.

Forney's code

Forney constructed a concatenated code $C^* = C_{\text{out}} \circ C_{\text{in}}$ to achieve the capacity of the noisy-channel coding theorem for BSC$_p$. In his code,

  • The outer code $C_{\text{out}}$ is a code of block length $N$ and rate $1 - \frac{\epsilon}{2}$ over the field $F_{2^k}$, with $k = O(\log N)$. Additionally, we have a decoding algorithm $D_{\text{out}}$ for $C_{\text{out}}$ which can correct up to a $\gamma$ fraction of worst-case errors and runs in $t_{\text{out}}(N)$ time.
  • The inner code $C_{\text{in}}$ is a code of block length $n$, dimension $k$, and rate $1 - H(p) - \frac{\epsilon}{2}$. Additionally, we have a decoding algorithm $D_{\text{in}}$ for $C_{\text{in}}$ with a decoding error probability of at most $\frac{\gamma}{2}$ over BSC$_p$ which runs in $t_{\text{in}}(N)$ time.

For the outer code $C_{\text{out}}$, a Reed–Solomon code would have been the first code to come to mind. However, we will see that the construction of such a code cannot be done in polynomial time. This is why a binary linear code is used for $C_{\text{out}}$.

For the inner code $C_{\text{in}}$, we find a linear code by exhaustively searching over linear codes of block length $n$ and dimension $k$ whose rate meets the capacity of BSC$_p$; such a code exists by the noisy-channel coding theorem.

The rate $R(C^*) = R(C_{\text{in}}) \times R(C_{\text{out}}) = (1 - \frac{\epsilon}{2})(1 - H(p) - \frac{\epsilon}{2}) \geq 1 - H(p) - \epsilon$, which almost meets the BSC$_p$ capacity. We further note that the encoding and decoding of $C^*$ can be done in polynomial time with respect to $N$. As a matter of fact, encoding $C^*$ takes time $O(N^2) + O(Nk^2) = O(N^2)$. Further, the decoding algorithm described below takes time $N t_{\text{in}}(k) + t_{\text{out}}(N) = N^{O(1)}$ as long as $t_{\text{out}}(N) = N^{O(1)}$ and $t_{\text{in}}(k) = 2^{O(k)}$.
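A quick numeric illustration of this rate calculation (plain Python; the parameter values are arbitrary examples, not Forney's own choices):

```python
from math import log2

def binary_entropy(x):
    return 0.0 if x in (0.0, 1.0) else x * log2(1 / x) + (1 - x) * log2(1 / (1 - x))

p, eps = 0.1, 0.05
r_out = 1 - eps / 2                        # rate of the outer code
r_in = 1 - binary_entropy(p) - eps / 2     # rate of the inner code
r_star = r_out * r_in                      # rate of the concatenated code C*
print(f"R(C*) = {r_star:.4f} >= 1 - H(p) - eps = {1 - binary_entropy(p) - eps:.4f}")
```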

Decoding error probability

A natural decoding algorithm for $C^*$ (sketched in code below) is to:

  • Compute $y_i' = D_{\text{in}}(y_i)$ for every $i \in (0, N)$
  • Execute $D_{\text{out}}$ on $y' = (y_1' \ldots y_N')$
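The following is a structural sketch of that two-stage decoder, assuming hypothetical `decode_inner` and `decode_outer` functions are supplied; it shows only the data flow, not a concrete code construction.

```python
def decode_concatenated(y_blocks, decode_inner, decode_outer):
    """y_blocks: the N received inner-code blocks, each a length-n bit vector."""
    # Step 1: decode every inner block to a symbol of the outer code.
    y_prime = [decode_inner(y_i) for y_i in y_blocks]
    # Step 2: run the outer decoder on the resulting word of N outer symbols.
    return decode_outer(y_prime)
```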

Note that each block of code for $C_{\text{in}}$ is considered a symbol for $C_{\text{out}}$. Now, since the probability of error at any index $i$ for $D_{\text{in}}$ is at most $\tfrac{\gamma}{2}$ and the errors in BSC$_p$ are independent, the expected number of errors for $D_{\text{in}}$ is at most $\tfrac{\gamma N}{2}$ by linearity of expectation. Applying a Chernoff bound, the probability that more than $\gamma N$ errors occur is at most $e^{-\frac{\gamma N}{6}}$. Since the outer code $C_{\text{out}}$ can correct up to $\gamma N$ errors, this is the decoding error probability of $C^*$. Expressed in asymptotic terms, this gives an error probability of $2^{-\Omega(\gamma N)}$. Thus the achieved decoding error probability of $C^*$ is exponentially small, as in the noisy-channel coding theorem.
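As a rough sanity check of this bound (plain Python; the parameter values are arbitrary), the sketch below compares $e^{-\gamma N / 6}$ with a Monte Carlo estimate of the probability that more than $\gamma N$ of $N$ independent inner decodings fail when each fails with probability $\gamma/2$.

```python
import math
import random

random.seed(0)
N, gamma = 50, 0.2
bound = math.exp(-gamma * N / 6)

trials, exceed = 20000, 0
for _ in range(trials):
    failures = sum(random.random() < gamma / 2 for _ in range(N))   # inner decoding failures
    exceed += failures > gamma * N
print(f"Chernoff-style bound e^(-gamma*N/6)      = {bound:.3e}")
print(f"Monte Carlo estimate of P[failures > gN] = {exceed / trials:.3e}")
```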

We have given a general technique to construct $C^*$. For more detailed descriptions of $C_{\text{in}}$ and $C_{\text{out}}$, please read the references below. Recently a few other codes have also been constructed to achieve the capacity; LDPC codes have been considered for this purpose because of their faster decoding time.[4]

Applications

The binary symmetric channel can model a disk drive used for memory storage: the channel input represents a bit being written to the disk and the output corresponds to the bit later being read. Errors could arise from the magnetization flipping, background noise, or the writing head making a mistake. Other settings that the binary symmetric channel can model include a telephone or radio communication line, or cell division, in which the daughter cells contain DNA information copied from their parent cell.[5]

This channel is often used by theorists because it is one of the simplest noisy channels to analyze. Many problems in communication theory can be reduced to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels.


Notes

  • Cover, Thomas M.; Thomas, Joy A. (1991). Elements of Information Theory. Hoboken, New Jersey: Wiley. ISBN 978-0-471-24195-9.
  • Forney, G. David (1966). Concatenated Codes. Cambridge, MA: MIT Press.
  • Venkat Guruswami's course on Error-Correcting Codes: Constructions and Algorithms, Autumn 2006.
  • MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. ISBN 0-521-64298-1. http://www.inference.phy.cam.ac.uk/mackay/itila/book.html
  • Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Fall 2007), Lectures 9, 10, 29, and 30.
  • Madhu Sudan's course on Algorithmic Introduction to Coding Theory (Fall 2001), Lecture 1 and 2.
  • Shannon, C. E. "A Mathematical Theory of Communication". ACM SIGMOBILE Mobile Computing and Communications Review.
  • Richardson, Tom; Urbanke, Rüdiger. Modern Coding Theory. Cambridge University Press.

References

  1. MacKay (2003), p. 4.

  2. MacKay (2003), p. 15.

  3. MacKay (2003), p. 15.

  4. Richardson and Urbanke, Modern Coding Theory.

  5. MacKay (2003), pp. 3–4.