Z-channel (information theory)
Communications channel used in coding theory and information theory

In coding theory and information theory, a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems.

Definition

A Z-channel is a channel with binary input and binary output, where each 0 bit is transmitted correctly, but each 1 bit has probability p of being transmitted incorrectly as a 0, and probability 1 − p of being transmitted correctly as a 1. In other words, if X and Y are the random variables representing the input and the output of the channel, respectively, then the crossovers of the channel are characterized by the conditional probabilities:[1]

\[
\begin{aligned}
\operatorname{Pr}[Y=0\mid X=0] &= 1\\
\operatorname{Pr}[Y=0\mid X=1] &= p\\
\operatorname{Pr}[Y=1\mid X=0] &= 0\\
\operatorname{Pr}[Y=1\mid X=1] &= 1-p
\end{aligned}
\]
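
As an illustration (not taken from the cited sources), a minimal Python sketch of these transition probabilities might look as follows; the function name z_channel and the example values are assumptions made for the sketch.

    import random

    def z_channel(bits, p):
        # A 0 is always received as a 0; a 1 is received as a 0 with probability p
        # and as a 1 with probability 1 - p.
        return [0 if (b == 1 and random.random() < p) else b for b in bits]

    # Example: transmit ten bits with crossover probability p = 0.2.
    sent = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    received = z_channel(sent, 0.2)  # 1s may turn into 0s, but never the reverse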

Capacity

The channel capacity $\mathsf{cap}(\mathbb{Z})$ of the Z-channel $\mathbb{Z}$ with the crossover (1 → 0) probability p is given by the following equation:

\[
\mathsf{cap}(\mathbb{Z}) = \mathsf{H}\!\left(\frac{1}{1+2^{\mathsf{s}(p)}}\right) - \frac{\mathsf{s}(p)}{1+2^{\mathsf{s}(p)}} = \log_2\!\left(1+2^{-\mathsf{s}(p)}\right) = \log_2\!\left(1+(1-p)\,p^{p/(1-p)}\right)
\]

where $\mathsf{s}(p)=\frac{\mathsf{H}(p)}{1-p}$ and $\mathsf{H}(\cdot)$ is the binary entropy function.
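
As a numerical check (an illustration, not part of the cited references), the three expressions above can be evaluated and compared in Python; the helper names H, s and cap_z below are assumptions of the sketch.

    from math import log2

    def H(q):
        # Binary entropy function, in bits.
        return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

    def s(p):
        return H(p) / (1 - p)

    def cap_z(p):
        # Capacity of the Z-channel with 1 -> 0 crossover probability p.
        return log2(1 + 2 ** (-s(p)))

    p = 0.1
    first = H(1 / (1 + 2 ** s(p))) - s(p) / (1 + 2 ** s(p))
    last = log2(1 + (1 - p) * p ** (p / (1 - p)))
    # first, cap_z(p) and last all evaluate to the same value, about 0.7629 bits.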

This capacity is obtained when the input variable X has a Bernoulli distribution with probability $\alpha$ of having value 0 and $1-\alpha$ of having value 1, where:

\[
\alpha = 1 - \frac{1}{(1-p)\left(1+2^{\mathsf{H}(p)/(1-p)}\right)}.
\]
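
Continuing the sketch above (reusing H and cap_z), one can verify numerically that this choice of $\alpha$ attains the capacity; the names alpha_star and mutual_information below are assumptions of the sketch.

    def alpha_star(p):
        # Capacity-achieving probability of sending a 0.
        return 1 - 1 / ((1 - p) * (1 + 2 ** (H(p) / (1 - p))))

    def mutual_information(alpha, p):
        # I(X;Y) for the Z-channel when Pr[X = 0] = alpha:
        # H(Y) - H(Y|X), with Pr[Y = 1] = (1 - alpha)(1 - p).
        q = (1 - alpha) * (1 - p)
        return H(q) - (1 - alpha) * H(p)

    p = 0.1
    a = alpha_star(p)                 # about 0.5437
    # mutual_information(a, p) agrees with cap_z(p) (about 0.7629 bits).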

For small p, the capacity is approximated by

\[
\mathsf{cap}(\mathbb{Z}) \approx 1 - 0.5\,\mathsf{H}(p)
\]

as compared to the capacity $1-\mathsf{H}(p)$ of the binary symmetric channel with crossover probability p.

For any $p>0$, $\alpha>0.5$ (i.e., more 0s should be transmitted than 1s), because transmitting a 1 introduces noise. As $p\rightarrow 1$, the limiting value of $\alpha$ is $1-\frac{1}{e}$.[2]
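
Both limiting statements can be checked with the same sketch (reusing H, cap_z and alpha_star from above); the specific values of p below are arbitrary choices for the illustration.

    from math import e

    # Small p: the Z-channel capacity is close to 1 - 0.5 H(p),
    # while the binary symmetric channel capacity is 1 - H(p).
    p = 0.01
    approx = 1 - 0.5 * H(p)           # about 0.9596
    exact = cap_z(p)                  # about 0.9598
    bsc = 1 - H(p)                    # about 0.9192

    # p close to 1: alpha approaches 1 - 1/e.
    limit = 1 - 1 / e                 # about 0.6321
    near_one = alpha_star(0.999)      # about 0.6321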

Bounds on the size of an asymmetric-error-correcting code

Define the following distance function $\mathsf{d}_A(\mathbf{x},\mathbf{y})$ on words $\mathbf{x},\mathbf{y}\in\{0,1\}^n$ of length n transmitted via a Z-channel:

\[
\mathsf{d}_A(\mathbf{x},\mathbf{y}) \stackrel{\vartriangle}{=} \max\left\{\bigl|\{i \mid x_i=0,\, y_i=1\}\bigr|,\ \bigl|\{i \mid x_i=1,\, y_i=0\}\bigr|\right\}.
\]

Define the sphere $V_t(\mathbf{x})$ of radius t around a word $\mathbf{x}\in\{0,1\}^n$ of length n as the set of all words at distance t or less from $\mathbf{x}$; in other words,

\[
V_t(\mathbf{x}) = \{\mathbf{y}\in\{0,1\}^n \mid \mathsf{d}_A(\mathbf{x},\mathbf{y})\leq t\}.
\]

A code $\mathcal{C}$ of length n is said to be t-asymmetric-error-correcting if for any two codewords $\mathbf{c}\neq\mathbf{c}'\in\{0,1\}^n$, one has $V_t(\mathbf{c})\cap V_t(\mathbf{c}')=\emptyset$. Denote by $M(n,t)$ the maximum number of codewords in a t-asymmetric-error-correcting code of length n.
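
A brute-force illustration of these definitions (an example, not drawn from the cited references) is sketched below; it enumerates spheres exhaustively and is only practical for small n.

    from itertools import product

    def d_A(x, y):
        # Asymmetric distance: the larger of the number of 0 -> 1 and 1 -> 0 positions.
        n01 = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
        n10 = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
        return max(n01, n10)

    def sphere(x, t):
        # All binary words of the same length within asymmetric distance t of x.
        return {y for y in product((0, 1), repeat=len(x)) if d_A(x, y) <= t}

    # The radius-1 spheres around these two words of length 4 are disjoint,
    # so the code {c1, c2} is 1-asymmetric-error-correcting.
    c1, c2 = (0, 0, 0, 0), (1, 1, 1, 1)
    assert sphere(c1, 1).isdisjoint(sphere(c2, 1))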

The Varshamov bound. For n≥1 and t≥1,

\[
M(n,t) \leq \frac{2^{n+1}}{\sum_{j=0}^{t}\left(\binom{\lfloor n/2\rfloor}{j}+\binom{\lceil n/2\rceil}{j}\right)}.
\]
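
The right-hand side is straightforward to evaluate numerically; the following sketch is an illustration, with the function name varshamov_bound assumed.

    from math import comb

    def varshamov_bound(n, t):
        # Right-hand side of the Varshamov bound: an upper bound on M(n, t).
        denom = sum(comb(n // 2, j) + comb((n + 1) // 2, j) for j in range(t + 1))
        return 2 ** (n + 1) / denom

    # For example, varshamov_bound(8, 1) = 512 / 10 = 51.2, so a
    # 1-asymmetric-error-correcting code of length 8 has at most 51 codewords.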

The constant-weight code bound. For n > 2t ≥ 2, let the sequence $B_0, B_1, \dots, B_{n-2t-1}$ be defined as

\[
B_0 = 2, \qquad B_i = \min_{0\leq j<i}\bigl\{B_j + A(n+t+i-j-1,\,2t+2,\,t+i)\bigr\} \quad \text{for } i>0,
\]
where $A(n,d,w)$ denotes the maximum number of codewords in a binary code of length n, minimum Hamming distance d, and constant weight w.

Then $M(n,t) \leq B_{n-2t-1}$.
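
Assuming values of A(n, d, w) are available (for instance from published tables of constant-weight codes), the recursion can be evaluated directly; the sketch below is an illustration in which the caller must supply A.

    def constant_weight_bound(n, t, A):
        # Evaluates B_0, ..., B_{n-2t-1} from the recursion above and returns the last term.
        # A(n, d, w) is supplied by the caller; it is not computed here.
        B = [2]
        for i in range(1, n - 2 * t):
            B.append(min(B[j] + A(n + t + i - j - 1, 2 * t + 2, t + i) for j in range(i)))
        return B[n - 2 * t - 1]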

Notes

  • MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. ISBN 0-521-64298-1.
  • Kløve, T. (1981). "Error correcting codes for the asymmetric channel". Technical Report 18–09–07–81. Norway: Department of Informatics, University of Bergen.
  • Verdú, S. (1997). "Channel Capacity (73.5)". The electrical engineering handbook (second ed.). IEEE Press and CRC Press. pp. 1671–1678.
  • Tallini, L.G.; Al-Bassam, S.; Bose, B. (2002). "On the capacity and codes for the Z-channel". Proceedings of the IEEE International Symposium on Information Theory. Lausanne, Switzerland. p. 422.

References

  1. MacKay (2003), p. 148. http://www.inference.phy.cam.ac.uk/mackay/itila/book.html

  2. MacKay (2003), p. 159. http://www.inference.phy.cam.ac.uk/mackay/itila/book.html