In Learning Parity with Noise (LPN), the samples may contain some error. Instead of samples $(x, f(x))$, the algorithm is provided with $(x, y)$, where the boolean $b \in \{0,1\}$ is drawn independently for each sample (in the standard formulation, $b = 0$ only with some small noise probability $\eta < 1/2$, so each label is flipped independently with probability $\eta$) and

$$y = \begin{cases} f(x), & \text{if } b = 1 \\ 1 - f(x), & \text{otherwise.} \end{cases}$$
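To make the sampling process concrete, the following is a minimal sketch of a noisy-parity sample oracle in Python. The function name `lpn_sample` and the noise-rate parameter `eta` are illustrative assumptions, not part of the original text, which does not fix a specific distribution for $b$.

```python
import secrets
import random

def lpn_sample(secret_bits, eta=0.1):
    """Return one noisy sample (x, y) for the parity function defined by secret_bits.

    f(x) = <secret_bits, x> mod 2; the label is flipped with probability eta.
    (Sketch under assumed parameter names; eta is the assumed noise rate.)
    """
    n = len(secret_bits)
    x = [secrets.randbelow(2) for _ in range(n)]            # uniform random input vector
    fx = sum(s & xi for s, xi in zip(secret_bits, x)) % 2   # parity f(x) = sum_i s_i * x_i mod 2
    b = 0 if random.random() < eta else 1                   # b = 0 (label gets flipped) with probability eta
    y = fx if b == 1 else 1 - fx
    return x, y

# Usage: draw a few noisy samples for a hidden 8-bit secret.
secret = [secrets.randbelow(2) for _ in range(8)]
samples = [lpn_sample(secret, eta=0.1) for _ in range(5)]
print(samples)
```

A learner is given many such pairs and must recover the hidden parity function despite the flipped labels.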
The noisy version of the parity learning problem is conjectured to be hard[1] and is widely used in cryptography.[2]
1. Wasserman, Hal; Kalai, Adam; Blum, Avrim (2000-10-15). "Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model". arXiv:cs/0010022.
2. Pietrzak, Krzysztof (2012). "Cryptography from Learning Parity with Noise" (PDF). SOFSEM 2012: Theory and Practice of Computer Science. Lecture Notes in Computer Science. Vol. 7147. pp. 99–114. doi:10.1007/978-3-642-27660-6_9. ISBN 978-3-642-27659-0.