Leftover hash lemma
Lemma in cryptography

The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby.

Given a secret key X consisting of n uniform random bits, of which an adversary has learned the values of some t < n bits, the leftover hash lemma states that it is possible to produce a key of about n − t bits over which the adversary has almost no knowledge, without knowing which t bits are known to the adversary. Since the adversary knows all but n − t bits, this is almost optimal.

More precisely, the leftover hash lemma states that it is possible to extract a length asymptotic to $H_\infty(X)$ (the min-entropy of X) bits from a random variable X that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. This is also known as privacy amplification (see the privacy amplification section of the article on quantum key distribution).

Randomness extractors achieve the same result, but normally use less randomness.

Statement

Let X be a random variable over $\mathcal{X}$ and let $m > 0$. Let $h \colon \mathcal{S} \times \mathcal{X} \rightarrow \{0,1\}^m$ be a 2-universal hash function. If

$$m \leq H_\infty(X) - 2\log\left(\frac{1}{\varepsilon}\right),$$

then for S uniform over $\mathcal{S}$ and independent of X, we have

$$\delta\left[(h(S,X),S),\,(U,S)\right] \leq \varepsilon,$$

where U is uniform over $\{0,1\}^m$ and independent of S.
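
As an illustration (a sketch, not taken from the source), one concrete 2-universal family is the set of random linear maps over GF(2): the seed S is a uniformly random m × n binary matrix and h(S, X) = SX. The parameter values below are assumptions chosen for the example, with the leak of t bits assumed to leave $H_\infty(X) \geq n - t$, as in the informal discussion above.

```python
import math
import secrets

def universal_hash(seed, x):
    # h(S, X) = S·X over GF(2): each output bit is the parity of a
    # random subset of the input bits. Random GF(2) linear maps form
    # a 2-universal family.
    return [sum(s & b for s, b in zip(row, x)) % 2 for row in seed]

n, t = 256, 64                   # key length; bits leaked to the adversary (assumed)
eps = 2.0 ** -32                 # target statistical distance (assumed)
h_min = n - t                    # min-entropy assumed to remain after the leak
m = h_min - math.ceil(2 * math.log2(1 / eps))  # m <= H_inf(X) - 2 log(1/eps)

x = [secrets.randbits(1) for _ in range(n)]                         # secret key X
seed = [[secrets.randbits(1) for _ in range(n)] for _ in range(m)]  # public seed S
key = universal_hash(seed, x)    # 128 bits, eps-close to uniform even given S
```

With these numbers the lemma guarantees that the 128 extracted bits are within statistical distance 2⁻³² of uniform from the adversary's point of view, even though the seed S is public.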

Here $H_\infty(X) = -\log \max_x \Pr[X = x]$ is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that $\max_x \Pr[X = x]$ is the probability of correctly guessing X (the best guess is the most probable value). Therefore, the min-entropy measures how difficult it is to guess X.
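
A minimal sketch of this definition; the distribution below is an invented example:

```python
import math

def min_entropy(dist):
    # H_inf(X) = -log2(max_x Pr[X = x])
    return -math.log2(max(dist.values()))

# A biased two-bit source: the most probable value dominates the guess.
dist = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
print(min_entropy(dist))  # 1.0 bit, versus a Shannon entropy of 1.75 bits
```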

$0 \leq \delta(X,Y) = \frac{1}{2}\sum_v \left|\Pr[X = v] - \Pr[Y = v]\right| \leq 1$ is the statistical distance between X and Y.
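
For concreteness, the distance can be computed directly; the two distributions here are made-up examples:

```python
def statistical_distance(p, q):
    # delta(X, Y) = (1/2) * sum over v of |Pr[X = v] - Pr[Y = v]|
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

uniform = {"0": 0.5, "1": 0.5}
biased = {"0": 0.75, "1": 0.25}
print(statistical_distance(uniform, biased))  # 0.25
```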


References

  1. Impagliazzo, Russell; Levin, Leonid A.; Luby, Michael (1989), "Pseudo-random generation from one-way functions", in Johnson, David S. (ed.), Proceedings of the 21st Annual ACM Symposium on Theory of Computing, May 14–17, 1989, Seattle, Washington, USA, ACM, pp. 12–24, doi:10.1145/73007.73009, S2CID 18587852

  2. Rubinfeld, Ronitt; Drucker, Andy (April 30, 2008), "Lecture 22: The Leftover Hash Lemma and Explicit Extractions" (PDF), lecture notes for MIT course 6.842, Randomness and Computation, MIT, retrieved 2019-02-19