Most cryptographic primitives require randomness (for example, to generate their secret keys). Usually, one assumes that perfect randomness is available, but, conceivably, such primitives might be built under weaker, more realistic assumptions. This is known to be true for many authentication applications, where entropy alone is typically sufficient. In contrast, all known techniques for achieving privacy seem to fundamentally require (nearly) perfect randomness. We ask whether this is just a coincidence or whether privacy inherently requires true randomness.
We completely resolve this question for the case of (information-theoretic) private-key encryption, where parties wish to encrypt a b-bit value using a shared secret key sampled from some imperfect source of randomness S. Our main result shows that if such an n-bit source S allows for secure encryption of b > log n bits, then one can deterministically extract almost b nearly perfect random bits from S. Moreover, the restriction b > log n is nearly tight: there exist sources S that allow one to perfectly encrypt (log n - log log n) bits, but from which one cannot deterministically extract even a single slightly unbiased bit.
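To see why deterministic extraction can fail even from a high-entropy source, a standard counting argument (an illustration, not the paper's construction) helps: for any fixed function f mapping n-bit strings to one bit, one of the two preimages f^{-1}(0), f^{-1}(1) contains at least half of all strings, and the flat source on that preimage has min-entropy n - 1 yet makes f's output constant, i.e. completely biased. The sketch below verifies this exhaustively for n = 3:

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))  # all 2^n strings

# Enumerate every function f: {0,1}^n -> {0,1} as a truth table and check
# that some flat source of min-entropy n-1 makes its output constant.
for table in product([0, 1], repeat=len(inputs)):
    f = dict(zip(inputs, table))
    preimage_0 = [x for x in inputs if f[x] == 0]
    preimage_1 = [x for x in inputs if f[x] == 1]
    # The larger preimage defines a flat source with min-entropy >= n-1
    # on which f outputs a constant (hence maximally biased) bit.
    assert max(len(preimage_0), len(preimage_1)) >= 2 ** (n - 1)

print("checked all f on {0,1}^%d" % n)
```

This only shows that no single function extracts from *all* high-min-entropy sources; the paper's negative result is stronger, exhibiting specific sources that support perfect encryption of (log n - log log n) bits while defeating every deterministic extractor.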
Hence, to a large extent, true randomness is inherent to encryption: either the key length must be exponential in the message length b, or one can deterministically extract nearly b almost unbiased random bits. In particular, the one-time pad scheme is essentially "universal".
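For reference, the one-time pad mentioned above is the classical scheme achieving perfect (information-theoretic) secrecy: XOR the message with a uniformly random key of the same length, used only once. A minimal sketch (illustrative only, not the paper's construction):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Perfect secrecy requires the key to be uniform, as long as the
    # message, and never reused.
    assert len(key) == len(message), "key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # uniform one-time key
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

The "universality" point of the abstract is that whenever an imperfect source supports encryption of more than log n bits, one can first deterministically extract nearly uniform bits from it and then fall back on exactly this scheme.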
Joint work with Carl Bosley. The paper is available at http://eprint.iacr.org/2006/283