Quantum Cryptography and Privacy Amplification

By Sharon Goldwater


Contents:

Introduction
Quantum Key Distribution
Reconciliation
Privacy Amplification
Practicality
References

Introduction

Traditional cryptosystems fall into three categories: public-key systems, private-key systems where the key is shorter than the message, and one-time pad systems. If our friends Alice and Bob wish to send encrypted messages to each other, they face a dilemma. They like the convenience of public-key cryptosystems, but they aren't happy that these systems are breakable given enough computing time. They might not mind meeting to exchange a short secret key, but they are afraid that someone might be able to break their system using differential cryptanalysis. And if they use a one-time pad, they will have to meet, perhaps over and over, to exchange very long keys. Even then they still can't be sure that on the way to the meeting, a third party hasn't somehow managed to sneak a look at the key.

Unlike traditional cryptosystems, quantum cryptography offers the promise of unconditional security without face-to-face exchanges. Rather than relying on problems believed to be computationally "difficult," quantum cryptography uses basic physical laws to provide provable unconditional security. And unlike the key to a one-time pad, it is impossible for anyone to eavesdrop on a quantum key exchange and copy the key without being detected.

The foundation of quantum cryptography lies in the Heisenberg uncertainty principle, which states that certain pairs of physical properties are related in such a way that measuring one prevents the observer from simultaneously knowing the value of the other. In particular, when measuring the polarization of a photon, the choice of measurement direction affects all subsequent measurements. For instance, suppose you measure the polarization of a photon using a vertical filter. Classically, you would assume that if the photon passes through, it is vertically polarized, and therefore that it could not pass through a second filter oriented at some angle t to the vertical. Quantum mechanics, however, says that the photon will pass through the second filter with probability cos^2(t). As t increases, this probability decreases until it reaches 0 at t = 90° (i.e. the second filter is horizontal). When t = 45°, the chance of the photon passing through the second filter is precisely 1/2.
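The cos^2(t) dependence (Malus's law applied to single photons) can be checked with a short simulation; the function name below is illustrative, not from any particular library:

```python
import math
import random

def passes_second_filter(t_degrees, trials=100_000, seed=0):
    """Estimate the probability that a photon which passed a vertical
    filter also passes a second filter at angle t to the vertical.
    Quantum mechanics gives this probability as cos^2(t)."""
    rng = random.Random(seed)
    p = math.cos(math.radians(t_degrees)) ** 2
    hits = sum(1 for _ in range(trials) if rng.random() < p)
    return hits / trials

# cos^2(0°) = 1, cos^2(45°) = 1/2, cos^2(90°) = 0
for angle in (0, 45, 90):
    print(angle, passes_second_filter(angle))
```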

In measuring polarization of photons, we refer to a pair of orthogonal polarization states, such as horizontal/vertical, as a basis. A pair of bases is said to be conjugate if the measurement of the polarization in the first basis completely randomizes the measurement in the second basis [1], as in the example above with t = 45°. Note that if someone else gives the photon an initial polarization (either horizontal or vertical, but you don't know which) and you use a filter in the 45°/135° basis to measure the photon, you cannot determine any information about the initial polarization of the photon.



Quantum Key Distribution

The first published paper to describe a cryptographic protocol using these ideas was written in 1984 by Charles Bennett and Gilles Brassard. In it, Bennett and Brassard described an unconditionally secure quantum key distribution system. Following is a description of the BB84 system [4]. First, we equip Alice and Bob with two polarizers each, one in the 0°/90° (+) basis and one in the 45°/135° (X) basis. We assume a quantum channel between Alice and Bob over which Alice can send photons, and a public channel over which Alice and Bob can discuss results. We allow the eavesdropper Eve to have unlimited computing power and access to both these channels, though she cannot alter messages on the public channel (see below for discussion of this). Now, Alice begins to send photons to Bob, each one polarized at random in one of the four directions: 0°, 45°, 90°, or 135°. As Bob receives each photon, he chooses one of his polarizers at random to measure it with. Since he does not know which direction Alice chose for her polarizer, his choice may or may not match hers. If it does match, Bob will measure the same polarization as Alice. But if it doesn't match, Bob's measurement will be completely random. For instance, if Alice sends a photon | and Bob measures with +, he will correctly receive |. But if he measures with X, he will see (with equal probability) either \ or /, neither of which is what Alice actually sent.

To eliminate the false measurements from the sequence, Alice and Bob begin a public discussion after the entire sequence of photons has been sent. Bob tells Alice which basis he used to measure each photon, and Alice tells him whether or not it was the correct one. Neither Alice nor Bob ever announces the actual measurements, only the bases in which they were made. They discard all data for which their polarizers didn't match, leaving (in theory) two perfectly matching strings. They can then convert these into bit strings by agreeing which photon directions should be 0 and which should be 1.

Here is an example of the protocol in use, using | = \ = 1 and - = / = 0:

Alice sends with:    + X + + X   X + + X X   + + X
Alice sends to Bob:  | / | - /   \ | - \ \   - | /
Bob measures with:   + X X + +   X + X X +   X + X
Bob's results:       | / / - |   \ | \ \ -   \ | /
Valid data:          | /   -     \ |   \       | /
Translated to key:   1 0   0     1 1   1       1 0
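The transmission and sifting steps above can be sketched in code. This is a simplified, noiseless simulation with no eavesdropper, and all names are illustrative:

```python
import random

def bb84_sift(n, seed=0):
    """Simulate noiseless BB84 with no eavesdropper: Alice sends n random
    bits in randomly chosen bases ('+' rectilinear, 'x' diagonal), Bob
    measures each photon in a random basis, and both keep only the
    positions where their bases happened to agree."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    bob_bits = [bit if ab == bb else rng.randint(0, 1)   # wrong basis -> random result
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Public discussion: compare bases (never bit values) and discard mismatches.
    key_a = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
    return key_a, key_b

key_a, key_b = bb84_sift(1000)
assert key_a == key_b     # without noise or Eve, the sifted keys agree
```

On average half the bases match, so the sifted key is about half the length of the transmitted sequence.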

So far, all we have shown is that Alice and Bob can arrive at a shared key without publicly announcing any of its bits. But suppose Eve tries to gain information about the key by intercepting the photons as they travel from Alice to Bob, measuring their polarization, and then resending them. What happens now? Since Eve, like Bob, has no idea which basis Alice uses to transmit each photon, she too must choose bases at random for her measurements. If she chooses the correct basis, and then sends Bob a photon matching the one she measures, all is well. But suppose she chooses the wrong basis. She will then see a photon in one of the two directions she is measuring, and send it to Bob. If Bob's basis matches Alice's (and thus differs from Eve's), he is equally likely to measure either direction for the photon. But if Eve had not interfered, he would have been guaranteed the same measurement as Alice! In fact, in this intercept/resend scenario, Eve will corrupt at least 25% of the bits for which Bob's and Alice's bases coincide [1]. So if Alice and Bob publicly compare and discard some of the bits in their key and find no discrepancies, they can conclude that Eve has learned little or nothing about the key. Alternatively, Alice and Bob can agree publicly on a sequence of random subsets of their bits and compare the parities of each. If their strings differ, the parities of a random subset will disagree with probability 1/2, so by performing 20 parity checks, Alice and Bob can reduce the probability of an undetected eavesdropper to less than one in a million.
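A small simulation confirms the 25% figure for intercept/resend and the one-in-a-million bound for 20 parity checks (a sketch; names are illustrative):

```python
import random

def intercept_resend_error_rate(n=200_000, seed=1):
    """Estimate the error rate Eve's intercept/resend attack induces in
    the sifted key (positions where Alice's and Bob's bases agree).
    Theory predicts 25%: Eve picks the wrong basis half the time, and a
    wrong-basis resend randomizes Bob's matching-basis measurement."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        alice_bit   = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)
        eve_basis   = rng.randint(0, 1)
        # Eve's measurement: right basis preserves the bit, wrong basis randomizes it.
        eve_bit = alice_bit if eve_basis == alice_basis else rng.randint(0, 1)
        bob_basis = rng.randint(0, 1)
        if bob_basis != alice_basis:
            continue                      # discarded during sifting
        # Bob measures the photon Eve resent, in Alice's (= his own) basis.
        bob_bit = eve_bit if eve_basis == bob_basis else rng.randint(0, 1)
        kept += 1
        errors += (bob_bit != alice_bit)
    return errors / kept

rate = intercept_resend_error_rate()      # approximately 0.25
# After 20 error-free parity checks, an eavesdropper has escaped detection
# with probability at most 2**-20, i.e. under one in a million.
undetected = 2 ** -20
```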

The above discussion assumes that Eve cannot corrupt the messages on the public channel; otherwise she could simply impersonate each party to the other. However, even if Eve is capable of more than passive observation of the public channel, the above protocol still works as a method of "key expansion" rather than key generation [1]. Alice and Bob must begin with a small shared secret key, which they use to implement a secure authentication scheme. This lets them verify, with arbitrarily high probability, that the public messages are genuine. Then, using the quantum key distribution protocol, Alice and Bob can generate a much larger number of key bits for future use.



Reconciliation

In practice, there are several problems with this protocol. The first is that real photon detectors always have some noise, so even without eavesdropping, Alice's and Bob's bits will differ. Secondly, current technology cannot reliably generate single photons. Actual photon emitters generate pulses of light with a given average number m of photons per pulse, but not necessarily exactly that number each time. Clearly, if m is greater than 1, then Eve will have a good chance of being able to split the pulses, observing one photon while letting the remainder continue undisturbed to Bob. If m is significantly less than 1, then the probability of an eavesdropper being able to split a pulse is approximately m^2/2 [1]. Even in this case the eavesdropper will be able to learn a constant fraction of the key bits without being detected.
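If the photon number of each pulse is assumed to be Poisson-distributed with mean m (a common model for weak laser pulses; an assumption here, not stated in the text), the chance that a pulse contains two or more photons is 1 - e^-m (1 + m), which is approximately m^2/2 for small m:

```python
import math

def multiphoton_prob(m):
    """Probability that a pulse whose photon number is Poisson-distributed
    with mean m contains at least two photons, and so can be beamsplit:
    P(n >= 2) = 1 - e^-m (1 + m), roughly m^2/2 when m is small."""
    return 1.0 - math.exp(-m) * (1.0 + m)

for m in (0.1, 0.2, 0.5):
    print(m, multiphoton_prob(m), m * m / 2)
```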

In 1992, Bennett, Bessette, Brassard, Salvail, and Smolin published a paper [1] describing how to deal with these difficulties. With this protocol, Alice and Bob first reconcile their data through public discussion, revealing to Eve no more information than she may have already discovered during the quantum transmission phase. Then, using so-called "privacy amplification," Alice and Bob distill from their partly private key a smaller but far more secure key.

The procedure described in [1] for Alice and Bob to reconcile their bits takes place over a public channel. Since Eve presumably listens to all public transmissions, Alice and Bob must reveal as little information as possible while still ensuring that they end up with identical keys. They do this by agreeing on a random permutation of the bits in their strings (to randomize the locations of errors) and then splitting the resulting string into blocks of size b, where b is chosen so that each block is unlikely to contain more than one error. (In the BBBSS implementation, b was chosen by experiment rather than theory.) Alice and Bob then compare the parity of each block. Whenever they find a block whose parities disagree, they repeatedly bisect it into smaller and smaller sub-blocks, comparing parities each time, until the erroneous bit is found. To ensure that Eve learns nothing from this process, Alice and Bob discard the last bit of each block whose parity they disclose.

After completing this process once, there will still be mismatches in those blocks which happened to contain an even number of errors. So Alice and Bob repeat the process several more times with increasing block sizes until they believe the total number of errors to be low. At this point, the above strategy becomes inefficient because Alice and Bob must discard a bit for each block they compare, and the probability of finding an error in each block is low. So Alice and Bob switch to a new strategy, which they again perform multiple times. Each time, they choose a random subset of the bit positions in their complete strings, and compare parities. The probability of disagreement if the subset strings are not identical is exactly 1/2. If a disagreement occurs, a bisective search for the error is performed, this time using random subsets rather than blocks. The last bit of each subset is discarded. Eventually, all the errors will have been removed, and Alice and Bob will go through enough parity checks without discovering any errors that they may assume their strings are identical.
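The bisective parity search used in both strategies above can be sketched as follows. This is simplified to a single block containing exactly one error, and omits the permutation and bit-discarding steps; names are illustrative:

```python
def parity(bits):
    """Parity (XOR) of a list of 0/1 bits."""
    return sum(bits) % 2

def locate_error(alice_block, bob_block):
    """Bisective search for a single differing bit between Alice's and
    Bob's copies of a block, comparing parities of ever-smaller halves
    as in the BBBSS reconciliation.  Returns the index of the error."""
    lo, hi = 0, len(alice_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice_block[lo:mid]) != parity(bob_block[lo:mid]):
            hi = mid        # left-half parities disagree: error is there
        else:
            lo = mid        # left halves agree: error is in the right half
    return lo

alice = [1, 0, 1, 1, 0, 1, 0, 0]
bob = alice.copy()
bob[5] ^= 1                 # one transmission error in Bob's copy
assert locate_error(alice, bob) == 5
```

In the real protocol, every disclosed parity costs Alice and Bob one discarded bit, which is why this search becomes expensive once few errors remain.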



Privacy Amplification

At this point, Alice and Bob possess identical strings, but those strings are not completely private. Eve may have gained some information about them either by beamsplitting or through intercept/resend. Although the latter strategy may cause some errors in Bob's string, if Eve uses it on only a small number of bits, the induced errors will be lost among the errors caused by noise in the detectors and other physical problems. During the reconciliation phase, Eve gained no new information, since the last bit of each parity check set was discarded. However, some of her original information about specific bits may have been converted into information about parity bits. For instance, if she knew the value of a bit x in string y, and Alice and Bob revealed the parity of y and discarded x, Eve would then know the parity of the remaining bits of y. If we say that Eve knows a parity bit of a string whenever she knows the parity of some non-empty subset of its bits, then if Eve started out knowing at most k physical bits of the key, she will know at most k parity bits of the key after reconciliation [1].

In any case, Alice and Bob share an n-bit string S, and we will suppose that Eve knows at most k deterministic (i.e. parity or physical) bits of S. Alice and Bob wish to compute an r-bit key K, where r < n, such that Eve's expected information about K is below some specified bound. To do so, they will choose a compression function g: {0,1}^n -> {0,1}^r and compute K = g(S). The question is, what kinds of functions are appropriate for this purpose? That is, which functions, when applied to S, will yield a K about which Eve knows almost nothing?

Definition: [2] A class G of functions A -> B is universal_2 if for any distinct x1 and x2 in A, the probability that g(x1) = g(x2) is at most 1/|B| when g is chosen at random from G according to the uniform distribution.

An example of a universal_2 class is the set of permutations of A onto itself, since for any distinct x1 and x2, the probability that a randomly chosen permutation maps them to the same element is zero, which is less than 1/|A|. It is shown in [2] that if Eve knows k deterministic bits of S, and Alice and Bob choose their compression function g at random from a universal_2 class of hash functions {0,1}^n -> {0,1}^r where r = n - k - s for some safety parameter 0 < s < n - k, then Eve's expected information about K = g(S) is at most 2^-s / ln 2 bits. One suitable hash function [1] is for Alice and Bob to compute an additional r random subset parities of S, this time keeping the results secret. The r parity bits form the final r-bit key.
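The random-subset-parity construction can be sketched as follows (a simplified illustration; in practice the subsets are agreed publicly while the resulting parity bits are kept secret, and the function name is illustrative):

```python
import random

def privacy_amplify(s_bits, r, seed=2):
    """Compress the n-bit reconciled string into an r-bit key by taking r
    random subset parities -- one member of a universal_2 class of linear
    hash functions {0,1}^n -> {0,1}^r.  Alice and Bob agree (publicly) on
    the subsets but keep the resulting parity bits secret."""
    rng = random.Random(seed)     # stands in for the public agreement on subsets
    n = len(s_bits)
    key = []
    for _ in range(r):
        subset = [i for i in range(n) if rng.randint(0, 1)]  # random subset of positions
        key.append(sum(s_bits[i] for i in subset) % 2)       # its parity is one key bit
    return key

reconciled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # n = 12 reconciled bits
# If Eve knows at most k = 4 deterministic bits and the safety parameter
# is s = 3, Alice and Bob keep r = 12 - 4 - 3 = 5 bits, and Eve's expected
# information about the key is at most 2^-3 / ln 2 bits.
final_key = privacy_amplify(reconciled, r=5)
```

Because both parties use the same publicly agreed subsets, they compute identical keys; only the parity values themselves stay private.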

Given this result, one might ask how Alice and Bob are to determine the value of k, i.e. how much information has been leaked to Eve. As a conservative estimate, they can simply assume that all transmission errors were caused by eavesdropping (although most likely some came from detection errors). Eavesdropping errors could come from either intercept/resend or beamsplitting. Alice and Bob can use the beam intensity m and the bit error rate to calculate the expected fraction of S that Eve has learned. If they are conservative in their assumptions and add several standard deviations to their results, they will have a safe upper bound on the number of bits leaked to Eve. (See [1] for more details.)

The above discussion assumes that Eve knows only deterministic bits, so another issue is whether it might be more useful for her to obtain probabilistic information about S instead. In other words, rather than measuring photons in the same bases as Alice and Bob, she could pick a basis halfway between them. This gives her a result that matches Alice's with probability approximately 85%, regardless of which basis Alice uses [3]. She gains no information when Bob reveals his measurement choices, so with this strategy all of her information is probabilistic rather than deterministic. Conceivably, such probabilistic information could be more resistant to privacy amplification than deterministic information. It turns out, however, that it is not [3], so if Eve wishes to maximize her expected information about the final key, she should use the same bases as Alice and Bob, obtaining only deterministic bits.
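The 85% figure can be checked directly: a basis halfway between the two conjugate bases lies at 22.5° to each, and by the cos^2 law the chance of matching Alice's bit is cos^2(22.5°):

```python
import math

# A basis halfway between the rectilinear and diagonal bases sits at
# 22.5 degrees from each.  Measuring there matches Alice's bit with
# probability cos^2(22.5°) = (2 + sqrt(2)) / 4, whichever basis she used.
p_match = math.cos(math.radians(22.5)) ** 2
print(p_match)    # about 0.854, the ~85% figure quoted above
```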



Practicality

An appropriate question to ask at this point is whether it is possible to actually implement the above quantum key distribution system. The answer is a qualified yes. The authors of [1] actually built the apparatus to carry out the protocol they describe, and showed that it worked. However, the quantum channel was only 32 cm long. The problem with implementing their system over long distances is that fiber-optic cables ruin the polarization of the light, so the photons need to be sent through a vacuum in a straight tube. A 200-yard quantum channel was built in 1992 using interference rather than polarization, because interference patterns survive somewhat better in fiber-optic cables [10]. In 1993, the interference technique was used to build a 10 km quantum channel [8], [9]. Nevertheless, one major problem remains in extending quantum channels to longer distances. Signals on commercial fiber-optic cables weaken with distance and must periodically pass through "booster" stations that amplify them, but quantum signals cannot be amplified without disturbing them in just the way that eavesdropping does.

In the time since the first quantum key distribution protocol was proposed, however, other quantum cryptographic protocols have been developed which make more sense over short distances. In particular, quantum protocols for oblivious transfer [3] and bit commitment [6] have been proposed which use the same techniques of transmission, reconciliation, and privacy amplification. Oblivious transfer and bit commitment are used in zero-knowledge proofs and other situations with two cooperating but mutually distrustful parties. Such situations can arise even in face-to-face negotiations, where a tabletop quantum device could be useful.

Other researchers in quantum cryptography have explored different ways to implement the established protocols. One alternative implementation [10], [7] involves using so-called "entangled pairs," which are pairs of photons generated by certain particle reactions. Each pair contains two photons of opposite polarization. In this system, the pairs are emitted at some point between Alice and Bob, who each measure the photons with randomly selected bases. They then compare which bases they used, and keep as their key only those photons which they have measured with the same basis. If Eve tries to read one of the photons before it gets to Alice or Bob, but uses a different basis than they do, they will no longer always measure opposite polarization. Again, by comparing a few bits, they can determine if eavesdropping has occurred. The theoretical advantage of this system over the BB84 protocol is that with BB84, Alice and Bob must store the key in their computers (or some other non-quantum device) after they have agreed upon it, and Eve could possibly break in and look at the key without detection. But entangled pairs of photons could theoretically be stored indefinitely without ever being observed. Alice and Bob would measure them only when they actually needed to use the key. If Eve ever looked at them, Alice and Bob would know. Of course, current technology does not allow us to store photons for any reasonable length of time, so right now this approach is no better than BB84.

So although quantum cryptography is not very practical right now, it is still worthy of study for several reasons. Unlike public-key cryptosystems, its security is provable and will not be compromised no matter how fast computers get, or even if P = NP. Currently it works only over short distances, but there are situations in which even short-distance transmission is useful. Also, with sufficient technical improvements, it might be possible in the future to implement quantum cryptography over long distances, so that private-key systems would no longer require face-to-face meetings. And finally, the concept of privacy amplification by public discussion, which was originally developed for use in quantum cryptography, can be extended to any situation in which Eve has partial knowledge of a string shared by Alice and Bob. In general, the differences between quantum cryptography and other cryptographic techniques are enough to encourage researchers to explore ideas that otherwise might not have arisen.



References

[1] Bennett, C. H., Bessette, F., Brassard, G., Salvail, L., and Smolin, J., "Experimental Quantum Cryptography", Journal of Cryptology, vol. 5, no.1, 1992, pp. 3-28.

[2] Bennett, C. H., Brassard, G., Crépeau, C. and Maurer, U. M., "Generalized Privacy Amplification", IEEE Transactions on Information Theory, 1995.

[3] Bennett, C. H., Brassard, G., Crépeau, C., and Skubiszewska, M.-H., "Practical Quantum Oblivious Transfer", Advances in Cryptology -- Proceedings of Crypto '91, Lecture Notes in Computer Science, Vol. 576, Springer-Verlag, Berlin, 1992, pp. 351-366.

[4] Bennett, C. H., Brassard, G., and Ekert, A. K., "Quantum Cryptography", Scientific American, October 1992, pp. 50-57.

[5] Brassard, G. "Bibliography of Quantum Cryptography" http://www.iro.umontreal.ca/~crepeau/Biblio-QC.html

[6] Brassard, G., Crépeau, C., Jozsa, R., and Langlois, D., "A Quantum Bit Commitment Scheme Provably Unbreakable By Both Parties", Proceedings of the 34th IEEE Symposium on Foundations of Computer Science, November 1993, pp. 362-371.

[7] Ekert, A. K., "Quantum cryptography based on Bell's theorem", Physical Review Letters, vol. 67, no. 6, 5 August 1991, pp. 661 - 663, as quoted in [5].

[8] Townsend, P. D., Rarity, J. G. and Tapster, P. R., "Single photon interference in a 10 km long optical fibre interferometer", Electronics Letters, vol. 29, no. 7, April 1993, pp. 634 - 635, as quoted in [5].

[9] Townsend, P. D., Rarity, J. G. and Tapster, P. R., "Enhanced single photon fringe visibility in a 10 km-long prototype quantum cryptography channel", Electronics Letters, vol. 29, no. 14, 8 July 1993, pp. 1291 - 1293, as quoted in [5].

[10] Zimmer, C., "Perfect Gibberish", Discover, September 1992, pp. 92-99.


Sharon Goldwater 12-10-96