
While no one has yet built a general-purpose Quantum Computer (QC) capable of breaking the public key cryptography in use on the Internet, that possibility is now considered a realistic threat to long-term security. As research into the design of a QC has intensified (including public access to a small implementation), so has the need to develop standards for ‘postquantum secure’ encryption algorithms that would resist its cryptanalytic capabilities. Along with other cryptographers, we are looking at ways to engineer postquantum security into Internet protocols; here are some of our thoughts.

On the Internet, confidentiality and authenticity are provided by cryptographic protocols such as TLS, IPsec, and SSH, each of which consists of multiple cryptographic components. If a QC existed, then an attacker could attempt to use it to break those protocols with any of three tactics: 1) by directly recovering the keys that encrypt the traffic, 2) by extracting the encryption keys from the key establishment mechanism, or 3) by forging authentication credentials and then posing as a legitimate party. The first case is the easiest to defend against. A QC running Grover’s algorithm gives at most a quadratic speedup for exhaustive key search, so while it is speculated that it might be possible to recover a 128-bit symmetric key (roughly 2^64 quantum operations), it is generally agreed that a 256-bit symmetric key would remain out of reach (source: Arxiv.org and Wikipedia). Current standards and implementations mostly support 256-bit keys, so moving to that key size is easy.

However, it is more difficult to defend against the other two tactics. When a TLS client talks to a server, for instance, public key cryptography is used to establish the traffic-encryption keys. This negotiation is designed so that an attacker cannot recover those keys; however, the methods in use today (RSA, DH, ECDH) are all known to be vulnerable to algorithms, notably Shor’s algorithm, that would run on a QC. An attacker could store today’s encrypted traffic, including the negotiation, until a QC is available; at that point, they could break the key establishment used in the negotiation, recover the traffic keys, and decrypt the traffic. This threat to long-term confidentiality is especially important for sensitive data, and for cryptography that will be in the field for a long time, such as hardware-dependent implementations and embedded software.

While the need to adopt a key exchange method that is secure against a QC is apparent, it is not clear which method is suitable. There are a number of candidates for a postquantum key exchange:

  • The oldest such method is code-based cryptography, such as the McEliece cryptosystem. It is well trusted because it has survived decades of cryptanalysis, but it suffers from the drawback that its public keys are huge; roughly a megabyte of data would have to be exchanged in each negotiation.
  • Lattice cryptosystems are newer (source: Wikipedia); they are believed to be secure against a QC, but their exact security is unknown. There is a range of such systems; the ones with more compact public keys (including the NewHope system described below) are based on lattices with additional algebraic structure, and some worry that this extra structure might make the underlying problem easier.
  • There are elliptic curve isogeny-based systems; however, these systems are not nearly as well studied as one would like.

There is a risk with more recently designed key exchange methods: they might not even be secure against a classical, non-quantum computer. The history of public key cryptography shows that it often took fifteen years or so before all of the best attacks against a new crypto algorithm were discovered; early estimates of key sizes were wrong, and early implementations were insecure as a result. As we move towards postquantum security, we need to avoid the pitfall of selecting an algorithm and key size too soon.

This problem can be solved by using redundant key establishment: redesigning the negotiation so that two distinct mechanisms are used in parallel, in such a way that the negotiation is secure if either of the mechanisms is secure. This technique is the cryptographic equivalent of a RAID-1 mirrored disk. For instance, to establish a TLS session, we can perform both a lattice-based key exchange and a standard elliptic curve one, with the TLS messages carrying the messages from both methods. Because elliptic curve cryptography has small key sizes, this redundancy does not add much data overhead.
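To make the idea concrete, here is a minimal sketch in Python (standard library only) of how the two shared secrets might be combined: concatenate them and feed them through a key derivation function such as HKDF, so the derived traffic keys are secret as long as either component secret is. The names ecdh_secret, pq_secret, and transcript_hash are placeholders of our own, not the key schedule of TLS or any other protocol.

```python
import hashlib
import hmac


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869) with SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869) with SHA-256."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


def combine_secrets(ecdh_secret: bytes, pq_secret: bytes, transcript_hash: bytes) -> bytes:
    """Derive traffic keys from two independently established secrets.

    The concatenated input remains unpredictable to an attacker as long
    as *either* key exchange stays unbroken, so the derived keys inherit
    the stronger of the two key-establishment methods.
    """
    prk = hkdf_extract(salt=transcript_hash, ikm=ecdh_secret + pq_secret)
    return hkdf_expand(prk, info=b"hybrid traffic keys", length=32)
```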

The Internet Key Exchange (IKE)

Along with our colleague Panos Kampanakis, we proposed a redundant key establishment technique for IKE that uses symmetric cryptography. In this draft, a Postquantum Preshared Key (PPK) acts as a symmetric shared secret, and this value is included in all of the IKE key derivation processes, so that even if all of the public key cryptography were vulnerable to a QC, the keys established by IKE would still be secure. The benefit of this approach is that we can have very high confidence that the symmetric cryptography will be postquantum secure. On the down side, the distribution of shared secret keys is difficult, so this technique is more applicable to Virtual Private Networks than to World Wide Web security. The goals (but not the detailed mechanisms) have been adopted by the IETF IPsec Maintenance and Extensions Working Group (IPsec ME WG); we encourage you to participate in the working group discussions to help develop standards for postquantum security.
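The draft itself specifies exactly where the PPK enters the IKEv2 key schedule; the sketch below only illustrates the general principle of mixing a symmetric preshared secret into a key derivation. HMAC-SHA-256 stands in for the negotiated PRF, and the names ppk and sk_d are illustrative rather than the draft's exact construction.

```python
import hashlib
import hmac


def prf(key: bytes, data: bytes) -> bytes:
    """Stand-in for the negotiated IKEv2 PRF (HMAC-SHA-256 here)."""
    return hmac.new(key, data, hashlib.sha256).digest()


def mix_in_ppk(ppk: bytes, sk_d: bytes) -> bytes:
    """Illustrative only: fold the Postquantum Preshared Key into a
    derived IKE secret.  Even if a QC later recovers the (EC)DH shared
    secret behind sk_d, an attacker without the PPK cannot compute the
    mixed value, so keys derived from it stay confidential."""
    return prf(ppk, sk_d)


# Child SA keys would then be derived from the mixed secret
# rather than from sk_d directly.
```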

Experimenting with postquantum key establishment

Google has recently announced that they are starting an experiment with postquantum cryptography for the Web; it uses a compact lattice cryptosystem commonly known as “New Hope” along with redundant key establishment. The New Hope system was developed by a group of European researchers and tries to strike a balance between usability and security.

Google’s experimental implementation is available only in the Canary variant of the Chrome web browser, which is intended for developers and early adopters who understand that the browser might be ‘prone to breakage’. This makes it a great platform for experimenting with postquantum security. Unlike a typical beta version, Canary is entirely independent of the stable version of Chrome; the two can be installed side by side. Another positive aspect of this experiment is that Google is explicitly not trying to create a de facto standard; in fact, they ‘plan to discontinue this experiment within two years, hopefully by replacing it with something better’.

Getting real-world experience with postquantum key establishment is a great idea. Of course, it addresses only one piece of the problem; a full solution would also need to address authentication, for instance by using postquantum secure digital signatures in X.509 certificates. A QC would be able to efficiently recover the private keys from the signature methods currently used on the Internet. This threat might not appear as critical as the corresponding attacks on key establishment, because an attacker cannot retrospectively exploit a signature-algorithm vulnerability to decrypt previously recorded data. However, the problem with signatures is that, to have postquantum secure certificates, the entire certificate infrastructure needs to be updated, including the certificate authorities and root certificates (if those can be broken, then the attacker could issue any certificate he wants). Because of the large amount of infrastructure that would need to be updated, there is important work to be done in developing techniques and standards for postquantum signatures as well. (Look for a post from Cisco on this topic in the near future.)
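As one illustration of how a postquantum signature can be built (hash-based schemes are only one of several candidate families, and nothing here endorses a particular one), below is a toy Lamport one-time signature in Python. It relies on nothing but a hash function, but each key pair must sign at most one message and the keys and signatures are large; practical hash-based proposals combine many such one-time keys under a Merkle tree.

```python
import hashlib
import os


def H(data: bytes) -> bytes:
    """SHA-256 as the underlying hash function."""
    return hashlib.sha256(data).digest()


def keygen():
    """Lamport one-time key pair for signing a 256-bit digest."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk


def _bits(message: bytes):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]


def sign(sk, message: bytes):
    """Reveal one secret preimage per bit of the message digest."""
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]


def verify(pk, message: bytes, signature) -> bool:
    """Check each revealed preimage against the published hash."""
    return all(H(sig) == pk[i][bit]
               for i, (sig, bit) in enumerate(zip(signature, _bits(message))))
```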

To follow or participate in the development of postquantum secure cryptography techniques and standards, you can attend the upcoming 4th ETSI Workshop on Quantum-Safe Cryptography. The IRTF Crypto Forum Research Group (CFRG) mailing list is another place where these discussions are taking place, as is the IPsec ME WG. Postquantum cryptography is important to long-term security, so we encourage you to be involved in its development and adoption.

Authors

David McGrew

Cisco Fellow

Security