Cloudflare TV

TLS and Post-Quantum

Presented by Chris Wood, Douglas Stebila
Originally aired on 

The Transport Layer Security protocol (TLS) is one of the most vital protocols used nowadays to provide security for our connections. Unfortunately, it is threatened by quantum computers. In this segment, we will explore what we need to make it quantum-secure with the experts Douglas Stebila and Christopher Wood.


Transcript (Beta)

Hey, folks. My name is Chris Wood. I'm a research lead here at Cloudflare on the research team.

And I'm joined here by Douglas Stebila, a professor of cryptography and many other security related things at the University of Waterloo.

Douglas, thanks for hopping on to talk to me about interesting things today.

Yes. Hi, Chris. Great to be here.

So Douglas and I both do a lot of work in the TLS space, the Transport Layer Security protocol, and on the heels of the post-quantum blog post series that went out last week.

We're going to be talking about the relationship between post-quantum efforts in the IETF and in the industry and their impact on TLS as we move forward.

So with that said, it might be useful to do some level setting, make sure everyone's sort of on the same page with respect to what TLS is, what its security properties are, and what the looming post-quantum threat brings to the table.

So first, Douglas, you know, it might be useful if you could summarize the sort of fundamental security properties of TLS as a, you know, a secure transport protocol as it's used today.

Right.

Yeah. So the transport layer security protocol, TLS, that we use in our web browsers and all other kinds of communication systems on the Internet is designed to provide a few different security services to applications.

So some of them around confidentiality and some of them are around authentication.

So when we set up a TLS connection, we want to be able to get confidentiality, secrecy, for the transmitted information.

And usually we actually want an extended property of that called forward secrecy, where if one party's long term credentials are later compromised, it shouldn't allow an adversary to decrypt an earlier communication session.

And we also want some authentication properties. We want that the parties agree on the parameters of their connection so they know that they're talking the same protocol and cryptographic algorithms.

And if parties are being authenticated to each other, then they also want that assurance.

So we usually have server to client authentication on the Web.

Sometimes we have client to server authentication as well.

And then we also want to ensure that the connection is live.

So there's protection against replay attacks.

Excellent.

Yeah. And TLS 1.3, the latest version of the protocol, really improved all of these security properties of TLS.

And for those that are interested, RFC 8446 on TLS 1.3 goes into detail about what the security properties are, as well as pointing to some relevant research looking at these properties in more detail.

So let's start. Oh, go ahead. Sorry.

And the way TLS kind of accomplishes this is that the protocol is actually broken up into two main chunks.

There's the handshake that establishes the connection. And there's a lot of cryptography going on in the handshake to set up all the parameters.

And then there's the connection or maybe what's called the record layer, which is what the rest of the connection is used for.

Application data is transmitted over that record layer once everything has been set up in the handshake.

Yeah, so that's a great point.

And as you say, most of the interesting cryptography happens at the initial handshake step.

So let's drill into that a little bit. And in particular, I want to talk about the confidentiality and authentication properties that fall out of the handshake step.

If we assume that there is a quantum-capable attacker, how might these two properties be at risk, you know, either now or at some point in the future when the actual attacker is capable of doing quantum things?

Right. So the main thing we're worried about from quantum capable attackers is their ability to break public key algorithms.

And these are the algorithms that we're using in the handshake to establish confidentiality and to do peer authentication.

So for establishing the confidentiality in the handshake, what we do right now in TLS 1.3 and in earlier versions is we do a Diffie-Hellman key exchange.

And this is based on a mathematical problem, the difficulty of breaking Diffie-Hellman and computing discrete logarithms that a quantum computer using Shor's algorithm would be able to break.

And it's these shared secrets from which all the application encryption keys are derived for confidentiality.

So that's where the threat to confidentiality comes from from a quantum attacker.

They could break the key exchange, derive the encryption keys and then read encrypted messages.

And they could even do this retroactively. So if they're recording communications today and then in some number of years time, you know, a powerful attacker is able to build a quantum computer.

They can just look back at the information they recorded today and break the Diffie-Hellman key exchange and decrypt that information in however many years.
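The chain described here, from the Diffie-Hellman shared secret down to the application encryption keys, can be sketched with a simplified HKDF key schedule. This is only a sketch: the real TLS 1.3 key schedule in RFC 8446 has more stages and uses HKDF-Expand-Label, and the labels and placeholder secret below are illustrative. The point is that an attacker who recovers the handshake's shared secret later can rerun exactly this derivation.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): condense input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869): stretch the PRK into output keying material."""
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The (EC)DH shared secret established in the handshake. An attacker who
# records traffic today and breaks Diffie-Hellman later recovers exactly
# this value, and with it every key derived below.
shared_secret = b"\x42" * 32  # placeholder for the real DH output

prk = hkdf_extract(salt=b"\x00" * 32, ikm=shared_secret)
client_key = hkdf_expand(prk, b"client application traffic", 16)
server_key = hkdf_expand(prk, b"server application traffic", 16)
```

Because the derivation is deterministic, nothing beyond the shared secret and the public transcript is needed to reproduce the traffic keys.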

Yeah, so confidentiality, I think, is certainly the easier of the two properties to reason about in the presence of a quantum attacker.

In particular, as you point out, you know, there's this harvest-and-decrypt sort of attack that is relevant right now with respect to confidentiality, whereas there's not the same sort of looming threat for the authentication properties of TLS.

And maybe you can explain why authentication is a bit of a different threat model for this particular type of attacker.

Right. Yes. So we also do authentication in the handshake using digital signatures.

If we're doing server-to-client authentication, the server will use the private key corresponding to its certificate's public key to sign something and send it to the client.

And that proves that it really is the server talking to the client.

And it uses public key cryptography as well. And indeed, that is also threatened by a quantum capable attacker.

The difference here in terms of urgency is that we can't retroactively break the authentication of a session.

So the TLS connection that's between you and I right now, if someone in 20 years time can break that digital signature, well, that doesn't help them to come back in time and impersonate you to me right now.

So that type of authentication in a secure channel setting, at least, is not retroactively at risk from future quantum attackers.

Yeah. Thank you. So certainly then, you know, the focus needs to be on confidentiality in the near term.

And that is fortunately the simpler of the two sort of properties to achieve against a quantum capable attacker.

So let's dig into that a little bit. As you were saying earlier, there's these two parts of the TLS protocol.

There's the handshake step that establishes these, you know, these shared secrets, these encryption keys that everyone uses to encrypt application data.

And then there's the record layer, which actually uses those keys to encrypt application data.

And the goal of confidentiality is to make sure that the application data can't be decrypted later on.

So there's many ways that a quantum-capable attacker could go about trying to get the application data.

It could, for example, just focus on the record layer and try to break the record layer encryption to discover the application data.

Do you think that's something we need to be concerned about right now or not?

Well, so in the record layer, we're using an authenticated encryption scheme.

And this is built from symmetric key primitives, say AES, the advanced encryption standard and, you know, a SHA family hash function, something like that.

And these symmetric key primitives are not vulnerable to quantum computers to the same extent that public key primitives are.

So even if there was a quantum computer, we don't have a way of directly attacking these symmetric key primitives.

And so, as far as we know, you wouldn't be able to break or extract the encryption keys or plaintext solely from symmetrically encrypted data like in the record layer.

So the authenticated encryption itself isn't directly at risk.

But the keys that we use in the authenticated encryption in TLS come from the handshake layer.

And that's where we were using public key cryptography.

And so they are at risk from that indirect route.

Right. And so there's a body of work, you know, ongoing in the TLS working group in the IETF right now, specifying or trying to specify what we call a hybrid key exchange, in order to make TLS less vulnerable to this type of harvest-and-decrypt attack later on.

Can you describe what the hybrid key exchange is and why it helps against this particular type of attacker?

Sure. So I mentioned that we are normally doing key exchange with Diffie-Hellman, but we have been developing quantum-resistant equivalents to Diffie-Hellman, key encapsulation mechanisms, as we call them.

And probably most of our viewers know that there is a standardization effort going on by the United States government right now.

NIST is running a standardization project to develop post-quantum KEMs.

So once those are finalized, we would be able to use those in TLS and we could just completely replace the Diffie-Hellman key exchange.

But what a lot of groups are considering doing is taking a hybrid or composite approach.

We actually use two algorithms simultaneously. We continue using our Diffie-Hellman or elliptic curve Diffie-Hellman key exchange that we've been using for a while now.

But we would also start using a post-quantum algorithm simultaneously.

And the idea would be with this combination that we have security as long as at least one of these algorithms is secure.

So even if a quantum computer can eventually break the Diffie-Hellman, we would still get security from the post-quantum key exchange.
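The "secure as long as at least one algorithm survives" property comes from how the two shared secrets are combined. A minimal sketch of the concatenation approach used in the IETF hybrid design draft follows; the real draft feeds the concatenated secret into the TLS 1.3 key schedule, so the direct HMAC with an all-zero salt here is an illustrative simplification, and the placeholder secrets are made up.

```python
import hashlib
import hmac

def combine_hybrid_secrets(ecdh_secret: bytes, pq_secret: bytes) -> bytes:
    """Concatenate the classical and post-quantum shared secrets and run
    them through a KDF (simplified). An attacker must recover BOTH inputs
    to learn the output."""
    concatenated = ecdh_secret + pq_secret
    return hmac.new(b"\x00" * 32, concatenated, hashlib.sha256).digest()

classical = b"\x01" * 32      # e.g. an X25519 shared secret (placeholder)
post_quantum = b"\x02" * 32   # e.g. a post-quantum KEM shared secret (placeholder)
session_secret = combine_hybrid_secrets(classical, post_quantum)
```

Changing either input changes the output, which is exactly the hedge: breaking Diffie-Hellman alone, or the post-quantum KEM alone, reveals nothing about the session secret.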

Yeah. So the idea here is, when a client sends a client hello to the server in the TLS handshake, the client offers up two key shares: one for the classical Diffie-Hellman-based key exchange protocol and then one for the post-quantum KEM variant.

So this does have an impact on certain properties of TLS, in particular the size of the client hello.

And there are different extensions that are coming down the road, in particular TLS encrypted client hello that changes the structure of the client hello in such a way that this increase in size could be a problem depending on how you lay out the bytes on the wire.

But that particular extension is doing things to accommodate these potentially quite large key shares that are going to be sent in a post-quantum or hybrid key exchange world.

There's also other... Oh, go ahead.

We've been really spoiled with elliptic curve Diffie-Hellman key exchange.

You know, we can do everything in just 32 bytes, whereas these post-quantum algorithms have much larger values.

The smallest ones are in the 100 to 300 byte range. And then we have stuff in the 700 to 1000 byte range.

So that's why we might have these larger handshakes, because we simply have larger public keys, larger exchanged values in our post-quantum algorithms.
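As a rough illustration of that gap, here is some back-of-the-envelope arithmetic using the published ML-KEM-768 (Kyber) parameter sizes alongside X25519 as one concrete pairing; treat the totals as ballpark figures, not exact wire costs, since real handshakes add framing and extension overhead.

```python
# Approximate on-the-wire sizes, in bytes, of key exchange material.
# The X25519 value is exact; the ML-KEM-768 values are the published
# parameter sizes for its public key and ciphertext.
X25519_SHARE = 32
MLKEM768_PUBLIC_KEY = 1184
MLKEM768_CIPHERTEXT = 1088

# Classical exchange: one small key share in each direction.
classical_total = X25519_SHARE + X25519_SHARE

# Hybrid exchange: each flight carries the classical share plus the
# post-quantum material (public key out, ciphertext back).
hybrid_total = (X25519_SHARE + MLKEM768_PUBLIC_KEY) + (X25519_SHARE + MLKEM768_CIPHERTEXT)

print(classical_total, hybrid_total)
```

The hybrid exchange moves a couple of kilobytes where the classical one moved tens of bytes, which is why handshake size becomes a deployment question at all.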

Yeah. And we've done some experiments, as well as others have done some experiments in the past, to kind of see what the effects of these larger handshake messages are on the viability of TLS connections.

Are they able to complete successfully, or are there, for example, middleboxes in the way that interfere with, you know, abnormally large client hellos or server hellos, or otherwise interfere with connections that stand out in some unusual way?

And the results look promising. You know, people are still trying to deploy hybrid key exchange and running into issues, but we're fixing them as we go, identifying and fixing them.

So hopefully, you know, at some point in the near future, we're at a position where the Internet is ready for wide deployment of these hybrid key exchange protocols.

So we don't have to worry, and we can sleep at night knowing that, you know, our application data sent now will be safe.

I was going to say earlier that there's other applications of TLS 1.3, in particular, in the recently standardized QUIC transport protocol.

QUIC reuses the TLS handshake for its authenticated key exchange protocol.

And as a result, you know, any increase in size that comes from TLS obviously impacts QUIC.

But QUIC has made accommodations for these potentially quite large TLS handshake messages such that, you know, there's no concern of running up against UDP packet limitations.

So in general, you know, the protocols in which or the applications that use TLS and the other protocols that embed TLS have been trying to make accommodations for the sort of post-quantum future in which, you know, TLS might look different.

It might be bigger, might take longer to complete, might have different round trips.

And, you know, there's many ongoing efforts to sort of prepare these applications and protocols, but all that work is underway.

So you mentioned again this ongoing NIST standardization effort for post-quantum algorithms.

And I referenced the IETF draft, which is trying to codify exactly how you would do hybrid key exchange in the context of TLS.

So for folks who are interested in learning about the details of this technique, we can somehow provide details or a pointer to that particular draft for people to check out.

And there is actually an upcoming IETF meeting in Vienna later this month in which we'll probably talk about this topic a bit more.

So it seems, you know, we have reasonably thought through the answer to the confidentiality story here.

We know how to augment the key exchange such that it's the best of both worlds, so to speak.

You have this, you know, this classical key exchange, you have this post-quantum key exchange, and provided that you can't break both of them, then you're fine.

The authentication story is more challenging, and before I guess kind of getting into the particular challenges, I was wondering if you could educate us on sort of the ways in which, you know, handshakes are authenticated today.

You mentioned earlier that there are signatures involved, you know, signatures over what data, and are there other things besides signatures?

Yeah, right. So I think many, many people have had the experience of setting up a web server and having to get a certificate containing a signature public key, and that's the predominant way that most websites authenticate themselves to users.

So in the TLS handshake, they'll send the certificate containing their public key, maybe an RSA public key or an elliptic curve digital signature public key.

And then at some point in the handshake, the server will sign a transcript of everything that's happened so far in the communication session.

And that allows the client to verify who it's talking to and that they're talking about the same thing.

So authentication and agreement. And if the client has a certificate, then they can do that in the reverse direction as well to mutually authenticate.

And that's all based on public key cryptography. There is a second mode of authentication that's used in some types of TLS connections, based on pre-shared symmetric keys.

So, for example, if we had some Internet of Things devices, we might program some shared secrets into those devices before we deploy them to the field.

Then they don't have to use signatures or certificates; they can use symmetric message authentication codes to authenticate each other based on that pre-established shared secret.
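A minimal sketch of that symmetric authentication flow, assuming both devices were provisioned with the same key before deployment. This is simplified: real TLS 1.3 PSK handshakes derive distinct binder and Finished keys through the key schedule rather than MACing the transcript with the raw PSK directly, and the transcript string below is a placeholder.

```python
import hashlib
import hmac

# Pre-shared key programmed into both devices before deployment.
psk = b"32-byte secret provisioned offline"

def authenticate(transcript: bytes, key: bytes) -> bytes:
    """Prove possession of the PSK by MACing the handshake transcript."""
    return hmac.new(key, transcript, hashlib.sha256).digest()

transcript = b"client_hello || server_hello || ..."
tag = authenticate(transcript, psk)

# The peer recomputes the MAC over the same transcript and compares in
# constant time; a match proves the sender holds the same PSK.
assert hmac.compare_digest(tag, authenticate(transcript, psk))
```

No public key operations appear anywhere, which is why this mode is attractive for constrained devices and, incidentally, already quantum-resistant.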

Right. And there is actually an IETF draft as well that allows both authentication mechanisms to be used in combination.

So you could, for example, if you didn't know how to migrate your classical signature scheme to something that's post-quantum friendly, which is something we'll talk about in just a moment, you could right now start mixing a PSK into your deployment of TLS in order to hedge against a particular quantum attacker, even though one does not exist.

But if you were very paranoid, you could do so. So let's go back then to the signature step.

There's, as you were describing, there's a signature that's computed over a transcript of the protocol to authenticate either server to client or client to server.

And there's generally, you know, at least one signature that's involved in the process of establishing a connection.

And it's this particular handshake signature. But there's also other signatures that are included.

You were describing how the public key for an entity is carried in a certificate.

And that certificate is ultimately the thing that attests to the particular identity and the public key of the party that you're communicating with.

In order to trust that particular claim, you have to verify a signature over that certificate and potentially others, depending on how long your certificate chain is.

So I'm wondering if you can talk about the differences between sort of the online handshake signature and this offline certificate chain signature and how we might choose to think about solving or upgrading them to a post-quantum future based on their differences in terms of being online versus offline.

Right. So because we're using these signatures, multiple signatures in the certificate chain and in the online connection establishment, in different ways and with different distribution patterns, in terms of what information may have been pre-distributed, we can sometimes make some tradeoffs for them.

So there is this chain of trust back to a root certificate. But that root certificate is never transmitted.

You have those root certificate authorities' certificates pre-installed in your browser or in your operating system.

And so that means we never transmit their public keys in an online setting.

We only transmit their signatures. And in the post-quantum world, the algorithms that we have available to us do have some tradeoffs.

So we have some schemes, for example, that have fairly big public keys, but really small signatures.

And so we might choose to trade off the selection of algorithms so that for the root algorithms, we have big public keys, but they're all pre-installed and then we get small signatures.

Whereas for the online portions, especially the end entity's certificate that we might use in the web server, we're always transmitting at least one public key and one signature.

So we might want to go with something that is more balanced in terms of sizes.
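To make that tradeoff concrete, here is some illustrative arithmetic. The sizes are approximate published figures for two NIST signature schemes (Falcon-512: roughly 897-byte public keys and 666-byte signatures; Dilithium2: roughly 1312 and 2420), and the chain model, one pre-installed root directly signing one end entity, is deliberately simplified.

```python
# Rough transmitted-bytes estimate for a one-root, one-leaf certificate
# chain. Sizes are approximate published figures; treat them as ballpark
# numbers, not exact wire costs.
sizes = {
    "dilithium2": {"pk": 1312, "sig": 2420},
    "falcon512": {"pk": 897, "sig": 666},
}

def transmitted_bytes(root_alg: str, leaf_alg: str) -> int:
    # Root CA: its public key is pre-installed in the client, so only its
    # signature on the end-entity certificate crosses the wire.
    ca_signature = sizes[root_alg]["sig"]
    # End entity: its public key travels in the certificate, and it also
    # produces the online handshake (CertificateVerify) signature.
    leaf_public_key = sizes[leaf_alg]["pk"]
    handshake_signature = sizes[leaf_alg]["sig"]
    return ca_signature + leaf_public_key + handshake_signature

print(transmitted_bytes("falcon512", "falcon512"))
print(transmitted_bytes("dilithium2", "dilithium2"))
```

Note that the root's public key size never appears in the total, which is why a root algorithm with large keys but small signatures can be an attractive choice.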

Are there other options for the online portion beyond signature algorithms for authenticating the peer, either server to client or client to server?

Yeah, so somewhat counterintuitively, we can use public key encryption or key encapsulation mechanisms to actually authenticate someone.

So if you and I are talking right now, one way for you to prove your identity to me is to sign something.

But another way for you to prove your identity is for you and I to do a key exchange and then you to prove that you know the shared secret.

And that's exactly what we had for Diffie-Hellman as well, a way of computing a shared secret.

So there's been a proposal that I've been involved with, along with others, to have server authentication switch over to using KEMs, key encapsulation mechanisms, the post-quantum version of Diffie-Hellman, to authenticate the server, because that might provide a better tradeoff in terms of communication sizes.
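The shape of that flow can be illustrated with a deliberately toy KEM. Everything below is a sketch: the ElGamal-style group KEM is insecure and merely stands in for a real post-quantum KEM, and the transcript label is made up. The point is the protocol shape: the client encapsulates to the server's certified public key, and the server authenticates implicitly by being the only party able to decapsulate and derive the same secret.

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman-style KEM over a small multiplicative group.
# INSECURE -- for illustrating the AuthKEM flow only.
P = 2**127 - 1  # a Mersenne prime
G = 3

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)  # (private key, public key)

def kem_encaps(pk: int):
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)                       # ciphertext sent to the server
    ss = pow(pk, r, P).to_bytes(16, "big")  # shared secret
    return ct, ss

def kem_decaps(sk: int, ct: int) -> bytes:
    return pow(ct, sk, P).to_bytes(16, "big")

# Server's long-term KEM key pair; the public key sits in its certificate.
server_sk, server_pk = kem_keygen()

# Client encapsulates to the certified public key...
ct, client_ss = kem_encaps(server_pk)

# ...and the server proves its identity implicitly: only the holder of
# the private key can decapsulate to the same secret and MAC the transcript.
server_ss = kem_decaps(server_sk, ct)
confirmation = hmac.new(server_ss, b"handshake transcript", hashlib.sha256).digest()
assert hmac.compare_digest(
    confirmation,
    hmac.new(client_ss, b"handshake transcript", hashlib.sha256).digest(),
)
```

No signature is produced anywhere, which is where the potential size savings over certificate-plus-CertificateVerify signatures come from.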

Yeah, this particular proposal, which was originally called KEM-TLS as it was presented or published at CCS, if I remember correctly, has sort of transformed into this more focused concept, which we're calling AuthKEM, or authenticating with a KEM in the context of TLS.

And there are experiments underway to compare the viability of AuthKEM, you know, a variant of TLS in which you authenticate with KEMs, to variants of TLS in which you authenticate with post-quantum signatures, to see what the tradeoffs are from an end user perspective, a performance perspective, and whatnot.

So certainly there's interesting work to be done in that direction to kind of figure out what the actual authentication machinery should be for the online portion of TLS.

Going back to the offline certificate authentication story, you identified a number of tradeoffs that one might want to consider, depending on what public keys are available in the root store for the client and server.

I was wondering if you could speak to the challenges that are involved in this offline post-quantum upgrade, so to speak. Specifically, if we were to take the web PKI as it exists today, which is heavily focused on digital signatures and certificates, what would we need to do to sort of move it to a PKI that is amenable to and supports post-quantum friendly signature schemes, or even post-quantum KEMs, if that were to be the direction you would go for certificate chains?

Right. Yeah, I think upgrading the web to use post-quantum authentication is a much harder task than upgrading the web to use post-quantum confidentiality.

With key exchange, with confidentiality, you know, there are a relatively small number of vendors, you know, the browser manufacturers, the web server manufacturers, and you can kind of incrementally enable features.

And if a client offers a post-quantum key exchange and a server happens to support it, then they can just start using it.

But with authentication, you have to have certificates.

And suddenly that also involves certificate authorities and system administrators that have to install certificates.

So we would need not only all the software ready, the browsers and servers ready to use post-quantum certificates.

We would also have to have certificate authorities willing to issue post-quantum certificates.

We would need every system administrator to go out and get a post-quantum certificate and install it in their web server.

So there's a lot more coordination and a lot more work that would need to be done in order to switch over to using post-quantum certificates.

I guess the one saving grace there is what we talked about earlier, that it is a less urgent problem than post-quantum confidentiality because we don't have this retroactive break of authentication that we do for confidentiality.

Yeah. And another, I think, saving grace would be that we have, as a community, figured out to some extent how to automate the issuance flow.

Let's Encrypt has demonstrated very clearly that this is something that can be automated at scale.

So perhaps less reliance on individual system admins going out and upgrading everything, and more focus on getting the existing software that Let's Encrypt and other CAs run to incorporate support for these post-quantum certificates and algorithms and whatnot.

OK.

So we have less than five minutes left. And I think we've talked a lot about what are the challenges for TLS today?

What are the challenges from a perspective of confidentiality and authentication?

And what are the immediate challenges going forward in the next couple of years?

I wanted to finish by talking specifically about maybe some ongoing experimentation work that either you're working on or you're aware of others working on.

And perhaps also some measurement-related topics that folks could be tracking or start thinking about doing as we move towards a post-quantum universe of TLS.

So what's on your mind from the experimentation and measurement perspective these days?

Yeah. So we've seen a lot of experiments from industry and from academia over the last three or four years on post-quantum and hybrid on the mainstream web, I would say.

So your company, Cloudflare, Google, and others have reported results from a variety of experiments they've been doing.

And there have been several papers as well.

And I'm involved in an open source project where we kind of try to make some tools available for people to be able to do some experiments themselves.

So in our open quantum safe project, we have some post-quantum enabled versions of OpenSSL, Chromium, Nginx, Apache, those types of things for people to test out how those types of things work in their context.

So I think the community is understanding better and better what the effects are there.

And, as you mentioned earlier, people are convinced that we would be able to make that migration.

I think there's been much less work done in the embedded space.

So we have some micro benchmarks. We have some fast implementations for embedded devices.

But what those network connections look like can be very specialized for the particular domain.

It can have very significant constraints that matter very much to a particular audience, but a very different set of constraints matter to someone else.

And so I think we really, as a community, have much less idea on what the impact will be on these embedded spaces.

Yeah, I think that's right. And the IETF, as it embarks upon efforts to make sure that TLS is ready for a post-quantum world, is including all of the feedback and perspectives of people from across the industry, including not only the web browsers and the web servers, but also these embedded folks.

And experimentation that goes into those particular domains to figure out whether or not AuthKEM, for example, is more or less beneficial or viable than using post-quantum signatures.

It's certainly something that would be useful feedback for both the standardization process of these post-quantum efforts for TLS, but also probably the industry at large as we talk about post-quantum adaptation to other protocols that are not just TLS.

In the final minute we have remaining: all of this TLS and post-quantum work, it's a pretty large amount of work that needs to be done.

And I'm wondering if you think this will be sort of the last time we'll have to do something of this scale.

I don't think so.

These new post-quantum algorithms we are understanding better and better, but cryptanalytic results always improve.

And so I think we should expect that we'll need to migrate algorithms again sometime in the future.

And so we should be designing for algorithmic agility now.

Yep, I hear there's something about Rainbow.

And on that note, thank you, Douglas. And yeah. Thank you, Chris.

Thanks to the audience as well.
