The Post-Quantum Process
For several years now, the National Institute of Standards and Technology (NIST) has been running a process to select post-quantum algorithms for standardization. In this segment, we talk with cryptographer Tanja Lange about the origins of that process, quantum computers, where the process stands now, and what is coming in the future.
For more, don't miss:
- The Cloudflare Research Hub, where you'll find all the latest news and blog posts!
Hi everyone and welcome again to another Cloudflare TV segment. Today we'll be talking about post-quantum cryptography as part of our post-quantum series.
Today I'm really happy to be joined by Tanja Lange, who is an amazing professor at Eindhoven University of Technology and an expert on post-quantum cryptography, cryptography, and many other things.
So welcome, Tanja, to this segment. Okay, let's jump right into the topic, which is post-quantum cryptography. Of course, there's the first question that we always ask, because people are interested to know what this post-quantum cryptography thing is.
So, Tanja, what is post-quantum cryptography, and why did it come into existence?
Okay, so let me start with this: cryptography is something we need in order to ensure that our communication is secure against people spying on it, that is, people who are interested in the content, and also protected against people modifying the content or injecting content. If I get an email from Sofía with a fantastic invitation to Cloudflare TV, I want to make sure it's a real thing and not somebody being an impostor, with me getting all excited when there's actually nothing behind it.
So it ensures authenticity and integrity. Those are the last two properties that I mentioned and the first one is confidentiality.
Now, one problem with cryptography as we know and love it is that some of the things we're currently using will not stand up to quantum computers.
So once there is a sufficiently large and stable quantum computer (there's still discussion about exactly how large and how stable it has to be, but luckily we're certainly still some time away from that), some of the systems we're currently using to achieve exactly these properties will break.
So post-quantum cryptography is cryptography with an attack model where the attacker has a quantum computer.
That doesn't tell you anything about the defenders or like the normal users.
You can't expect people to have a quantum computer in their basement.
I mean they will have their normal cell phones, they will have their normal laptops but the attacker will have a quantum computer and there are lots of scenarios to consider.
One is record-now-decrypt-later, where, say, some entity like certain three-letter agencies with big ears is listening to everything you're saying. You might think you can stop them by using cryptography; that's what we're doing right now. Then they can't read everything we're sending. Well, if it's properly encrypted they can't, but they can store our current communication and then, once they have a quantum computer, actually decrypt what they captured now.
So post-quantum cryptography deals with such scenarios: finding out how expensive it is to break our current cryptography. That's one direction, quantum cryptanalysis.
It also means finding systems that resist attacks by quantum computers, and then of course the whole spectrum of analyzing their security under classical attacks and quantum attacks, as well as how fast you can implement them, whether they fit your requirements, and so on.
So it's basically the whole spectrum of cryptography but the attack model has changed.
Yes, that sounds really awesome: post-quantum cryptography to help us in the face of quantum computers.
But let's maybe talk in more detail about the threats of quantum computers.
We hear a lot about Shor's algorithm and Grover's algorithm, so I don't know if you want to talk a little bit more about what those two are.
Yeah, so those are the first two big algorithm categories that we took note of in this area.
So when I say quantum computers will break things, it's of course not that the mechanical instrument, the quantum computer, can do anything by itself; it is an algorithm running on the quantum computer that does it.
And so I have to go into a little bit of detail about what these systems are based on in order to explain how the attacks come in.
So when we currently communicate, there's something called a handshake. We're starting to talk, but, well, while we personally have met in real life, our computers haven't; we don't have any shared secret that we can just use.
And so there's some magic thing called public-key cryptography, where we're using the hardness of mathematical functions as a kind of one-way function.
So for instance it's easy to multiply two numbers but it's harder to split a number into its factors.
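This asymmetry between multiplying and factoring can be sketched in a few lines. This is a toy illustration only; real cryptographic moduli are thousands of bits, and real factoring attacks are far smarter than trial division:

```python
# Toy illustration of the factoring one-way function: multiplying two
# primes is one cheap operation, while recovering them by trial division
# takes time that grows quickly with the size of the smaller factor.

def multiply(p, q):
    return p * q  # easy direction: a single multiplication

def factor(n):
    # hard direction (naive trial division); real attacks are smarter,
    # but still super-polynomial on classical computers
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

n = multiply(101, 113)
print(factor(n))  # -> (101, 113)
```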
So that's the integer factorization problem and that is one of the big classes of algorithms or problems that our current systems are based on.
The big thing to look at is the RSA cryptosystem, and unfortunately Shor is very powerful against this.
So Shor's algorithm is an algorithm to find the period of a function.
So, well, something that comes back to the same value, like the clock comes back after 12 hours or the year comes back after 365 days.
We know lots of periodicity and for those we actually know how long the period is.
And for factorization, well, the quantum computer can determine the period; that is something a quantum computer is, unfortunately, very, very good at.
And then Shor found a way of how to use this period finding to break the integer factorization problem.
There's another big class of cryptosystems based on the discrete logarithm problem, and for those, too, Shor showed how to turn breaking the system into period finding.
So Shor's algorithm finds periods and if your cryptosystem can be broken by something which can translate to period finding then the quantum computer will break it.
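The reduction from factoring to period finding can be sketched classically. In the toy below, the period finding is done by brute force as a stand-in for the quantum step; the surrounding arithmetic is the standard classical post-processing of Shor's algorithm:

```python
from math import gcd

def find_period(a, n):
    # Stand-in for the quantum step: smallest r with a^r = 1 (mod n).
    # A quantum computer finds r in polynomial time; we brute-force it.
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    # Classical post-processing: turn a period into a factorization.
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None               # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry
    p = gcd(y - 1, n)
    return p, n // p

print(shor_classical_part(15, 7))  # period of 7 mod 15 is 4 -> (3, 5)
```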
You also mentioned another class of algorithms. I mean, both of those are due to particular people, Peter Shor and Lov Grover; they invented these algorithms, but we now also use them in a bigger context.
So Shor's algorithm has been extended to other classes of cryptosystems: anything that's based on discrete logarithms, anywhere.
Be it elliptic curves, hyperelliptic curves, or some computations in number fields.
And Grover's algorithm has also been extended to many applications. What it basically does is: if you have an efficiently computable function, you can find a preimage.
So assume you have something which is typically one, and there's just one chance among the many, many inputs where it's zero.
Think of, well, you know what the output of a hash function is, or a password, or something, a short thing.
Okay, well, you take this function minus that target value, which gives you a zero only for that one particular input, and then Grover's algorithm gets you a speed-up in finding this preimage.
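A minimal classical picture of such an oracle, assuming a SHA-256 preimage as the search target. Classically you must try on the order of N inputs; Grover would need only about the square root as many oracle calls, each call being a reversible circuit for the function:

```python
import hashlib

# Grover-style predicate: the oracle is 0 only at the sought preimage
# and 1 everywhere else.
target = hashlib.sha256(b"42").digest()

def oracle(x):
    # 0 exactly when x hashes to the target, 1 otherwise
    return 0 if hashlib.sha256(x).digest() == target else 1

# Classical exhaustive search over a toy input space.
for i in range(100):
    guess = str(i).encode()
    if oracle(guess) == 0:
        print("preimage found:", guess)  # -> b'42'
        break
```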
Both of these give you speed ups. The effect of Shor is a lot more devastating than the effect of Grover.
So with Shor's algorithm: computations we currently consider exponential time, or at least super-polynomial (integer factorization at the moment we can do in sub-exponential time, but nobody has anything better than super-polynomial for the discrete log problem on elliptic curves, for instance; not even a sub-exponential algorithm is known there).
And both of those get polynomial complexity on a quantum computer, so a lot, lot faster, which means you can't outpace it.
So if you double the parameters, you're making your own work somewhat harder, but you're also making the attacker's work harder by about the same amount.
There is no difference in the complexity class. Now, with Grover's algorithm, when I say it's not as devastating: the attack still remains exponential, but you're changing the exponent.
So something which would take 2 to the 128 searches in principle, well, it takes 2 to the 64, so half the exponent up there, computations, but those computations are more expensive.
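Halving the exponent, not the count, can be made concrete:

```python
# Grover halves the exponent, not the count: a search that takes 2^128
# classical tries needs about 2^64 quantum oracle calls -- still an
# astronomically large number, and each call (a reversible circuit for
# the function) is far more expensive than one classical evaluation.
from math import isqrt

classical_tries = 2 ** 128
grover_calls = isqrt(classical_tries)   # square root = half the exponent
print(grover_calls == 2 ** 64)          # True
```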
So it is never clear whether Grover gives you much of a speed up.
I mean in theory it does if you count just how many calls to something you're doing, but these calls are a lot more expensive.
But still, when we're defending against quantum computers, there's always this feeling that you'd better be on the safe side. So we're designing something where we want to make sure that a quantum computer would take at least 2 to the 128 operations, a gigantic number, and 2 to the 128 evaluations of some big function, not just bit operations.
Yes, that's very interesting, and it leads into the next question: as we know, the National Institute of Standards and Technology of the United States is currently trying to standardize post-quantum algorithms and to set their security parameters, what they should actually achieve.
So what is this process that NIST is currently trying to put together?
Yes, so NIST has a good history of running cryptographic competitions; it also has some bad history of designing standards without public input. There were documents in the Snowden archive in particular which showed that NIST, either knowingly or, well, unknowingly, had published some weakened algorithms.
With the competitions there is more trust from the community, so people from all over the world have submitted.
The submission deadline was the end of 2017. Before that there was already a two-year process: in 2015 they said, well, we are interested in running a competition, is there community interest, and there was a lot of community interest.
Then in 2016 they announced that they would run the competition, and we were asked for more input on what they should put in the call, and then the deadline was the end of November 2017.
82 is the number of submissions they said they received; the public only ever saw 69 of those, and this was a big mix of things.
So there were some systems which had been known for many years as, well, probably safe against quantum computers, really well-studied algorithms from post-quantum cryptography, but there were also some which were, well, rather ad hoc designs.
NIST says they even broke some internally, but the designers didn't fully accept that. Okay, well, everything which was proper they put online, and then over the next four years people have been analyzing those.
There were a bunch which got broken.
I mean, I freely admit that we had a lot of fun that year. So eventually they posted the submissions on the 21st of December.
I do remember that. I was still in the office when it came out, and one of our students, Lorenz Panny, came over saying, hey, there's a submission package online. So, well, we spent a few moments scrolling through, and we're like, okay, it's only 69, wasn't there a promise of more? And then we took a train home, and by the time we got home, we already had some emails from Lorenz, because he had broken the first schemes and was asking for feedback on his notification letter. Because the trains didn't have proper Internet, the notification took three hours after publication, but the break was within the first half hour, I believe. So that is kind of one extreme.
You look at it, and it crumbles. And then there are other schemes which are essentially as strong as they were at submission time, and there's a whole spectrum in between. So about half of them lost security compared to what they were submitted as, but it's not necessarily visible because of the rather coarse-grained nature of the security levels. You mentioned that NIST asked for different levels; they defined only five, and then, depending on how the designers felt, there might have been a little over-designing. The definition of these levels is by comparison to some other algorithms: there's an algorithm called AES, which is a symmetric system, and they said, okay, level one is equivalent to AES-128 on a quantum computer or on a regular computer. And then, well, two things happened.
AES on a quantum computer got less secure, so in that sense people had an easier time meeting that threshold. But also, I mentioned before that with Grover, for instance, you have that many calls to a function; if you design your system to have about that many calls and you account for each function call costing one, well, you're still at security level one, even though your function doesn't actually cost one. And some systems which got weakened then started accounting for things like the cost of memory, or how expensive one such function call is, and so on. So not too many design teams changed the security levels or the parameters, though some did. But when I say about half of them lost security, I mean from what we would have assumed at the end of 2017 the attack complexity was, down to what it is now; that has changed. And some got broken really badly, like the one within the first half hour, and we had a few more of those.
We gave a talk at the end of that year at CCC, and we already had something just short of 10 broken schemes. So, as I said, lots of fun over Christmas.
And also, just recently, one of the schemes that was proposed as a finalist of the third round got broken, so there's still a lot to do on security.
Another thing: maybe it's going too fast. I mean, it's going both too fast and too slow. As I said at the beginning, there's the collect-now-decrypt-later threat. That is a sign that if you have anything that is sensitive, whose security is still relevant to you in, say, 10 years, you should actually be using post-quantum cryptography now.
You shouldn't just use elliptic curves, you should use a combination of elliptic curves and one of these post-quantum algorithms.
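One common way to build such a combination, sketched here under the assumption that the two key exchanges have already produced their shared secrets, is to hash the secrets together, so the session key stays safe as long as either layer holds:

```python
# Hedged sketch of a hybrid key-exchange combiner. The two secrets below
# are placeholders (random stand-ins) for the outputs of a real
# elliptic-curve exchange and a real post-quantum KEM; the point is only
# the combining step at the end.
import hashlib
import os

ecdh_secret = os.urandom(32)  # stand-in for the elliptic-curve shared secret
pq_secret = os.urandom(32)    # stand-in for the post-quantum KEM shared secret

# Hash both together: an attacker must break BOTH layers to learn the key.
session_key = hashlib.sha256(ecdh_secret + pq_secret).digest()
print(len(session_key))  # 32
```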
But at the same time, yes, there's this feeling that it's going just too fast, because, well, just a week ago there was still a devastating break, which meant that security level one, which is not super secure but should be unreachable, well, the attacker managed to break those parameters over a weekend on a laptop.
That's not secure.
So of course, you can upgrade the parameters and so on. It's not that the idea per se is broken, but it showed that we don't really understand what attacks are possible there.
And that's a sign of the system not being mature enough or our understanding not being mature enough.
That's worrisome. Yeah.
And another thing that has come to the mind of many people is that we also don't know exactly what other capabilities quantum algorithms will have, or will eventually have.
So is there some security margin that people took into consideration on the quantum side of things?
So I'm less worried about that question, actually.
So there is broad agreement on what quantum computers can do.
I mean, there are like three different definitions of what a quantum computer can do.
And those three definitions coincide, so it doesn't matter which one you pick.
The closest to physics would be: a quantum computer is a computer which can simulate everything happening in nature, all those processes.
And then there's another one which is closer to, say, electrical engineering or computer science, where the definition is by a set of gates.
So a quantum computer is one where you can do these following computations.
And it's a smallish gate set. So it's similar to what you're doing on a normal computer.
You have your addition, you have your multiplication.
They are slightly different from how we do these now; one big change is that everything has to be reversible.
And then, in addition to those gates with classical analogues, you also have one which is called the Hadamard gate, and you have these phase-change gates and so on. So there are some extra things.
But if you know the gate set, you can design algorithms that use those and then optimize.
So it's not that we expect much surprise what the quantum computer can do.
Depending on the technology, the relative costs of these different gates differ.
So you can trade, say, one of those for seven of those; those ratios depend on what technology is actually used.
But it's not really that we're expecting much surprise there.
But then, okay, do we have enough people who have looked at this?
That is a bigger question. There could be some cool algorithms.
I mean, we just had an example of a non-quantum algorithm being surprising because people hadn't looked at a certain feature, which suddenly gave rise to something.
So it could just be that there is also something that you can do with a quantum algorithm.
And that is always the problem. I mean, with any system, it's only as secure as we know it.
I'm not going to overpromise here. You mentioned physics.
Of course, there is the area of QKD, which I see as an area that is massively overpromising, because they have some statements about being unbreakable and so on, which hold in some theoretical model but haven't been shown in practice, because with all the physical imperfections, well, your nice theory doesn't actually hold there.
So despite my warnings here, I will still stick with post-quantum cryptography, so algorithmic cryptography, to protect the Internet and storage and so on.
Yes. One of the systems that has had a lot of time to actually be carefully reviewed by a lot of people, for a lot of years, is Classic McEliece.
We have talked in the past about lattices and isogenies in certain other parts of this series, but I would like to ask you: what is the basis of Classic McEliece, and how does the algorithm work?
Yeah, I'm certainly happy to talk about that system. Full disclosure: I'm one of the designers of it. I mean, not designers, I should say submitters, because the design goes back to 1978.
So really to the beginning of public-key cryptography.
The whole concept was defined in 1976, and then the RSA algorithm that I mentioned before is from '77.
And then McEliece is, yeah, well, just one year younger.
So 1978 is a long time ago, 40-plus years, and it has really had a lot of attention, as you just said.
The Classic McEliece system, which we submitted to the NIST competition, is a direct instantiation.
So it's using the same system, the same hardness assumption, all the way down, plus some optimizations for implementation and for sizes, but for all of those you can show that they directly relate to the original problem.
Now what's the hard problem behind it? I mentioned integer factorization for RSA.
So for the code-based one, Classic McEliece, it's based on codes, and that's not codes as in secret codes, but codes as in error-correcting codes.
So when you're transmitting information, there's always some redundancy added in order to ensure that you can actually decode it.
This is, for people as old as me: I remember when CDs came out, it was magic that you could have a scratch on your CD and it would still play.
People were showing that you could take a big black marker, put a mark on there, and it would still work.
And so this is the magic of error correcting codes. And of course, for all these applications, you use codes that are efficiently decodable.
So something where if there's an error, you can get it back nicely.
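The simplest worked example of an efficiently decodable code is 3-fold repetition, where a majority vote undoes a single flipped bit, the scratch-on-the-CD effect in miniature (a toy sketch, not a code anyone would deploy):

```python
# Toy error-correcting code: a 3-fold repetition code corrects any single
# bit flip per block by majority vote.

def encode(bits):
    # repeat every bit three times
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # majority vote over each group of three received bits
    out = []
    for i in range(0, len(coded), 3):
        chunk = coded[i:i + 3]
        out.append(1 if sum(chunk) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] ^= 1                   # a "scratch": flip one transmitted bit
print(decode(sent) == msg)     # True: the error is corrected
```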
But there's also theory saying that if you pick a random code (okay, this would need a lot of detail, but if you don't design it in a particularly nice way), then it's actually not easy to decode.
Then it's seen as a hard problem to get the errors out.
And so what the McEliece cryptosystem is using as its hard problem is this difference between a nice-to-decode code and a general code.
Okay, so what's behind it is that we start with a nice code, and that's your secret.
And then we transform it into something which looks general.
And so the hard problem is to, well, either decode this general-looking code or figure out what we used as the transformation between these two.
And for the particular code that we're using, the binary Goppa code, which is the same that Robert McEliece suggested in 1978, no efficient way to undo the transformation is known.
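The key-generation idea can be sketched with toy matrices over GF(2). These are purely illustrative sizes and a made-up small code; real Classic McEliece uses large binary Goppa codes:

```python
# Toy sketch of the McEliece trapdoor: the public key is a scrambled
# generator matrix G' = S * G * P over GF(2). The holder of S, G, P can
# strip the scrambling and run the fast decoder; everyone else sees
# something that looks like a random code.

def mat_mul(A, B):
    # matrix product over GF(2)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

G = [[1, 0, 0, 1, 1],    # secret: generator of a nicely decodable toy code
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]
S = [[1, 1, 0],          # secret invertible scrambling matrix
     [0, 1, 0],
     [0, 1, 1]]
perm = [3, 0, 4, 1, 2]   # secret column permutation
P = [[1 if j == perm[i] else 0 for j in range(5)] for i in range(5)]

G_pub = mat_mul(mat_mul(S, G), P)  # public key
print(G_pub)
```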
Yes. One thing, of course, that some people have pointed out about McEliece is the size of its keys, which might be unfeasible for use in the real world.
But I know that you, together with Daniel J. Bernstein, wrote a paper about McTiny. What kind of applicability in the real world could there really be?
So what are your thoughts?
Thank you. So, yeah, there is one concern. These error-correcting codes typically use just bits, zeros and ones, which is nice for hardware implementations.
However, the public key, the thing that I would need to put on the Internet and you would need to download if you want to send a message to me, is about a megabyte for the highest security level proposed.
Now we think this is quite reasonable because it gives you a very solid system.
You can go down the security level to something like a quarter of a megabyte and that's still probably more secure than anything else out there.
But if you're dealing with a megabyte on today's Internet... I mean, we're currently live-streaming this, which is a lot more traffic than downloading one McEliece key.
Each of the photos that you're casually snapping somewhere and sending to your friends on your messenger is easily a megabyte.
So we are used to that data size.
It's not really a problem. But the concern that we had in the McTiny paper is that when you're opening yourself up to receiving megabytes from random people, they could try to use this as a denial-of-service attack.
So if you're in a scenario where you're the concerned server, like Cloudflare, and you might have clients sending you a megabyte of key saying, hey, want to talk? Then how do you do this in a way that doesn't flood your memory?
And in particular, this is a denial-of-service attack that is hard to counter.
It's not just about bandwidth (I know Cloudflare has a lot of bandwidth); it's targeting the memory.
And so what we're doing there is we are chopping up this public key into small chunks.
It's the sender of the public key who is responsible for getting those pieces out.
And the server doesn't have to allocate any memory.
So the server is getting a small chunk, just the size of an Internet packet.
And, well, you have to be able to handle Internet packets anyway. You never need more than that size.
You do a computation on it, and you send it back. The reason this can work is that each packet includes a cookie.
So the server basically encrypts something to itself for later, and then it's the client who wants to send something who has to, well, provide those cookies again.
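The encrypt-to-yourself cookie idea can be sketched as follows. This is an assumed construction using only Python's standard library (HMAC plus a hash-derived pad), not the actual McTiny primitives; the point is that the server keeps no per-client state:

```python
# Minimal sketch of a stateless server cookie: partial protocol state is
# sealed under a key only the server knows, handed to the client, and
# verified when echoed back. Nothing is stored server-side.
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)  # known only to the server

def seal(state: bytes) -> bytes:
    # sketch caps state at 32 bytes (one SHA-256 output used as pad)
    nonce = os.urandom(16)
    pad = hashlib.sha256(SERVER_KEY + nonce).digest()[:len(state)]
    ct = bytes(a ^ b for a, b in zip(state, pad))
    tag = hmac.new(SERVER_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_cookie(cookie: bytes) -> bytes:
    nonce, ct, tag = cookie[:16], cookie[16:-32], cookie[-32:]
    good = hmac.new(SERVER_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise ValueError("forged cookie")
    pad = hashlib.sha256(SERVER_KEY + nonce).digest()[:len(ct)]
    return bytes(a ^ b for a, b in zip(ct, pad))

chunk_state = b"chunk 7 of 1024 processed"
cookie = seal(chunk_state)             # sent to the client, not stored
print(open_cookie(cookie) == chunk_state)  # True
```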
And with that, it's actually pretty practical.
Of course, you still have to send the megabyte, so it takes several round trips, but you are secure against that attack.
And then, well, in many situations, many scenarios, you don't actually need a fresh key each time.
Maybe the first encounter with cryptography for many people, if you actually care about cryptography, is the PGP system, where you get the public key of somebody, and then you have it.
Okay, maybe there is a one- or two-year timeout, and after a year you have to download another megabyte.
But if you're in a situation where the keys are stable, then that is not a problem at all.
And we're currently working on a system called PQConnect, where we're using the Classic McEliece system as the identification key.
So not a signature system, not to sign, but an encryption system.
And we are the only party who can decrypt it, so you know you're talking to us if we can answer your packets.
And so this is another system where you will be getting the key from the server once.
If it's some server you're talking to a lot, then you have this one megabyte amortized over a lot of connections.
And then, with the security of the Classic McEliece system, we are okay with also using some other post-quantum systems based on lattices, which then have shorter-term keys.
But the way we're doing it is actually different from most other systems.
We have the McEliece system on the outside, then we have an elliptic-curve system, and only then we have the lattice-based system.
So an attacker who wants to get through would first need to break the Classic McEliece system, which we consider very, very secure, both against our current computers and against quantum computers.
If there's anything overlooked there, well, at least until the quantum computer arrives, we have the elliptic curve as well.
And then on the inside, that's where we have the lattice systems, which are faster for one-time or short-term keys.
And so in those scenarios, I think Classic McEliece is easily applicable.
And then yeah, mixing it with something else for faster key exchange is useful.
You're muted, I'm sorry.
Oh, I'm muted. Yes, sorry. So there's also a very interesting paper about how to use Classic McEliece with WireGuard for the long-term operations, by Hülsing and coauthors.
Okay, as we're coming to the last minutes, my last question was going to be: what excites you about cryptography?
There are too many things which are exciting, and exciting is not what you want to hear from a cryptographer.
Now, what I find interesting: I like playing with isogenies.
I don't think it's ready for primetime. There's still some concerns. But if you pick it together with other systems that are more stable, then I think you can get benefits from there.
And it's mathematically very interesting. It has a very interesting profile.
And the other thing is actually getting it deployed. So this last thing, PQConnect, that I mentioned, is something we're trying to roll out pretty soon, which would be a way for normal people to actually get the benefit of post-quantum cryptography before some rather slow committees start moving.
So it's something that servers can roll out (so we should talk), and that clients can then just install as a browser extension or as a kernel package, so that they have a secure connection.
And then it's basically hooking onto what these VPNs are normally doing, but it's not a VPN.
It's really all the way to the server.
So it's not something where you just tunnel to one party you know, and then they still send it out unprotected against quantum attacks; it goes to whoever is willing to run it.
So for instance, if Cloudflare would be hosting it, then everybody who is connecting to a Cloudflare-hosted site would automatically get these benefits.
And similarly, if Google would start hosting it, then, well, there would be a secure tunnel from the end user to there.
And that's something I find exciting, and I hope, well, we see deployment.