Cloudflare TV

🔒 Harnessing chaos in Cloudflare offices with LavaRand & drand

Presented by Cefan Rubin, Luke Valenta, Thibault Meunier
Originally aired on

Welcome to Cloudflare Security Week 2024!

During this year's Security Week, we'll make Zero Trust even more accessible and enterprise-ready, better protect brands from phishing and fraud, streamline security management, deliver dynamic machine learning protections and more.

In this episode, tune in for a conversation with Cloudflare's Cefan Rubin, Luke Valenta, and Thibault Meunier.

Tune in all week for more news, announcements, and thought-provoking discussions!

Read the blog posts:

For more, don't miss the Cloudflare Security Week Hub


Transcript (Beta)

Hey everyone, welcome to Cloudflare TV. Today we're here with Cefan and Luke to better understand and dive deep into their blog, Harnessing Chaos in Cloudflare Offices.

So first of all, I will let Luke and Cefan introduce themselves. Luke, maybe you can start.

Yeah, hi, I'm Luke. I'm a research engineer on the Cloudflare research team.

And I'm Cefan, coming to you from Lisbon, same, a member of the research team.

And maybe an introduction for myself as well. I'm also a research engineer on the research team, coming out of Europe.

I've been working at Cloudflare for some time on various cryptographic topics.

These would be Privacy Pass, LavaRand, and others.

And maybe we'll start with a better understanding of what the title is.

So Luke, can you explain to us what exactly Harnessing Office Chaos is about?

What's the high-level topic of the blog? Yeah, so for those of you who've ever visited a Cloudflare office or seen pictures online or anything, in Cloudflare's San Francisco office, we have a wall of lava lamps.

And in some of our other offices, we also have these entropy displays.

So in this blog and in this segment, we're going to talk a little bit about how we actually harness the entropy from these displays and feed it into Cloudflare's systems so it can help to secure the Internet.

Okay, so it's very much a security feature that we have.

I think Cefan, you had some more details in terms of how it works and how it came to be, because it's very pretty, right?

Sure. Well, and also just at a higher level, the why.

Why would we want to be using chaos? Luke already mentioned it's used to secure the Internet.

But how on earth could something that is this chaotic, reusing that word, and I'll change that word now to unpredictable, be useful?

And it really comes down to the notion that secure communication between partners requires that someone in between cannot guess what secrets they're using. If there's some algorithmic method by which an adversary can track or predict what that secret might be, they may be able to insert themselves into that communication.

And so what we really look for is something that is really unpredictable.

And it turns out that most things that we create in the land of mathematics turn out to produce patterns.

And patterns are a form of predictability.

Whereas when it comes to natural phenomena, and there are a variety of them, these entropy walls that Luke just mentioned and Thibault referred to help us get connected to natural phenomena that cannot be predicted in the same way.

It all seems so chaotic to our eyes, to our inherent impressions, that what we derive from it and how we harness it for randomness gives us something that cannot be predicted by an adversary.

And hence that's why we follow this path: not because they're pretty, but because they give us a means of harnessing something that is close to truly random.

And then to follow on to the rest of Thibault's question, we've been using LavaRand for a couple of years; there have been established blog posts about it.

It's certainly something that has piqued many people's interest.

It seems fascinating that something so pretty can produce results that are so important to security.

And what we did recently is take what was an existing system and introduce further sources of entropy.

So our Cloudflare offices: the San Francisco one has those lava lamps.

But we have other kinds of entropy displays. And what we did recently was add the double pendulums that we have in our London office.

And you can, you can see pretty pictures of that in the blog post that accompanies this segment.

And the turning mobiles, beautiful, beautiful patterns of colors that we see at our Austin office, similarly produce a random image that can be and is a source of LavaRand's entropy.

So just to clarify my understanding: you're describing physical objects, like lava lamps with bubbles that float in a liquid in the glass and get heated by the lamp.

Or, as you described, double pendulums, which are sticks that move randomly, or other displays, like in Lisbon.

How exactly do we go from this kind of natural phenomenon, from physical observations, to securing the Internet?

Because it seems a bit of a stretch, if you think about it. Luke, you're okay with me starting that?

And then you do the next bit? Yeah, please.

Yep. Okay. So the piece that's interesting is that we have cameras pointed at these displays in our various offices.

And really what's happening is that, periodically, a picture is taken by one of those cameras of that display, of the beautiful random instance that is represented, including the shadows and any other pieces that are captured in the image.

And we condense what is effectively image data down into a compact representation that is called a hash.

And something that's special about hashes is, and you've probably come across them when you've downloaded files from the Internet, often there's an accompanying hash to ensure that what you've downloaded turns out to be the thing you expect.

It turns out that hashes have this property: a very small change in the original data, that comparatively large image data, will produce a very different hash.
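To make that concrete, here is a rough sketch (not LavaRand's actual code) of that avalanche effect: hashing two inputs that differ in a single character produces two digests with no visible relationship.

```ts
import { createHash } from "node:crypto";

// Two "frames" that differ by a single character stand in for two camera images
// that differ by a single pixel.
const frameA = "lava lamp frame ...0";
const frameB = "lava lamp frame ...1";

const digestA = createHash("sha256").update(frameA).digest("hex");
const digestB = createHash("sha256").update(frameB).digest("hex");

// The two digests share no obvious relationship, even though the inputs almost match.
console.log(digestA);
console.log(digestB);
```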

We take that hash and feed it into what is, again, an algorithmic notion, but a random number generator that is cryptographically secure.

And so we're seeding the kinds of numbers that we get out of that random number generator with this instance of randomness that we got from LavaRand.

And we keep on doing that. Periodically, we keep on doing that.

So there isn't an obvious start point or an obvious end point.

We just keep on feeding into that pool of randomness. Yeah. And Luke is much closer to how that is then connected to our servers and our methods.

Yeah. So once we have this hash of the image in its current state, we combine it with the current state of the cryptographically secure random number generator, as well as some randomness from the system's default random number generator.

And combine this to create a new seed for a new stream of randomness.

And a property of these random number generators is that you can basically produce an endless stream of random bytes from a smaller seed.

And as long as that seed is truly random and unpredictable, then the stream of bytes that you get from the random number generator is going to be unpredictable as well, with the caveat that the adversary has to be computationally limited.

And the algorithm you use actually has to be a cryptographically secure random number generator, which we can't really prove for the algorithms that we use.

But we have pretty high confidence that it's the case.
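A minimal sketch of that reseeding step, assuming hypothetical stand-ins for the camera-frame hash and the generator's previous state; the real LavaRand service is not this code, but the shape is the same: mix the sources with a hash to get a seed, then expand the small seed into a long stream.

```ts
import { createHash, randomBytes } from "node:crypto";

// Hypothetical stand-ins for the real inputs.
const imageHash = createHash("sha256").update("latest camera frame bytes").digest();
const previousState = randomBytes(32);   // stand-in for the CSPRNG's current state
const systemRandomness = randomBytes(32); // the host's default random number generator

// Mix all three sources into a fresh seed.
const seed = createHash("sha256")
  .update(imageHash)
  .update(previousState)
  .update(systemRandomness)
  .digest();

// Expand the small seed into an effectively endless stream of output blocks
// (a toy hash-counter construction, not the production CSPRNG).
function* randomStream(key: Buffer): Generator<Buffer> {
  for (let counter = 0n; ; counter++) {
    const counterBuf = Buffer.alloc(8);
    counterBuf.writeBigUInt64BE(counter);
    yield createHash("sha256").update(key).update(counterBuf).digest();
  }
}

const stream = randomStream(seed);
console.log(stream.next().value.toString("hex")); // first 32 random-looking bytes
```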

So once we have this random stream, we can expose this as an internal API.

So then all the servers at Cloudflare can periodically, I think maybe once a day, query this internal API and retrieve a string of random bytes from LavaRand, which is the name of the system.

And then once they have that random stream of bytes, maybe they get 128 bytes from LavaRand; each machine that queries the internal API will get its own unique 128-byte block of randomness.

And then they mix that into their existing entropy pools.

So on Linux machines, the kernel maintains an entropy pool, which gets fed from various devices.

So I guess anytime you move your mouse on the machine, it will take the very precise timings and feed those into the entropy pool, just because they're unpredictable.

Anytime a packet arrives on the network, the timing of that gets fed into the entropy pool.

So these are all sources local to the server that we're talking about.

But with LavaRand, we're adding an extra, external source of randomness for the servers to mix into their entropy pools.

Yeah. So that's kind of how we augment this.

So if the local entropy sources on the server were to be compromised, then we'd still have this external source of entropy being fed in via LavaRand.
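A rough sketch of what that per-server step could look like. The internal URL is entirely hypothetical (the LavaRand API is not public), and on Linux, simply writing bytes to /dev/urandom stirs them into the kernel's pool without crediting any entropy, which is all that's needed for this kind of defense in depth.

```ts
import { writeFileSync } from "node:fs";

// Hypothetical internal endpoint; the real LavaRand API is not exposed publicly.
const LAVARAND_URL = "https://lavarand.internal.example/seed";

async function topUpEntropy(): Promise<void> {
  const resp = await fetch(LAVARAND_URL);
  if (!resp.ok) throw new Error(`LavaRand request failed: ${resp.status}`);

  // Each caller gets its own block of fresh random bytes (128 in the example above).
  const seed = Buffer.from(await resp.arrayBuffer());

  // Writing to /dev/urandom mixes the bytes into the Linux kernel's entropy pool.
  // The kernel's output stays secure as long as any of its inputs was unpredictable.
  writeFileSync("/dev/urandom", seed);
}

// Run periodically, e.g. once a day from a systemd timer or cron job.
topUpEntropy().catch(console.error);
```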

Okay.

So maybe on that, I have two maybe naive questions. The first is about how the system stays secure: if I go to the Cloudflare office and take a picture at one moment, would this compromise Cloudflare's security?

And kind of a second question, which is similar: it seems that Cloudflare has this nice LavaRand wall of entropy.

And they're putting new gadgets out in the wild, in their various offices, like the London office or the Lisbon office.

Is it something that we could expect within Cloudflare data centers?

As part of the security for that data center, or something that we should have at home to protect our Internet connection?

What's the exact place here? Yeah. So, for the question about somebody taking a picture of the lava lamps and using that to guess what the inputs are to Cloudflare's LavaRand system.

Well, for one, it would be very difficult. As Cefan was describing before, when you take a picture of the lava lamps, even a change of a single pixel in the image will produce an entirely different hash, which gets fed into LavaRand.

So you'd have to get the exact same picture in order to do that, which is a difficulty, but maybe not insurmountable.

But the real reason is that, even if you put the cap on the LavaRand camera, or you disable the camera somehow, LavaRand is only adding extra randomness to the randomness that's already in the system.

So even if you were able to disable LavaRand or somehow make it so that it doesn't add any new entropy to the system, all of the other entropy sources would still work as before.

So, it's only adding an extra layer on top of something that's already what we consider to be secure.

And Thibault, I'll add to that the second part of your question, about where those displays could appear.

When our computers or our devices connect to Cloudflare, our side is also performing some of the somersaults that are required to produce security.

And so both sides are impacted. As for that source of randomness, operating systems in general have their own sources of randomness.

This really is just Cloudflare's addition to what are considered industry standard sources of approximations of true randomness.

So modern chips have this built into them.

There are obvious questions about that.

But what I think I'd love to drive towards is, we've talked about, excuse me, we haven't talked about public randomness yet.

We've talked about private randomness.

We've talked about the kinds of things that really we expect to stay secret, where the security of the communications requires that those things stay secret.

If they're not secret, then, then all bets are off about the guarantees.

But what to me is more interesting, and this is something that Thibault, you yourself have done something quite special with recently, is ways in which we could verify that the sources of our randomness are actually dependable, that you could actually trust them, that someone isn't just making the output look indistinguishable from a random, chaotic production of numbers when in fact they have a secret way of producing exactly the thing they want.

And on cue, they could put those lotto tickets in and those numbers would be drawn, and they'd say, oh, surprise; it looks to everyone else as if it's random, but it isn't.

And so the thing I'd love us to pivot towards now is: we've discussed this private version that is indeed securing the Internet, but what other ways are there in which randomness can be used and promoted in applications, such that others can verify that, in fact, this is fair?

This really is random. Thibault, do you want to get us started with some of this?

Yeah, I mean, definitely. Cefan, thanks for wrapping up the first section and moving on towards public randomness and what we're going to discuss.

And I think it's quite interesting, the difference you mentioned between private randomness, which should be kept secret, and public randomness, which, while being random, is verifiable; they both have different use cases.

And maybe to transition from the first section to the new one: Cloudflare uses LavaRand in a private way, but is there a way for people to use it, to tap into an API or something that people could just go to and use in their own programs?

Because they might not have their own LavaRand at home, or maybe they do, but most people won't have it.

So is there a way to access it and benefit as a developer from the system Cloudflare has built?

Yeah, so as we described, every one of Cloudflare's servers calls the internal LavaRand API, retrieves fresh randomness from LavaRand, and incorporates it into its own entropy pool.

So that means that all of the random numbers generated on each server are impacted by LavaRand; they are using randomness from LavaRand.

So this means that with any random number you get from a Cloudflare server, you're actually getting some randomness from LavaRand.

And so one way to retrieve some private, secret randomness from Cloudflare would be, for example, using the crypto.getRandomValues JavaScript function on top of Cloudflare Workers.

So we'll describe this a little bit more in the blog, how you can do that, or how developers can do that.
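As a sketch of what that looks like for a developer, the Worker below returns fresh random bytes using the standard Web Crypto API; the point is just that crypto.getRandomValues in the Workers runtime draws on the machine's entropy pool, which LavaRand has been mixed into.

```ts
// A minimal Cloudflare Worker (module syntax) returning 32 random bytes as hex.
export default {
  async fetch(_request: Request): Promise<Response> {
    const bytes = new Uint8Array(32);
    crypto.getRandomValues(bytes); // Web Crypto API, available in the Workers runtime

    const hex = [...bytes].map((b) => b.toString(16).padStart(2, "0")).join("");
    return new Response(hex, { headers: { "content-type": "text/plain" } });
  },
};
```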

And yeah, so as we've been discussing, there are two main types of randomness that applications will use.

There's private randomness, which you'd use for passwords, cryptographic keys, maybe randomly generated user IDs.

These are things you want to always keep secret.

And then public randomness, which is stuff that should be unpredictable.

But once it's published, then the random values can be agreed upon by everybody.

So we kind of alluded to lottery tickets being one example of this: the winning ticket should be unpredictable.

But as soon as it's published, everybody can see this number and agree on it.

So we also have a way that you can use LavaRand indirectly.

We have used LavaRand as a seed for a system that generates public randomness, which I think we can talk about in the next section here.

I mean, definitely, that's the point I'm coming to: Cloudflare is doing a lot in terms of public randomness, et cetera.

You mentioned you can use it on Workers, and we use it on our own servers to seed the pseudorandom number generator.

And surely it's not a problem that only Cloudflare has faced.

And I think there was an announcement at some point, a couple of years back, about Cloudflare participating with other organizations on public randomness.

So maybe that's what you're alluding to, right? Yeah.

So back in 2019, let me first set this up. So with public randomness, like lottery tickets, one of the issues is you have to trust whatever authority is producing the random numbers.

Everybody has to trust that the random numbers they produce are actually produced fairly.

So in the lottery ticket example, there have been cases where lottery employees have co-opted the random number generator system for personal gain.

So their friends and family can predict what the winning lottery tickets will be.

And so some authorities, NIST is one good example, produce a public randomness beacon; I think it's every 60 seconds that they produce a random number.

So if everybody- So just before we continue, what is NIST?

Oh yeah. So NIST is the National Institute of Standards and Technology.

So they're famous for standardizing algorithms like AES, Advanced Encryption Standard, and SHA, which is Secure Hashing Algorithm.

So they're a standardizing organization in the U.S. So a lot of people would probably trust NIST.

So maybe their public beacon is fine for a lot of applications.

But I think there are also some people who wouldn't want to trust the U.S.

government's random number generator. So back in 2019, Cloudflare and seven other organizations launched this distributed public randomness beacon project called drand.

And I guess the League of Entropy is the group of organizations involved.

And with the drand protocol, you get this guarantee that if at least half the participants, or some threshold of participants in the system, behave honestly, then you can be sure that the produced randomness beacons are actually randomly generated and nobody can predict ahead of time what they'll be.

And another property is that- I just want to jump in one second just to help frame this for those who heard the lottery example.

It's super useful to explain that when a lottery number is announced, the closest we get to it being verifiable is things like seeing a display on television, where the balls are blown around and land in various buckets, and there's some sort of physical demonstration.

But there really isn't verification in those sorts of circumstances. And there are examples, Luke already alluded to some, of putting your weight on a particular scale in a particular way to get an outcome you want.

Even in those cases, some balls have been found to have been weighted, right?

This is an existing problem with this notion that you're trusting a single entity, those that are running the lottery, to do that number generation fairly, right?

So drand is offering you an alternative, right?

It's a number that came out of this league where all the participants are involved in verifying that it's actually fair.

If it hasn't been compromised, if more than half of the participants have not decided to collude, what you get really is something that you could say is fair, which is getting us, I think, closer to what we as consumers of randomness would expect, but isn't the case in many other places, where it can be sleight of hand.

So sorry about that, Luke, but please continue.

No, absolutely. Yeah. So that's the sort of guarantee that you get from drand: you still have to trust that at least half of the organizations involved are being honest.

And you have to trust that the code is correct, et cetera.

It's all open source. And actually, the drand code base has been audited as well.

So, yeah, you still have to trust.

There's still some trust involved, but hopefully by making it a distributed system, it makes having that trust a lot easier for consumers.

So if you have an application that needs public random values, you should definitely check out drand and the APIs that we have for that.
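For example, the League of Entropy operates public HTTP relays for drand. The sketch below fetches the latest beacon; the URL and response fields follow drand's commonly documented HTTP API, so double-check them against the current documentation before relying on them.

```ts
// Shape of a drand beacon as returned by the public HTTP relays (per the docs).
interface DrandBeacon {
  round: number;       // monotonically increasing round number
  randomness: string;  // hex-encoded public random value for that round
  signature: string;   // threshold BLS signature, verifiable against the group key
}

async function latestBeacon(): Promise<DrandBeacon> {
  const resp = await fetch("https://api.drand.sh/public/latest");
  if (!resp.ok) throw new Error(`drand request failed: ${resp.status}`);
  return (await resp.json()) as DrandBeacon;
}

latestBeacon().then((beacon) => {
  console.log(`round ${beacon.round}: ${beacon.randomness}`);
});
```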

Well, can we set you up to tell us more about ways in which that can be used in applications, or what else can be built upon this public randomness?

Yeah, I mean, definitely. Just to reframe the various trust assumptions of the three systems that have been mentioned.

So for the lottery, you can have, let's say, a lawyer or someone like that who would assess that the draw is valid.

You could use NIST for certain use cases, or you could use drand, depending on what your trust model is, I imagine, and also how fast you want it, or what discussions you might have, because sometimes it might be easier just to go through a normal assessor.

And maybe to get back to Cefan's question, what are the use cases beyond lotteries?

Because we've discussed lotteries a lot.

One of the interesting use cases of having cryptographically computed randomness is that you might be able to reuse it in other cryptography-based protocols.

Because of course, if you have a set of balls, ten balls, and you draw one, it has some mathematical properties and physical properties, but it's very hard to derive known and secure cryptographic properties from it.

Whereas if your mechanism to draw the ball is just some numbers and some cryptographic primitives, it's easier to build on it and feed it into other protocols.

And one of the other protocols which is particularly interesting is time-lock encryption.

The idea of time locking is that you have a message that you know now, and you want people to be able to know it in 10 minutes, or in one hour.

And the principle of the encryption is that this bottled message needs to be secure, so it needs to stay secure over time.

And one of the things you can do with public randomness is actually some kind of time-lock encryption.

The idea being that while the value is random, the cryptographic protocol and the values that sustain it have some sort of predictability, in a way.

It's very weird to state it like that: it's random, but somehow predictable.

It's kind of like if I say the next draw for a lottery will be a number between one and a hundred, because it's always between one and a hundred.

So these are kind of the rules of the system that we know. And knowing that, we can use some properties of the number that will be revealed at a certain time.

And knowing properties of those numbers allows us, right now, to take a message and seal it, so it can only be revealed by a number that will be published in the future.

And so that's one of the really cool applications, I think, of public randomness.

Once again, it might not make sense just talking about it, so go have a read of the blog; it puts in some details and some examples, and you can use it.

And that's definitely a very nice application, I think, of public randomness.

We have seen one example of this, of time-lock encryption being used for a vulnerability disclosure, for example, where the vulnerability report was written ahead of time, and they published the encrypted block and said, okay, on this date, this vulnerability is going to be made public to everybody.

So I guess that's a way to put pressure on the developers and the maintainers of the system to actually fix the problem, which I'm not sure I recommend, but it's certainly an interesting way to use time-lock encryption.

And I guess another thing to note is that not every public randomness beacon has the specific mathematical properties needed to build time-lock encryption on top.

But the drand protocol has been designed in a certain way so that it does have these properties.

So we recommend you read the blog to learn more about that.
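The "random but somehow predictable" part is the round schedule: given a chain's genesis time and period, anyone can compute which round number will exist at a future moment, encrypt a message to that round with a time-lock library, and decrypt it once that round's beacon is published. Below is a sketch of just the round arithmetic, with illustrative genesis and period values; the real values come from the chain's info endpoint.

```ts
// Illustrative chain parameters; fetch the real values from the drand chain's info
// endpoint rather than hard-coding them.
const GENESIS_TIME = 1_595_431_050; // unix seconds when round 1 was published (example)
const PERIOD_SECONDS = 30;          // seconds between rounds (example)

// First round that will be published at or after `when`.
function roundAt(when: Date): number {
  const elapsed = Math.floor(when.getTime() / 1000) - GENESIS_TIME;
  return Math.max(1, Math.floor(elapsed / PERIOD_SECONDS) + 1);
}

const oneHourFromNow = new Date(Date.now() + 60 * 60 * 1000);
console.log(`encrypt to drand round ${roundAt(oneHourFromNow)}`);
// A time-lock encryption library would seal the message to that round; once the
// round's beacon is published, anyone holding the ciphertext can decrypt it.
```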

Thibault, is it worth mentioning something about Filecoin, that drand has other consumers of its public randomness, that it isn't just these somewhat more esoteric research-based topics? There really are use cases in industry that can benefit even now, and, I would say, they also provide motivation for those that support it to keep it running, to keep making sure that what comes out is this verified public randomness.

So regarding users of drand, there are definitely various use cases.

And I think they all come back to the question that we have: should you use an assessor for your protocol?

Should you use NIST?

Should you use something else? And at least certain distributed ledgers have found that having a distributed way to generate randomness for a system, which is otherwise inherently very predictable, was pretty interesting.

And so that's, to come back to your example, what partly secures Filecoin, or is integrated with the blockchain-based ecosystem.

There might be other applications beyond blockchains, but it really depends on the trust model that you have.

And on whether you actually need the cryptographic properties that underlie the generation of these random numbers; time-lock was a good application, which is kind of detached from blockchains.

You might have other applications down the line, and it really depends on what comes in and what your threat model is.

It's very much a cryptographer's answer: it depends on what you need, and you need to assess your own situation.

Yeah. One thing that, at least to me, is the generic, high-level notion that public randomness provides is this: you, as the interested party, publish ahead of time, a priori, what your criteria are going to be, given a random number, right?

It could be heads or tails. If the number ends in an odd digit, it's tails, that sort of thing.

You end up with something where anyone can verify that you followed the rules your algorithm described: anyone can feed the random number that comes from drand into your algorithm, produce the output, and you have something that is verifiable, whether that's the order of lessons in a school or the order of candidates on a ballot paper.

For anything where we're depending on someone's good faith to do the thing that we assume they'll do, which is to be fair in their allocation, those questions can somehow be removed or made more trustworthy.

If, as I say, you publish what your method is going to be, and you let the randomness drive whatever the result is.
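A concrete sketch of that pattern, with a hypothetical published rule: derive a deterministic Fisher-Yates shuffle of a candidate list from the randomness field of an agreed drand round. Anyone who re-runs the same code with the same beacon value gets the same order, so the result is verifiable.

```ts
import { createHash } from "node:crypto";

// Deterministically shuffle `items` from a beacon's hex-encoded randomness.
// Published rule (hypothetical): "candidate order = Fisher-Yates keyed on drand round N".
function shuffleWithBeacon<T>(items: T[], beaconRandomnessHex: string): T[] {
  const out = [...items];
  let seed = Buffer.from(beaconRandomnessHex, "hex");

  for (let i = out.length - 1; i > 0; i--) {
    // Derive a fresh value for each step by re-hashing the running seed.
    seed = createHash("sha256").update(seed).digest();
    const j = seed.readUInt32BE(0) % (i + 1); // modulo bias ignored for brevity
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

const candidates = ["Alice", "Bob", "Carol", "Dave"];
const placeholderBeacon = "ab".repeat(32); // stand-in; use the agreed round's real value
console.log(shuffleWithBeacon(candidates, placeholderBeacon));
```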

Yeah. And please involve your cryptographer in that process, because there are very subtle attacks that can appear when designing cryptographic protocols and random number generation; there have been a couple of CVEs around that.

So involve some cryptographers in that process to better understand the model of what you're using.

Yeah, so I think that's everything we have for this segment.

So, first of all, thank you for watching.

And we encourage you to read the blog post that accompanies this segment and enjoy.
