Can We Ever Be Safe on The World Wide Web?
Originally aired on October 17, 2021 @ 10:00 PM - 10:30 PM EDT
Best of: Internet Summit 2015
Cloudflare's John Graham-Cumming talks about the future of cryptography with Adam Langley, Security Engineer at Google and Richard Barnes, Firefox Security Lead at Mozilla.
- Adam Langley - Security Engineer, Google
- Richard Barnes - Firefox Security Lead, Mozilla
- Moderator: John Graham-Cumming - CTO, Cloudflare
English
Internet Summit
Transcript (Beta)
[Happy music] My name is John Graham-Cumming.
I'm a programmer at Cloudflare and have been at Cloudflare for almost four years now since it was a fairly small company.
And I often get involved in security things, although I like to say that I just play a cryptographer on TV.
I'm not actually really a genuine one. But I have a couple of guys with me who know a lot more about security than I do.
There's Adam Langley, who is a security engineer at Google.
If he's at all a household name, it's for having helped discover Heartbleed, which was a terrifying bug in a piece of software called OpenSSL, which we all use without realizing it, and which kept me up all night in a hotel in Lisbon trying to figure out the impact on us.
And Richard Barnes is the Firefox Security Lead at Mozilla.
And so obviously Mozilla's browser, Firefox, is very important to many people, and its security is very important.
Between them, these two cover a span of our Internet use which goes from the client, the browser itself, all the way across the Internet to the servers.
One of the things that Adam is involved in is something called transport security, which I think is something that we haven't thought about very much, although some people who'd like to understand what we're doing and spy on us certainly have.
So I thought we might start with that. Adam, what is transport security?
Oh, gosh, I'm quite loud. If you consider the chain of things that happens when you load a web page, it goes from the user to a browser on their computer, out across the Internet, to some server somewhere which handles the front end, and then usually to some database elsewhere.
Of that whole chain, there can be problems all along the way.
Right now, people might be talking about Ashley Madison, which was a security issue way at the very end of the chain.
But transport security is getting things from a person's computer to where it's supposed to go, to the website, and getting it there such that other people can't see it, other people can't change it.
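To make that concrete, here is a minimal sketch, assuming Python's standard ssl module and a purely illustrative hostname, of opening a protected connection and inspecting what was negotiated:

```python
import socket
import ssl

hostname = "example.com"  # placeholder host, purely for illustration

# A default context verifies the server's certificate chain and hostname,
# which is what gives us confidence we reached the site we intended.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls:
        # The negotiated protocol version and cipher suite determine the
        # confidentiality and integrity protection applied in transit.
        print("TLS version:", tls.version())
        print("Cipher suite:", tls.cipher())
        print("Peer certificate subject:", tls.getpeercert().get("subject"))
```

The certificate check is what ties the encrypted channel to the site you intended to reach; the negotiated cipher suite is what keeps others from seeing or changing the traffic along the way.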
How was Heartbleed a concern for the public?
How did it get on CNN? I don't often see that sort of bug appear on television, but this was a pretty big deal.
Why was it a big deal? Heartbleed was interesting because it was very bad, but it was not perhaps fundamentally that much worse than bugs that we've had before.
Even just in the same component that Heartbleed was in, a couple of years previously, there was another bad bug.
It wasn't information disclosure, but with it, had you known about it, you could have caused some real problems for big websites, including Google.
It was dealt with quietly, and there was a security release, and there was not much fuss.
What really made Heartbleed different, I think, was that it was marketed.
One of my colleagues, Neel Mehta, discovered it at Google.
During that process, as happens certainly with scientific discoveries and apparently with security vulnerabilities, somebody else co-discovered it.
Security research is not directly compensated a lot of the time.
Much of the time, researchers' currency is reputation and such. These co-discoverers decided essentially to market it, which is not necessarily a bad thing, but it is the first time ever that my mother's known what I do.
Indeed, it blew up to an incredible degree.
It set a pattern for bugs to have telegenic names and logos, and for discoverers essentially to promote themselves through their discoveries, which given that often they don't get paid, might be reasonable.
That was really the new thing about Heartbleed. I think in addition to the branding, the other watershed aspect of Heartbleed is that it brought attention to how much of the Internet infrastructure is dependent on a very few components, OpenSSL being one of them.
You can imagine others like Apache and Nginx, things that are deployed everywhere across the network.
If there's a flaw in one of these, it affects millions of users, thousands of websites.
I think one of the positive impacts of Heartbleed and its marketing has been to draw a lot more attention to those critical pieces of the Internet infrastructure which are often open source and drastically under-resourced.
I think we've seen a lot of benefit from getting more attention to those core projects that run the Internet.
I think that's something the whole industry should continue to keep an eye on.
What's interesting in your two answers is neither of you said what Heartbleed was.
I think I'm not criticizing you for that.
One of the problems with security is it's very, very hard to describe really what's going on.
I think that itself creates a problem, doesn't it, in trying to convince people: you need to pay attention to this thing that I can't actually describe to you, but it's important.
How are we going to do that? How are we going to help people understand the risks they're facing when they're using the Internet?
Using a browser, using an app, which is using some sort of connection.
What can we do to make that better?
The best answer I've been able to come up with for that is to try to build more and more accessible and more and more accurate metaphors.
I was talking to a group of congressional staffers earlier this week, and I brought up Ted Stevens' maxim that the Internet is not a big truck.
It's a series of tubes, which is sort of true, but it's also sort of a big truck.
As Adam brought up transport security earlier, the way a browser protects your web transactions as they go between your computer and the web server that's thousands of miles away, it's very much like an armored truck traveling through an untrusted terrain.
It's reliant on a faithful driver and good locks on the doors, and there's some direct analogies you can draw like that.
It's tough, though, to make sure that those analogies are accurate and lead to correct conclusions.
I never actually understood why the series of tubes statement got ridiculed as much as it did.
I mean, it seemed reasonable enough, and people have said much dumber things in the past and gotten away with it.
I mean, people, for the most part, are not going to change, and security is not going to be getting any simpler.
So my hope is not to have to explain it to people, but to get it right.
Each time we ask a normal person what to do or try to warn them, I'm hesitant to think that people will ever be in a position to make intelligent choices, and I think any system which requires that is flawed.
I think it is important as well to keep in mind that while computer security and information security and transport security involve lots of new technologies that have only been around for a couple of decades, security in itself as a practice is not a new thing.
There's all sorts of pre-electronic practices around securing transactions and securing contracts and things like that.
And so I think President Ives talked about disconnects between communities.
So if we can step out of our bit-oriented, electronics-oriented security community and look at some of the historical ways that security has been achieved, I think there's some analogies that can be drawn to help folks like policymakers and lawyers understand things better.
Well, just to stay on that, to go back to your analogy about armoured cars, I think when I was a child, armoured cars were quite different to how they are today.
They've become more and more secure in different ways, and there are different technologies being applied.
And I'm sure somebody who worked in the armoured car industry would know all of the things that have happened over that period.
But for the person who's outside of a technical domain, it's very, very difficult to keep up.
And right now, we're going through this.
Many of you will have heard of this, and the ones who haven't are going to say, what is he talking about right now?
The SHA-1 to SHA-2 switch in all our certificates. And to a certain extent, we need people to be aware in some way that this is happening, and that it would be better to be using a modern browser or upgraded cryptography.
How can we help them with that sort of thing? I think there's an overall trend across a bunch of aspects of security toward liveness: security is about things that happen in real time.
Someone can transition from being a trusted party to being an untrusted party in milliseconds.
And so security inherently has a real-time aspect to it and inherently, you need to be doing the right thing at the time you're doing the thing.
So I think there's that challenge of keeping the infrastructure up to date, keeping it so that it's doing the right thing given the realities of what we have now, whether that's who the trustworthy actors are in the system or what the technologies are that are trustworthy.
So I think there's that challenge and we'll need to figure out as an industry how we address that.
I've actually got some hints of optimism around this problem, which is rare for me.
Cryptography went through its inflation phase in the 1990s, or at least non-military cryptography did, where suddenly it became possible, lots of things were done, and they were terrible, by and large.
1990s cryptography, we didn't know what we were doing.
Many, many mistakes were made. And we're really still living with that legacy.
And the pains we're going through right now with the various deprecations, killing off SSLv3, killing off SHA-1, are really trying to clear out 1990s cryptography.
So there is, I guess, a metaphor of a race between attackers and defenders where each one has to run as fast as they can to stay still.
But I hope that within a few decades, perhaps once we've gotten the quantum apocalypse done with, it should be the case that defenders fundamentally win this race.
It should be the case that what we're deploying now is not just like what we were deploying in 1990, but, you know, 20 years later.
It really should be the case that we can fundamentally win this. Now, there are many years of pain to go through while we clean up our mistakes.
But I hope that, like I said, in two or three decades' time, cryptography is fundamentally sorted: we're done, it works. It's a transition phase, and we can get through it.
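As a concrete illustration of the SHA-1 deprecation mentioned a moment ago, a short sketch, assuming Python with the third-party cryptography package and an illustrative hostname, that checks which hash a site's certificate is actually signed with:

```python
import socket
import ssl

from cryptography import x509

hostname = "example.com"  # illustrative hostname only

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        der_cert = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der_cert)
# A certificate issued after the deprecation should report sha256 (or
# stronger) here, not sha1.
print("Signature hash:", cert.signature_hash_algorithm.name)
print("Not valid after:", cert.not_valid_after)
```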
So I'm not going to be allowed off the stage unless I mention Edward Snowden at some point, because that's, you know, the rule if we're talking about cryptography.
I've heard quite a few times people say, Snowden changed everything.
Did he change everything, do you think? And if so, and I'm assuming the answer is yes, what changed with his revelations?
I think I was kind of curious to watch the reaction to Edward Snowden in the security community, because to a large degree, those of us who'd been doing security and watching what security risks were present had been saying this for decades.
We'd been saying the Internet can be watched by anybody, and the only news that Snowden brought to the table was that someone was actually doing it.
In that regard, I think it's been a tremendously valuable awareness-building exercise, much more so than anything the security community had done in the past few decades.
I 100% agree with that. I remember, I was here in California in the Cloudflare office when that story broke, and I remember sitting there in the office and thinking, ah, all the things that we said were possible, they were actually doing.
And that was sort of the thing, and I was like, yeah, but what did it change from the public perspective?
Because I do think, to a certain extent, people have got a little bit more awareness of what's necessary to stay secure.
So I think it communicated far better than we've ever managed what we have been saying for all this time.
I had to update my priors. The NSA was much more aggressive than I would have assumed had you asked me the day before it came out.
So I got some things wrong there. I think what happened is that trends that were already going were kinked upwards.
The deployment of HTTPS on major websites has kinked upwards significantly because of that.
It was already going up, but it's very pleasing that it's going up substantially faster now.
I think most security professionals, had you asked them to envision, if I gave you the NSA's budget and this many people, what would you do, I think what came out would be pretty close to what they would have thought about.
So one of the things was that there was no alien technology.
Well, I mean, it's the NSA. We kind of suspected there might have been real alien technology there.
We don't know that we have all of their documents.
True, we don't know there's none. Alien technology is a metaphor.
I think that was significant in that it gave us as a cryptographic community a little bit more confidence that we had a grasp on reality and that there weren't these people off in Fort Meade who knew infinitely more than us and we were just like children to them.
The mathematics seemed to be the same in both worlds. Yes. We had an actual grasp on reality.
And so that's been somewhat comforting, although obviously that's been outweighed by the discomforting parts of the revelations.
I think another impact of this awareness raising is that it's led people to take a look at the network and look for other similar sorts of things.
What else is the network doing to my traffic as it goes through?
Yes, there's the NSA and GCHQ hoovering things up and collecting them for national intelligence purposes, but there's also my mobile provider and my content networks.
What are they doing with my data?
And it's led to the discovery of some things that people are doing and further motivation for kicking up the deployment of these security technologies.
We heard about the case of Marriott Hotels, who were modifying what was happening as people used their Wi-Fi, which seems like a small thing, but it's a good indication that your traffic can be tampered with.
My canonical example is if you fly Southwest Airlines, I have no opinion on whether they're a good airline or not, but if you use their in-flight Wi-Fi and you go to a non-encrypted website, they'll helpfully inject a bar at the top that says, you are this far along in your flight.
And that's really useful, I've found, in talking with people to say, look, they can put things in the webpage.
And they can't with an encrypted website. So what does encryption change? Fundamentally, once we get to a certain level of encryption, there are certain things you're not going to be able to do.
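A small sketch of why that kind of in-flight injection fails against encrypted traffic: TLS protects each record with authenticated encryption, so any modification is detected. The example below uses AES-GCM from the third-party cryptography package as a stand-in for a TLS record:

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)

record = aesgcm.encrypt(nonce, b"<html>original page</html>", None)

# An on-path party flips one byte, e.g. trying to inject a banner into the page.
tampered = bytearray(record)
tampered[0] ^= 0x01

try:
    aesgcm.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    # The browser treats this as a broken connection rather than
    # silently showing a modified page.
    print("Tampering detected: the record fails authentication.")
```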
So Cloudflare, for example, one of the reasons we're building out all these data centers is we need to be able to cache all over the world because if it's encrypted, well, we need to have the encryption endpoints everywhere.
What else is going to change once we get encryption everywhere?
So we do have somewhat of a tension building in that the unencrypted Internet, it was common for the network essentially to play jiggery-pokery with the packets and pretend to be the person you were trying to talk to and actually answer what they would have said more quickly.
And once connections are protected, that's no longer possible.
For both good and for bad, many of these intermediaries were subtle.
When they had problems, it was very hard to diagnose. They caused a lot of problems, but they did save a lot of traffic.
Now, the average website is growing in size by about 15% a year, I think, at least if you believe one of the studies, and websites cater, in terms of the size and weight of their webpages, to first-world consumers who have broadband Internet connections. In less developed areas of the world, where bandwidth isn't as plentiful, there is now a real tension between this in-network caching, which encryption defeats, and security.
It's really troublesome.
Ideally, the webpages would not grow quite as fast, but people want their features, and those features don't come without this extra weight.
It's not clear how that's going to be resolved.
The way I tend to frame this is that security technologies are good at drawing binary distinctions.
If a piece of data is encrypted, if you have the key, you can get at it.
If you don't have the key, you can't.
What encryption doesn't tell you by default is who gets the key and who doesn't.
Who's in, who's out. What we're seeing right now is that in the web, the default answer is that the browser has the key and the web server has the key.
Those two parties are the ones that are interacting and anyone else has to get permission from one of those.
As Adam was saying, historically, before we added encryption, there was this whole network of entities participating, helpfully scanning your traffic for viruses or caching things as they went through.
When you turn on encryption by default, all of those parties are locked out.
They can't do their job. What encryption is doing now is taking this network of relationships that was previously implicit, that wasn't visible, and surfacing it, forcing it to be something that users and websites have to assent to, explicitly agreeing to include these parties in the transaction.
I want to just change gears very briefly to another area to worry about.
Part of being involved in anything to do with security is you have to worry about things and at the same time you have to read the press and not worry quite as much as the press says you should worry.
With the Internet of Things, right now we're being told the sky is going to fall and my car is going to drive itself home and things like that because someone will have hacked it.
How should we worry? What are the things we should be thinking about from a security perspective as we connect toasters, cars, baby monitors, and everything else to the Internet?
To a very large degree, when people ask me questions about the Internet of Things, I struggle to see what the difference is.
The Internet is already a network of things.
We have an Internet of Things already. There are those things in our pockets or on our desks.
Really all we're talking about is smaller things, same I, smaller Ts.
That's not to say there's no difference, but when you look at a lot of aspects of the system, there's not that much difference.
At the cryptographic level, encryption is cheap enough that basically all processors you can buy nowadays can handle it fast enough.
There should be no question of cutting corners at the transport security level.
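A rough micro-benchmark sketch backing up that point, assuming the third-party cryptography package; the exact numbers will vary by machine:

```python
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

payload = os.urandom(16 * 1024)   # roughly one full-size TLS record
nonce = os.urandom(12)            # reused here only for timing; never reuse nonces in real use

iterations = 10_000
start = time.perf_counter()
for _ in range(iterations):
    aesgcm.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start

mb_per_s = iterations * len(payload) / elapsed / 1e6
print(f"AES-GCM throughput: ~{mb_per_s:.0f} MB/s on this machine")
```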
Where things are going to be more interesting is once you get above that level of the basic bits-on-the-wire sort of things.
You look at what information is being collected, how is it being collected, how is that software being assured, and how can people get a grasp on all of this information being collected and how it's being shared and processed and protected.
I suspect the friction we're going to see largely with the Internet of Things will be that the set of manufacturers who build complex technical devices is going to be expanding.
They're going to have to shift their mental model, and it's going to take a while before they get there.
Right now, they have inventory, and they use sales to convert inventory to money.
Apart from perhaps some warranties, that's pretty much how things go.
Now, when they sell a device, they get the money as they always have, but they've also got a liability.
If they're going to be good manufacturers, they've got to be able to update that device.
It will have security issues, inevitably. So they've got to maintain a build environment for it.
They've got to update it, and they've got to understand that these costs need to be built in.
Right now, the typical way to handle these costs is to try and ignore them, sadly.
These manufacturers have also got to go through a transition that many tech companies have over the past decade or so, where we've come to, hopefully, a mutually beneficial relationship with the security research community.
Nowadays, when a researcher reports a problem, the response is not threats and suing, at least, hopefully.
But many manufacturers of these devices have typically made microwaves and fridges and such. Hopefully, they get it right from the get-go.
But history suggests some education will be necessary.
And that, I believe, is the transition that we have to get through.
And maybe once we're there, like you said, things will be much like they are today, just with more devices.
Well, the Internet's always been growing. We're used to that.
And hopefully, they will auto-update, like our best-of-breed software does at the moment.
And things will be much the same, which is not necessarily great.
But it is not the world falling down. Really, I think a lot of what Adam's talking about is the collision of the security treadmill we were talking about earlier, this need to be real-time and up-to-date, and that coming into conflict with industries that have been used to building one thing and having it work for 10 years.
I think a lot of these businesses just don't realize they are actually software businesses now.
That's fundamentally what's happened, where it's crept in and it's taken over everything.
Well, I wanted to ask you one last question, which is, given that you two guys are experts on security, how do you personally walk the walk of being secure?
So what do you do? Is your password Hunter2 by any chance?
What do you do? You see that all I hear is asterisks. Exactly. There you go.
So how do you approach it personally? I am not doing anything particularly outrageous, except perhaps in places.
My password policy is that passwords are randomly generated.
A few sensitive ones are just memorized. The rest are kept in a password manager.
Things like my phone, I don't put anything too sensitive on, so I compartmentalize that way, and maybe it's a little bit inconvenient, but on the other hand, not having corporate email on my phone is actually really quite relaxing.
At home, I have a Linux machine and a Qubes machine, and the Qubes machine is probably the most esoteric thing I have: it's an experimental operating system using lots of virtualization to compartmentalize things, but in general, people shouldn't do that.
Everything else, I think, is relatively attainable.
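A minimal sketch of the randomly-generated-passwords part, using Python's standard secrets module; the length and alphabet here are just illustrative choices:

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    """Generate a password from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each password is unique and high-entropy; a password manager remembers them.
print(random_password())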
My story is basically the same. One thing you didn't mention was two-factor authentication, which is something I've been very aggressive on lately.
I'm finding all of the services I use that will let me turn that on and enabling it.
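For reference, the one-time codes such services generate typically follow TOTP (RFC 6238); a minimal sketch using only the Python standard library, with a made-up example secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time code the way most authenticator apps do."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example secret, for illustration only; real secrets come from the service.
print(totp("JBSWY3DPEHPK3PXP"))
```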
Honestly, I also use a lot of paper. I do a lot of the same compartmentalization strategies using different computing contexts, different machines, or virtual machines for different things.
For some things, I don't put it on the net.
I don't put it in electrons at all. Great. Well, thank you.
I hope we've got a moment or two for some questions. We got anyone in the audience who wants to ask a quick question of these two guys?
Looks like we do.
While the microphone's going there, I thought it was interesting that the Washington Post leaks from the NSC were typewriter documents.
I don't know if that was operational security or something.
They probably just haven't come around to upgrading yet.
That's possible. Well, but keeping in mind that there is nothing new in security, there were typewriter exploits back in the 1970s.
Yeah, absolutely.
When you have entities, government or otherwise, talking about the downsides of encryption, that crimes will go unsolved and things like that, how can you describe the benefits of it to users who don't understand the things that we do?
In simple terminology, a way to get them to do the right thing?
Users are reluctant to upgrade browsers and such, for example.
Did somebody else get the question? Sorry. Could you maybe repeat the question?
A number of entities say encryption's bad, crimes go unsolved, things like that, if the information can't be accessed in the means you described.
But at the same time, people who don't know the situations hear those things, think it's bad, and then users are discouraged from doing those things.
How can we, as people in the industry, bridge that gap? There is a discussion going on right now in many countries about how encryption should be deployed.
I think that's an ongoing political discussion, so I'm somewhat conscious that I should not comment too much.
I think a whole bunch of people have done a really good job in our community at reaching out to governments and trying to evangelize, essentially, what we believe, which is that, overall, this is a good thing.
Matt Blaze, for example, in the Keys Under Doormats paper, was excellent, and we might not see it because it's going on in D.C., more or less, and D.C.
works differently.
But it is happening, and lots of people have been working really hard on that.
I mean, you live in D.C. Do you see more? No, I think that you're on the right track.
I think it's been, in the debate we've been having lately, the contributions from Matt Blaze and from this industry overall have been really helpful, and they've been welcomed by the folks who are trying to figure out what the right policy is.
Users, I think people on the street realize that there's things that they don't want the world to see.
And as we were saying, one of the things that the Snowden revelations revealed was that, indeed, if you were doing things online, it was being seen.
But yeah, I think the conversation in D.C. has really benefited from the engagement that the technical community has done, and more would be helpful.
Is there another? One more question. Okay. Sure. Really quickly here. So there's two separate kind of camps happening in the industry today.
You have the big data crowd, and then with things like Snapchat, you have the no data crowd, where things are ephemeral.
It seems that there's probably some middle ground there. What can you guys say towards helping compartmentalize that?
Because you talked about deliberately not putting things on the wire, deliberately not putting something in a place where you would protect it with encryption, by just omitting the data altogether.
So just to answer that very briefly from my perspective, I actually use an application called Wickr, which is a texting application, and it uses cryptography in a way that makes messages ephemeral.
So we're using cryptography for another purpose, and it is incredibly liberating to say, I can send a message to this person, and there won't be a record of it after the next hour, or however long I set it.
It will disappear because of mathematics. So I think you can use the mathematics for that benefit as well.
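A toy sketch of the underlying idea, sometimes called crypto-shredding: if the only copy of the key is destroyed, the stored ciphertext becomes unreadable. This uses Fernet from the third-party cryptography package and illustrates the principle only, not how any particular messaging app actually works:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
stored_ciphertext = Fernet(key).encrypt(b"meet at six, usual place")

# While the key exists, the message is readable.
print(Fernet(key).decrypt(stored_ciphertext))

# "Expire" the message by destroying the only copy of the key.
key = None

# Whoever still holds the ciphertext now has nothing useful: without the key,
# decryption is computationally infeasible, so the record is effectively gone.
```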
I think an area where there's a lot of opportunity is looking at ways where service providers can deny themselves some information.
Maybe they figure out who John is talking to. They can see that a message has gone from John to Adam, but they can't see the contents of the message.
They only know the minimum amount they need to know to deliver the service that the user is asking them to deliver.
I think looking at ways that cryptography and security technologies can be applied to create that minimum privilege, that minimum access level, I think there's a lot of space for innovation there, and in helping users understand what properties you're achieving that way.
What exactly does your service have access to? What exactly am I exposing to you by using your service?
Okay, so I'm going to have to wrap it up. I feel, just given the feeling in the room, that we could talk for another hour about this topic; it's obviously an extremely sensitive one to discuss in Silicon Valley at the moment, but thank you, Richard.
Thank you, Adam, for coming here and talking about these things.
A botnet is a network of devices that are infected by malicious software programs called bots.
A botnet is controlled by an attacker known as a bot herder. Botnets are made up of thousands or millions of infected devices.
These bots send spam, steal data, fraudulently click on ads, and launch ransomware and DDoS attacks.
There are three primary ways to take down a botnet: disabling its control centers, running antivirus software, or reflashing firmware on individual devices.
Users can protect devices from becoming part of a botnet by creating secure passwords, periodically wiping and restoring systems, and establishing good ingress and egress filtering practices.