Cloudflare TV

AI, Democracy and The Evolution of Internet Security with Bruce Schneier

Presented by Bruce Schneier, João Tomé
Originally aired on 

In this episode, host João Tomé and cryptographer, security technologist, and public policy lecturer Bruce Schneier discuss the evolving landscape of Internet security. They explore AI-related cybersecurity risks, the impact of new technologies on democracy, and the current state of the global Internet. Schneier examines how new technologies are affecting democratic processes worldwide and offers insights on striking the delicate balance between privacy risks and benefits in the digital age. The conversation also covers the need for updated regulations and the future of post-quantum cryptography.


English
AI
Democracy
Elections
Hacking
Post-quantum
Privacy
Regulation

Transcript (Beta)

Hello, everyone, and welcome to our show. Bruce Schneier has been around security for a while now.

Some call him a security guru, with 12 books and hundreds of articles, essays, and papers to his name.

You have done a lot, sir. Hello, Bruce, and welcome. Thanks for having me.

You have had a very big career, in terms of the different things you have done in the past.

A lot of people say this, like, the Internet was not built with security in mind at the beginning.

How do you think things have evolved since then, since those beginnings?

Well, we all know the story. We needed security, so we had to add it in.

It's not really fair to say they didn't think about security.

They did, and they didn't put it in the Internet because the Internet was built for computers that were secure.

Right? It was being built to connect mainframes that had security, that had accounts and a lot of security.

So, security was at the endpoints.

So, they thought about putting security in the network, but realized they didn't need it.

So, that's why they didn't bother. The problem is, in the 80s, we started connecting single-user computers that were never designed to be connected to networks to a network that was never designed for single-user computers.

That's where the problem happened. A little unfair to blame the founders.

They didn't put security in the Internet, but they didn't have to when they were designing it.

And they also were not thinking about the success that it would have in the future, right?

The layers that would be added to it because of its success, in a sense.

So, that also played a role, right? I mean, they had no idea. They were building a very general network to connect mainframes and users of mainframes.

Yeah. The Internet just sort of exploded, and the applications were far greater than anyone anticipated.

And the security in the Internet, which is pretty much nothing, was not up to the task.

And we've been spending the next, what, you know, four or five decades trying to retrofit security into a network that was never designed for what we're doing with it.

One of the things I find really interesting, working at a tech company, is that older protocols are sometimes being used right now as if they were new protocols.

They were around. They were simply not used as much, or used in a different way.

So, sometimes, older protocols are still being helpful in a way.

And things go in and out of vogue, right? Sometimes a protocol is popular, then it stops being used, and then it's popular again for a different purpose.

The Internet is filled with all of these standards and things the network does.

And if you are doing something new, you're going to grab a piece of what's out there and use it for something different, right?

That's the general nature of the Internet.

We like that. But it does bring a lot of insecurities. We're in 2024, and a lot has changed in the past few years.

What surprised you the most in the evolution of security online, really?

I don't know if anything surprises me.

It might be the fact that we're still fighting the same battles we were 20, 30 years ago.

You'd think we'd get better, that we'd learn the lessons, but new industries come online and make the same mistakes.

The car people are making the same mistakes the computer people made in the 90s.

And the IoT people are making the same mistakes that were made back then.

So there's not a lot of learning. And it's our rush to deploy tech.

We just don't think about security, or reliability, or any of those characteristics that don't directly translate to revenue.

This morning, I was in a conversation where the legacy part was emphasized, the fact that in 2024 there are still a lot of companies that are like 20 years ago, in terms of cybersecurity and the way they connect to the Internet.

What do you think?

Do you think that even today a lot of companies are not really in 2024, so to speak, when it comes to the best approaches to dealing with security?

What's happening now is a lot goes into the cloud. So even if you as a company don't know anything about IT or security, you're getting your services from cloud providers.

And they're supposedly doing the security for you, though they might not be doing a great job.

And legacy is getting worse. We replace our computers every, what, five or so years, our phones every three years, but your refrigerator maybe every 25 years, right?

You buy like two and a half refrigerators in your lifetime. That's it.

Your car, you buy it, it gets sold, it's used again; 40 years from now that car is still on the road. Toys, all of these things have much longer life cycles than we're used to in IT.

Now you go find a computer from like 1984 and try to boot it up and make it secure.

That's going to be the automobile industry. And we don't know how to do that.

That's an industry where things are a little slower too, right?

Although there are players bringing new trends around, right? Well, I mean, they're turning a car into a computer with four wheels and an engine.

So now the question is, will it live on the car life cycle, where, you know, there are classic cars on the road today, or will it live on a computer life cycle, where Microsoft and Apple deprecate operating systems after a decade because they don't feel like supporting them anymore?

You know, if they do the latter with cars, that's going to cook the planet.

We just don't have the ability to junk cars at that rate.

So is there a way to replace the innards and keep the metal outards? This is stuff we're going to have to figure out.

Is there any other industry, besides cars, that worries you in terms of cybersecurity, that you think could be a liability?

I mean, pretty much anything that's old school, right? A lot of utilities are in the news, whether it's power or water or, you know, infrastructure, right?

Right. Because that's a lot of old stuff. It's hardware, it's legacy equipment, and it's, you know, the industrial Internet of Things, which gets even less attention than the Internet of Things.

You know, there's a lot of stuff to worry about.

I try not to put stuff in worry order just because that seems like a fruitless errand.

Of course. RSA is going on right now. You come here a lot.

In a way, what are the things this year that you saw, if you saw any, that you think are new trends, new ideas, new fields in cybersecurity?

It's really hard to find new ideas.

Not because they're not there, but because the show floor is so crowded and so noisy.

I mean, going down into that pit of like 500 something companies is really overbearing.

And I can't imagine someone coming in and trying to learn about the industry, learn about what's possible by walking around the show, because everybody is shouting at you.

And nobody's understandable. So, you know, there's a lot of that.

You know, I don't tend to get my news from conferences.

Someone told me today I should really go to the Innovation Sandbox and look at the really young startups.

I didn't do that. I'm going to do that tomorrow.

So I'm told that's where the interesting ideas are. So I give you that as a piece of advice.

But it's really hard to find new things here because there are so many things and you quickly get overwhelmed.

This is a field full of vulnerabilities, new things.

You said that. What are the things in this field that still excite you, that keep you loving doing this, finding things, doing your work?

What still drives you? I mean, all of it. Right. I mean, I've always liked the adversarial nature of what we do.

You know, unlike sort of any other field in computer science, like, I don't know, word processors or networking.

Nobody's actively trying to make sure that you fail. And that attack versus defense makes it really interesting.

There's a lot of interesting thinking about AI just because that's the new buzzword.

I mean, a lot of it's marketing bullshit, but there is real stuff there.

I've always liked all of it. So I don't have a particular thing.

It's good to be aware of what's going on, new things, AI.

Is that all bullshit, or maybe part of it is not? Focusing on AI, what is your opinion in terms of cybersecurity, in terms of what's coming?

There's a lot of folks talking about it.

I think sometimes there are things we simply don't know about what is going to happen.

But in what way do you see AI as an opportunity for cybersecurity, but also as a liability, a problem?

I mean, opportunity is easy. I mean, AI is just a bunch of techniques; there are a lot of definitions.

It's a bunch of technologies that do the work of human cognition.

And it can range from simple technologies.

A thermostat is a very, very simple AI that turns on the heat based on the temperature.

We can have very complex things that automatically drive your car.
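To make the simple end of that range concrete, here is a minimal sketch of the thermostat example; the function name and the 20-degree setpoint are illustrative assumptions, not anything from the conversation.

```python
# A toy "AI" in the sense described above: one input, one rule, one action.
def heat_should_be_on(temperature_c: float, setpoint_c: float = 20.0) -> bool:
    """Turn the heat on whenever the measured temperature is below the setpoint."""
    return temperature_c < setpoint_c

print(heat_should_be_on(18.5))  # True: colder than the setpoint, so heat on
print(heat_should_be_on(22.0))  # False: warm enough, heat off
```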

In computer security, we have problems with not enough humans and too slow humans.

And AI can help with both of those tasks. We have a lot of tasks that require paying attention to data, lots and lots of data.

AI can help with that.

We have problems with getting people to figure out how to respond to things. AI can help there.

So I see a lot of different technologies which will aid in a bunch of areas on the defense side.

Now, we don't know how soon, how well, right? I mean, there's a lot of noise here.

But think of it in terms of speed, scale, scope, and sophistication.

Right? The reason AI trading stocks is a big deal is because it's faster, right?

High-speed trading. AIs playing chess and Go.

They have more sophisticated strategies than humans, right? We don't like AI-generated, you know, Facebook propaganda, because it can be done at a scale that we haven't seen before.

And the scope of some of these large language models is just enormous.

So those are the four things. The thing to look at is when those changes in degree become changes in kind.

And we don't know where those are going to be in our industry.

They will be there, right? AIs finding vulnerabilities might be completely different from humans finding vulnerabilities.

We don't really know yet. It's a field to explore, to see what comes out of it.

I remember when ChatGPT started, a lot of news stories were saying, hey, prompt engineer is the new profession.

And that lasted, what, a month? Exactly. Right. It lasted, like, a bit. And then, oh, that's not really a thing.

People will do their jobs. They just have a new tool to make it better, scale it better, make it faster.

And it's an assistant, right?

I mean, just like you might have a human intern who's turning in mediocre work that you've got to check. I mean, they're doing some tasks, not other tasks.

AI will be an assistant. And it'll be good at some things. It's really good at being a code assistant, right?

If you are a good programmer, having an AI assistant makes you a much better programmer, right?

It'll be good eventually at finding vulnerabilities in source code, right?

I mean, it's a task that is tailor-made for an AI, right?

Lots of data looking for patterns. And it'll be built into compilers.

It'll be part of incident response. It'll be the thing that is doing the data analysis.

And you, the human, are making like the human decision. So it'll be your partner there.

It'll be in a bunch of places. In terms of, and you worked in different sectors in different ways, but you were also thinking about things like courts or elections, very specific things like that.

What is your thinking really about those areas right now?

Even social media has an impact, AI has an impact, trust in institutions has an impact.

What is your view there? So this is something I've been thinking about very broadly over the past few years.

And my latest book is called A Hacker's Mind.

And it's about hacking social and political systems, right?

And then taking the ideas of hacking and applying them to other areas of our life, like hacking the tax code, which if you think about hacking, it maps pretty perfectly.

So I've been thinking a lot about democracy and about our systems of democracy, how they're being hacked, ranging from things like the filibuster, which was a hack invented in ancient Rome.

It's a very old hack, right?

To modern tax cheats. Recently in Canada, there was a bill that got 20,000 amendments, probably written by AI.

But that was a hack of the system that has to deal with amendments and postpones a vote.

I don't know the Canadian system, so don't quote me on any of this, but stuff happened.

So I really see the systems of democracy that we have, written in the mid-1700s for mid-1700s technology, failing here in the 21st century.

And these artificial agents being a part of that, like AIs writing laws, AIs finding loopholes, AIs adjudicating disputes, or acting as a campaign advisor.

You know, I've done some writing on this.

Some of this will be good, some of this will be bad. You can imagine AI auditing everybody's tax return, like everybody's tax return.

Again, nothing a human can do.

We don't have enough humans. That's true. Right? So that seems like an unambiguous good.

AIs helping people navigate bureaucracy. AIs helping bureaucracy, auditing every government contract. Like, I'm making this stuff up.

But there's potential for these systems to operate at a speed, scale, scope, and sophistication that helps democracy, and also in ways that hurt democracy.

But you know, we're going to get some of each. There's some talk also about misinformation, trust in institutions, and journalism; young folks get more of their information from TikTok than they do from typical, old-school journalism.

Does that bring like a security concern in some way?

You know, I think that is the red herring of our time. It is certainly the moral panic.

And this whole idea that kids get disinformation from TikTok, there's not a lot of actual data to support that.

I mean, we all say it because we're old people, and TikTok scares us.

But AI is not going to change any of that.

It's because it's already so fricking bad. You don't need AI to make deep fakes.

Like stupid fakes are just as effective. And maybe the democratization of it will actually do good.

And that's one area where I think we're worried about that, because it's the thing that's in front of us, the thing we can most easily imagine.

But I don't see a whole lot of changes there. And the whole banning TikTok is complete craziness.

It's a very specific thing, right? The banning is not because of the misinformation, I think.

You're right. It's an anti-China panic. And we do this a lot.

We conflate cybersecurity with trade disputes.

And this, I think, hurts the U.S. talking to allies.

Because when we say, don't buy Huawei, don't use TikTok, are we saying that because they're Chinese companies, we don't like them?

Are we saying that because it's an actual security threat?

How much of each? And there's a lot of mistrust, because I think the U.S.

uses one against the other in ways that are not clean.

Wars are going on. There's a divide that has been more emphasized in the past few years.

What role do you think the U.S. has in the cybersecurity realm, in terms of leading the world, if you want, or marking a split between Western civilization and China and Russia?

What role do you think the U.S. has in that regard? You know, I don't think the arms race framing serves anybody.

No, it serves the U.S. tech companies.

But I don't think it's a useful framing or a valid framing. I want us, the countries that are big enough and powerful enough, to support more security standards.

Traditionally, the U.S.

has been more in favor of surveillance because, you know, we felt that the NSA gave us an advantage that other countries didn't have.

So a less secure Internet served our purposes. That's changed in recent years. And I think that there's a shift even in the U.S.

government that we realize that security for everybody, even if it secures the bad guys, is good for us and good for the world.

But, you know, the E.U. is doing a lot more here. I mean, they are very much the regulatory superpower on the planet.

And while the U.S. does what, you know, the rich tech companies want, and we are terrible at passing laws that the money doesn't like, Europe is more willing to piss the companies off.

So it's doing better.

And the interesting thing about regulation in computers and the Internet is that a single regulation in a big enough market moves the planet.

GDPR appears in Europe in, what, 2018.

And we all benefit. I remember I was working for IBM at the time, and they said, we're going to implement GDPR worldwide because that is easier than figuring out who a European is.

Facebook did the same. Right. California passes an IoT security law that takes effect in 2020 and bans default passwords.

No company's going to have two versions of their, you know, thermostat, drone, one for California, one for everybody else.

So even when you leave California, you will benefit from that law.

So, you know, I don't need the U.S. to move.

It could be the E.U. And we are seeing the EU AI Act. We're seeing the Digital Markets Act.

We're seeing a lot of really good E.U. regulation in cyberspace affecting security, affecting trust and fairness that has effects worldwide.

There's a lot of talk about there being one Internet. But, for example, China has its own.

More countries are doing regulation specific to their countries.

So are you scared there could be more than one Internet, in a sense, because of all the divides?

Yeah. That's been a fear for a couple of decades. Splinternet is the phrase that was used the last time we were all scared of this.

And it never seems to happen.

There are more advantages. Yeah. I mean, a country like China has its Great Firewall.

Lots of countries censor. We have single points of failure in many countries.

But the benefit of using the protocols means you can buy the same equipment.

The Cold War was when, you know, we had our stuff and they had their stuff, and they were different and incompatible; radio frequencies, everything was incompatible.

Now everyone uses the same stuff.

And I don't see that changing because the cost to you of withdrawing from the worldwide standards and doing it yourself is incredible.

And no one wants to pay that cost.

Even China, which could do it. Or Russia? Russia can't. Russia's too small.

You know, China could do it. Maybe the U.S., maybe the E.U. India can't even do it.

And that's really it. You have to be super large. But you think that's a good thing, right, in terms of making it broader, more global?

I mean, the single global Internet is to everyone's benefit.

And so I do think that is a good thing. There's also a lot of talk about privacy.

Security agencies, they want to see your messages, to get around encryption.

But there's also this discussion of, hey, security, privacy is really important.

People should have their privacy. Where do you stand there? This has been going on since, what, the 90s?

This is not a new debate. And it comes in cycles, right?

Right. We call them the crypto wars. They happen every 10 years or so.

And it's a security versus security debate. Right. There is security in keeping our messages, data, information private.

There's enormous security benefits.

I guess there are security risks in having your stuff out there. There are security benefits in the police being able to solve crimes.

And we have to balance those two.

In general, you know, I think we need to take a defense dominance strategy.

And as long as, right, there's a phone in the pocket of every, you know, head of state and elected official and CEO and nuclear power plant operator and police officer, we need to make those things as secure as possible.

And putting a backdoor in makes them less secure.

And even if that makes it easier to solve crimes, that results in less security for us all.

And that's been true for decades. And it's probably not changing anytime soon.

If anything, it's even more stark because these systems are more critical.

I'm curious also about this way of thinking about security, about different topics.

And you will possibly know a lot about this: what is one of the things about cybersecurity, and also encryption, for example, that most people don't know, but actually should?

I teach cryptography to students who did not take math as undergraduates, you know, and it's like all of it they should know.

But, you know, that's not really true.

What we want is a world where you don't have to know anything about cybersecurity.

And what do you have to know about automobile maintenance? Nothing.

What do we need to know about aircraft safety? Nothing. Pharmaceuticals? Nothing. Right.

Plug and play, right. But even with plug and play, there are experts. There are standards.

There's government intervention to ensure that you can walk onto a plane without even thinking about it.

And we don't have that in cybersecurity. We expect you to know about patches and firewalls and antivirus and not clicking on links and ransomware and all these things.

And that's really a failure on us, not on the users.

I think it's a failure on policy because the market does not provide for these security things.

Right. In any of those industries, not just computers: planes, cars, pharmaceuticals.

It took government to mandate security and safety.

So I want a world where you don't have to be an expert in this to use the tools.

And, you know. We're a long way off, but I think we're moving at least conceptually in that direction as computers move into the real world, as they move into your refrigerator and your car and your toys and your everything.

You don't feel online, but you are online with the tools you're using.

And nobody is not online anymore. Exactly. Your phone is online.

Your car is online. You try to buy a refrigerator that's not online. It's really hard.

Your thermostat. Right. Online is no longer a place you go. Right. I mean, we're used to computers in that way.

There's a screen and a keyboard between us, and on the other side is the Internet.

It's not the way it works anymore.

Your car is just your car. And you don't have that same interface that you're used to.

So we can't think of it in the same way. It's not a place you go anymore.

Right. We are embedded in a world that is Internet connected. You do a lot. You have students.

You do a lot of talks to different folks. Possibly you've thought about this.

What would be your wish list for a better Internet in the future? Like, in five years' time, a better Internet would be this, this and that.

Like a wish list.

Yeah, I don't know if I have those. Yeah, I'm terrible at these sorts of questions.

I just don't think that way. And if I had a wish list, I'd actually like think about it and spend like a week coming up with it.

I just don't keep one in the back of my head because I'm not going to get it anyway.

I'm not prepared for the genie, I guess. What, you know, is potentially the better path?

I mean, we need more regulation.

I mean, the problem is we're letting the market figure this out.

And the market's terrible at collective action problems. That's something markets don't solve.

So it's that weird ass, you know, libertarian Silicon Valley way of thinking that, you know, government just gets in the way.

Not thinking that laws provide the substrate on which the market operates.

And if we actually want innovation in this space, you need regulation.

I mean, go to the RSA show floor.

There are hundreds of companies whose products nobody's going to buy because they don't need to.

And why spend the money? You want innovation in Internet security?

You actually need some good regulation. So what's missing in all of this is smart government involvement.

Do you think collaboration between government and companies is important for that?

You know, it's probably important.

Remember, whenever corporations think it's a good regulation, it means it's terrible regulation.

If you are not actively pissing off the company, you're not doing your job as a regulator.

The goal of the regulation is to limit what companies do, reduce their profits in order to make for a better world.

Right. You're not allowed to send five year olds up chimneys anymore.

I know it was your business model; too fricking bad.

Get a new business model. It's immoral. Right. And that's the way you have to do it.

You can't sell pajamas that catch on fire. Sorry, it's going to be more expensive.

You don't like it. Get out of the pajamas business.

And we have to say the same things to computer companies. We don't. We're unwilling to.

Money and politics won't let us. But it is what's needed. You talk a lot about attacks, of course, cybersecurity.

I'm curious about the incentives of the many people becoming hackers and attackers.

What are the incentives you think are still there, and how can we mitigate them?

Now, I mean, the incentives are, I'm gonna say, nothing you don't know. Like money.

Of course.

Yeah. Right. Fame. Sometimes there is a government involved. Fun. OK. So, you know, a paycheck.

Right. Yeah. It depends who the attackers are. Right. You know, annoying your sister.

That's a perfectly fine motivation for a low-grade cyber attack.

Sure. And, you know, we know how to mitigate them. Like, you put people in jail.

That often works. Right. You make the newspapers. Right.

So that's a response. You make it harder to do. Right. So that mitigates it on the front end.

You, you know, make it so it is not socially acceptable.

Now, you know, it depends what the attack is. If, you know, you invite me over and I steal your sweater, you're not calling the police.

You're just not going to invite me over again.

Right. So there are lots of ways we can deal with incentives if we want to.

And right now, you know, we're living in a world where the incentives in many places favor the attacker.

You mentioned, right, government attacks.

So here, this is a legal attack. Right. So you're an employee of the NSA. You're an employee of the Russian military, the Chinese military, the UK, you know, cyber attackers.

I forget the name of that. You're in some country. So in your country, you're following your law.

You're probably doing something illegal in the country you're attacking.

Now, you know, we'll sit here and say, well, the morals are different.

Right. The U.S. attackers are moral. Chinese ones are not. But that doesn't necessarily make a difference.

It's an attack. Right. How do we in the U.S.

try to demotivate Russian and Chinese hackers? We indict them. Now, we don't think we're going to ever arrest them.

But an actual indictment by the U.S. FBI means you're not traveling on vacation to Europe for the rest of your life.

And maybe that'll be a disincentive when someone else is picking what they want to do in their career in the Russian military.

They'll just think, like, you know, I don't want to do that job; I won't be able to vacation in Italy anymore.

And that's no fun.

You work with different organizations, Access Now and others, foundations and all that.

In what way do you see those types of organizations relevant in these types of situations?

Government Internet shutdowns, elections, cybersecurity.

So I've been involved with EFF, with EPIC, with Access Now, Verified Voting.

A lot of organizations that are trying to build, you know, better Internet.

And whether it's fighting government Internet shutdowns around the world, or fighting legislation to break encryption in the United States, or pushing for robust privacy laws.

These are all organizations, I think, doing real work, trying to mold government policy to be better.

And I'm always proud to be part of those organizations.

They do great stuff. Do you think there's still a long path for them to do more?

There's an enormous amount to be done. Yes. We are not even close to finished.

That is true. And it's like playing a giant game of whack-a-mole, right?

You're constantly fighting the same damn battles. Cybersecurity will never be resolved in a way, right?

You know, never is a long time. But cybersecurity is fundamentally about people, not about technology.

And people will always be people.

We're going to always have disputes. There are always going to be people who want to break the rules.

There are always going to be parasitical strategies in society.

So to the extent that cybersecurity mirrors human institutions and human social systems, there'll always be need for it.

Right. Which founding father was it who said, if all men were angels, no security would be necessary?

Right. People will never be angels. It's not our nature. And there's a lot of people.

So there's a lot of people. So even at five sigma, like, there's still a lot you have to worry about.

Exactly. Last but not least, I'm curious if you have any thoughts on post-quantum encryption, those kinds of things.

Quantum is not here right now, in terms of applications that scare people or make a lot of promises.

But what do you think about that? I do think about it. I don't think it's anything near the apocalypse some predict.

I wrote a nice essay about it.

If you want to look it up, it's called Cryptography After the Aliens Land.

With my name, you'll find it. That's it. You know, with post-quantum algorithms, basically the math is ahead of the physics.

We're going to have post-quantum algorithms before we have working quantum computers, if we ever get them.

To get a working quantum computer, there are a lot of hard engineering challenges.

And when I say hard, I don't know if it's land-a-person-on-the-moon hard or land-a-person-on-the-sun hard.

It could be that the engineering problems are insurmountable.

My guess is not, but they might be. But even so, making a quantum computer work is going to take a lot of engineering.

And we're doing much better.

NIST is doing great on post-quantum standards. We have some candidate standards.

There are attacks and there is new research. I think we're doing really well.

Symmetric key is easy. Double the key lengths. Grover's algorithm teaches us that.
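As a rough illustration of that rule of thumb (a back-of-the-envelope sketch, not a statement about any particular cipher): Grover's algorithm gives a quadratic speedup on brute-force key search, so an ideal n-bit key offers roughly n/2 bits of security against a quantum attacker, and doubling the key length restores the original margin.

```python
# Sketch of the arithmetic behind "double the key lengths."
def effective_security_bits(key_bits: int, quantum: bool) -> float:
    """Approximate brute-force resistance of an ideal n-bit symmetric key."""
    # Grover's algorithm searches 2^n keys in roughly 2^(n/2) steps.
    return key_bits / 2 if quantum else float(key_bits)

for key_bits in (128, 256):
    print(f"{key_bits}-bit key: ~{effective_security_bits(key_bits, False):.0f} bits classically, "
          f"~{effective_security_bits(key_bits, True):.0f} bits against a Grover attacker")
```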

So I think we're going to be OK. And even if we're not OK, if all public key breaks, there are a lot of really good non-public-key systems we use.

The security in your cell phone is not public key based.

We have authentication systems that are not public key based.

We can work without public key. Your trust assumptions are different.

Your security is different. But it's not a disaster. So we'll be fine. What we need is to be agile.

Crypto agility is real important. Not just for quantum, but for sort of any cryptanalytic breakthrough.
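One way to picture crypto agility in code (a minimal sketch under assumed names, using Python's standard hashlib only as a stand-in for real cryptographic primitives): callers ask for an algorithm by policy name rather than hard-coding one, so a broken primitive can be retired in a single place.

```python
import hashlib
from typing import Callable, Dict

# Central registry: each policy name points at whichever algorithm is trusted today.
ALGORITHMS: Dict[str, Callable] = {
    "legacy": hashlib.sha1,      # kept only to verify data hashed years ago
    "current": hashlib.sha256,   # default for anything new
    "next": hashlib.sha3_256,    # candidate to migrate to if "current" weakens
}

def digest(data: bytes, policy: str = "current") -> str:
    """Hash data under the named policy and tag the output with that policy name."""
    algorithm = ALGORITHMS[policy]
    return f"{policy}:{algorithm(data).hexdigest()}"

print(digest(b"hello, world"))  # swapping "current" in the registry changes every caller at once
```

The same idea applies to negotiating signatures or key exchange in a protocol: if the algorithm identifier travels with the data, one primitive can be deprecated and another rolled out without redesigning the system.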

Machine learning is old. AI is here in terms of media attention.

Do you see a new technology arising, not quantum specifically, but a new technology arising in the next few years that gives you hope or you think could be interesting?

I don't know. It's an interesting question. Every investor wants that answer.

And if we told them, they'd all rush to invest in that space, but I don't have that answer.

I think we tend not to know until it happens.

Like, you know, like blockchain showed up and like it's incredibly stupid. But so much money poured into it because it's going to be the next big thing.

And so much dumbness and so much hype.

And I think most of the hype is gone. Thank heavens we're, like, done with that kind of nonsense.

But, you know, it's still around.

AI, there's a lot of hype. But there's some reality there. Unlike blockchain; there are no applications for blockchain.

Zero. I don't know what's next.

Let's see. Thank you so much, Bruce. It was a pleasure. And that's a wrap.
