Originally aired on May 16 @ 12:00 PM - 1:30 PM EDT
In this episode of This Week in NET, we sit down at Cloudflare’s Lisbon office in Portugal with one of the Internet’s original architects: Geoff Huston, Chief Scientist at APNIC (the Asia-Pacific Internet registry).
From helping build Australia’s first Internet backbone to now shaping global conversations on resilience, routing security, and cryptography, Geoff shares a rare, unfiltered view of how the Internet grew, and where it’s struggling.
We dive into the critical topics shaping the next decade: Internet resilience, RPKI, the evolution of DNS, QUIC, post-quantum cryptography, and the rise of AI-driven protocols like MCP.
We also ask Geoff: if he could redesign the Internet today, what would he change? And what’s on his wishlist for the future? Plus, a rapid-fire round of questions about the power of sharing in the corporate world.
Join us for a candid conversation with someone who has seen — and helped shape — the Internet as we know it, and who loves telling stories about how it all began — from silicon chips to the age of the Internet and AI.
Mentioned blog posts (in the intro):
- Forget IPs: using cryptography to verify bot and agent traffic
- First-party tags in seconds: Cloudflare's integration of Google's Tag Gateway for advertisers
- QUIC restarts, slow problems
What is the most positive thing on the Internet for you? Oh, I've never dreamt of this life.
I've never dreamt that I could live in Australia at the bottom of the South Pacific and be working on cutting-edge technology with people who are outstanding in their field, and do so for decades, and it's kind of wow, that is unbelievable.
The positive sides of this are just liberating in terms of individual aspirations and collective endeavour.
The downside is, oh my God, ads, oh my God, all this cruft, but the upside is just totally uplifting and if you concentrate on that, the Internet truly is a wonderful place.
Hello everyone and welcome to This Week in NET.
It's May 2025 and this week we have a longer episode for good reason.
We're joined by a true pioneer, Geoff Huston. He played a key role in building the early Internet in Australia and today is the Chief Scientist at APNIC, the Asia-Pacific Internet Registry.
His sharp insights into how the Internet really works, behind the scenes, how it all came to be, are unmatched.
So, you're in for a treat if you like history, computing, learning how things work, protocols. There's a lot.
We go over Internet resilience, routing security, cryptography, post-quantum cryptography because quantum computers are coming and also, of course, AI.
So, if you like to learn more about how the Internet works, this is a good episode for you.
It will take some time, but it's a great episode.
Also, check out our blog. We've got some new posts, some that tie into this episode, actually.
For example, we have Forget IPs, Using Cryptography to Verify Bot and Agent Traffic.
That's a blog post that comes from our research team that proposes that bots use cryptographic signatures so websites can verify their identity.
So, quite important in this day and age. Also, we have one about first-party tags in seconds.
So, it's all about Cloudflare's integration of Google's Tag Gateway for advertisers.
And there's also a technical deep dive called QUIC Restarts, Slow Problems.
This one, actually, is about QUIC, the modern transport protocol built on top of UDP, an older protocol.
QUIC improves speed, security and reliability for web traffic, and it's the foundation of HTTP/3.
So, the Internet is always evolving, and this is part of that evolution.
So, that's actually something we're going to discuss today, QUIC, with Geoff.
Without further ado, let's jump right in. Here's my conversation with Geoff Huston.
Hello and welcome to This Week in Net. Today is a special day because not only are we in our Lisbon office, but we have an Internet pioneer from Australia.
Hello, Geoff. Hi. It's really good to be here in Lisbon. I'm thoroughly enjoying it.
You're here this week in May because of the RIPE event, right?
Yes. I work for a weird mob, the Regional Internet Registries, and there are five of us around the globe.
Our job is to hand out and then carefully look after the registration of IP addresses.
A simple job, but it's right at the bottom of the Internet.
It's sort of at the core of everything else. And we have two meetings a year in each of these regions, which is a pretty heavy load.
And the reason why is, it's kind of, well, what are the rules?
How do we do this? There are none.
And so, we rely on the community itself, the folk who use it, the folk who deploy it, industry players, to say: this needs to be done this way.
It matches our business.
It matches what we want. So, we have these meetings twice a year to kind of work through those issues and have a fine dinner or two and discuss a whole bunch of Internet-related things as well.
So, it's a fun week. And you're also here in the Cloudflare office this time.
I want to go, if you allow me, a bit to the past, like the early days.
Those early days where you were seeing things happening, moving, potentially not knowing how big it would be in terms of the Internet.
How would you give us like a recap of those initial days, the thought process into...
I've got to turn things back to the 1980s, my age.
And at the time, the world was dominated by mainframe computers.
And interestingly, you bought the hardware and the software was kind of free, but the networking was tied to the vendor of the hardware.
So, IBM had its own networking product. Digital Equipment Corporation, if anyone remembers them, I love them, had their own DECnet networking protocol.
And sort of everyone had their own.
Now, if you think about it, you had to replace the mainframe, then everything else got thrown out.
The terminals, the printers, this, that, it was all tied to this protocol.
So, while you didn't pay for the software, you were kind of trapped by the software into a particular architecture.
The world was screaming out for a vendor independent way of hooking computers together.
We didn't want a universal operating system, although we ended up getting one in Unix.
But what we really wanted was just a way to be able to buy on the market from various vendors, printers, terminals, peripherals and anything else and not get trapped.
Now, there were two efforts out there that were kind of gathering pace. There was the telephone industry behemoth, which promised us nirvana next year.
And it was always next year. This thing called Open Systems Interconnect, that someone boasted was six foot of paperwork and not a single packet.
And they're probably right.
You know, it just was never coming. At the same time in America, there was this weird project that grew out of the work on Unix.
Unix was a weird invention of AT&T, the telephone company.
And because it was under a lot of pressure from the US Department of Justice, because it was just too big, too evil, too horrible, they said, that's fine, but you can't productise it.
You can't sell it.
The best you can do is give it away to universities. It was one of the first of these sort of open source software projects.
And the University of Berkeley got a contract from Defence, I think it was DARPA, to build a protocol stack of this emerging Internet protocol that ran on almost anything.
All of a sudden, you could buy an IBM, you could buy a piece of digital equipment.
We had them where I was working.
And you could run TCP/IP and it just worked. So that was the interest behind the Internet protocol.
It was kind of because it was open and not owned by anyone, it became the thing of choice.
Because then, whatever you were using as hardware today, you could drop in a new one and it would still keep on working.
Everything else was there. So my experience in the 1980s was slaving away on a VAX 11/780 trying to make TCP/IP work.
I used code produced by my good friend Vince Fuller in the end to make that happen.
And I was one of many doing that kind of work.
But was it challenging in Australia specifically because of the distance?
How was that process like? There was never a hope at the time that we would ever see a live Internet connection from Australia to anywhere.
We were relying on rubbish over phone calls and it was hideously expensive and incredibly slow.
The idea of having a live connection even at a voice grade was outlandish.
Maybe in my lifetime, maybe not.
It was just too expensive. What changed? The US. And in a spirit that is not part of today's environment, there was a high performance computing initiative done by Al Gore.
Where he bought, I think six, at the time, 1988, state of the art honking great mainframes.
They had more power than this watch, just.
But as well as that, to get everyone else connected, they sponsored a national research infrastructure network so that researchers across the United States could get to those supercomputers.
But America was sending researchers into Europe, to Australia, to Japan, all over the world.
And it was kind of, well, what do we do?
And very quickly, they embarked on a second part of that program where they would help other countries connect to the Internet on an academic and research basis.
And so we were assisted by a program at the University of Hawaii in 1989.
And, you know, live connection. It was slow, it was satellite, it was, you know, one voice line.
God, it was slow. But it was live. And it just went from there.
Because as soon as you connect a community, an academic community, that really suffered from incredible isolation.
This small pool of folk at the bottom end of the South Pacific, you know, and say, well, okay, it's live.
Whatever you want. Just, you know, knock yourselves out.
And you can feel the magic in it, like, oh, this opens possibilities.
Oh, it was so much magic. The librarians were the first. But we had physicists.
We had all kinds of people who were at university. Because it wasn't that we were bringing them to the Internet.
They built their local area networks.
Their campuses were connected. Email was already around at the time? It was slow.
But we just came along in the background, in the basement, really, and just connected the Internet to their local network.
And because we were all running IP, there was no change.
But all of a sudden, things just worked really fast. You know, instead of email over two days, it was email, press send, they got it.
What happened?
And that was disconcerting. It was rattling. It's interesting to see the evolution.
Now people expect, like, millisecond latency, very small latency.
Even a few seconds was great, right? We are in a different world. And with the rise of CDNs, we're in an even more different world.
We get impatient when it's not instant.
And by instant, I mean under thousandths of a second. They kind of go, human studies say the human brain only reacts in a third of a second.
No, I think we're better than that.
And these days, we've trained ourselves to expect phenomenal performance that even then was just unachievable.
So there's been a dramatic change in that, yes.
I remember a few years ago, putting together an interview with John Graham-Cumming, who was our CTO, with folks from CERN.
The folks that actually, after Tim Berners-Lee started the web, they picked up on where he left off.
And they were explaining the importance, at some moments in the early 90s, of the decision: should it be open source?
Should Europe close it in, in some situation?
And there was like a thought process in terms of how should we do it? And, for example, those engineers, those people running CERN and the project at the time were like, this should be open source.
This is important. Politicians, not as much.
Have you seen other situations where things became the way we know them today because of decisions like that?
We were amazed when a couple of us started going to the Internet Engineering Task Force in 89, I think was my first one.
The Americans were actually exposing their thinking, exposing their technology and saying, well, we don't quite know where to go next.
Come and help.
And it's this open invitation at the bottom end of the Internet to go. Sure, you know, I'll happily contribute into the discussion.
That idea of collaborative open source in technology was unheard of because coming from a vendor driven world.
IBM had secrets.
Digital equipment had secrets. They want to protect patents, everything.
We still have the patents and so on, but there was never open collaboration.
And I actually think the open collaboration produced a stunning quantum leap in technology and capability that no company could have achieved alone.
It's kind of that level of creative thinking and forward thinking only really comes when everyone gets pressed by other people in the room.
But have you thought about, oh, yes, well, why don't I think about this?
And it leapfrogs each other at an amazing rate.
The Web, as we know it today, was inconceivable even a year before that.
We were mucking around with command line interfaces and this horrible thing.
I think it was called Gopher. It was ridiculous. You know, search in ASCII. The whole idea that you could make an ecosystem around the richness of graphics, the richness of local computing with the Internet was just unheard of.
CERN didn't invent it out of nothing.
Like everyone else, they invented it from a collaborative pool of folks pushing each other as to how far you could take this.
I was doing video work with some folks from the team about TCP, the TCP paper that turned 50 last year.
And they were telling me something that I find really interesting in the science area, which is that TCP was built on the shoulders of giants, right?
Even Vint Cerf tells this. So they were continuing the work that was done by others.
And you can add your own layer to the pile, in a sense. There is a thing about the Internet which, I suppose, at least to my generation, was revolutionary.
And even now it's difficult to explain. The technology is not in a book.
The technology is not frozen. And we're still lucky enough to be living at a time when most of the folk who were involved in the very early days are with us.
The technology is actually a conversation. There's no right. There's better.
And it's kind of even the latest, oh, TCP is now 50 years old. It's all over.
Wrong. In the last few years, we've experimented with different flow controls with a thing called BBR.
We've experimented with head-of-line blocking in this radically new protocol called QUIC.
And it's kind of TCP is by no means a solved problem.
We've taken a technology which, at the original stage, did kilobits per second.
It was slow. These days we're doing hundreds of gigabits and kind of wondering, because the fiber guys know no limit, how do we actually use all that fiber capacity in TCP sessions?
What do we need in the protocol? What do we need in the silicon on either side?
How do we make things fly even faster? And we're now experimenting with something that I think is, again, challengingly difficult.
Ultimately, when you can't go faster, you just do more of them and go in parallel.
It's like the road system.
You either have cars going at 1,000 miles an hour, dangerous, or you have four cars each doing 250 miles an hour, less dangerous, ever so slightly.
But you see what I mean? Parallelism gives you capacity and actually allows you to do big solutions more simply.
TCP is getting pushed into that space.
We've experimented a few times with multipath. QUIC gives us some experimentation.
AI data centers, they're experimenting around the same place. So in all of this, technology is a conversation.
It's alive. It's vibrant. You can't read the book and go, I know it.
On to the next. You know where we're thinking, but then it's a case of join the conversation and add to it yourself and see how far your ideas work with everyone else.
And that's what I admire about the IETF. And in little places like RIPE and APNIC and so on, the same kinds of conversations are going on.
Also looking at standards, how things evolve, in a sense. I'm curious also, I was a journalist for several years, so I'm always looking for the Eureka moment, although I know there's often no Eureka moment when you've worked in an area for so many years.
But was there a moment where you could see, oh, this is different, this will change everything?
You could have asked us in 1989, and we would have lied and said it's going to change the world.
But the next few years, it was in the academic and research community.
And at the time, we still had mainframes.
And some crazy bearded person, Frank Solensky, stood up at an IETF meeting, and I think it was 1990, and said, we're going to run out of IP addresses.
And you're sitting there going, hang on a second. There's 32 bits in an address.
That's four billion. At the time, 1989, there were about half a million computers in the world of all sizes.
I must admit, they weren't in my pocket, and they weren't on my watch.
But it was kind of, you're serious, right? And you go, absolutely.
And I think the issue was that none of us truly appreciated the Eureka moment that really happened.
And I would actually say there were two. The first one was 1947 at Bell Labs, where a group of folk took the old thermionic valve with a huge amount of power and heat and so on, an electronic switch, and did it with semiconducting crystals.
It's kind of, oh my God. That was the first innovation. And when we started doing transistors, and it became a business.
Fairchild in the US dominated it for a long time.
But 10 years later, I don't remember them, but look it up, people.
Look it up. They put two transistors on one crystal. Two. But there's this law in computing.
I think it's Dijkstra's law. There's only three numbers.
There's zero. Can't do it. There's one. I did it once. That's amazing.
Or infinity. Because two is no different to three. No different. If you can do it in two, you can do it any number you like.
And all of a sudden, we started doing integrated circuits.
All we've been living through for the last 75 years, roughly, has been the progressive refinement of one idea.
One eureka moment that said, I can put a circuit on a silicon chip and do it not once, not a million times, but these days, a trillion gates.
It goes up and up. The processes are just mind-boggling.
The technology behind it. But the implication is weird. Because every two years or so, the chips are twice as powerful.
The cost is roughly halved.
And every piece of equipment that you owned five years ago is perfectly functional but economically obsolete because my competitor can do the same thing for half the price.
And so we've been constantly challenging ourselves to think bigger, to do more because it's achievable.
And so you sit there today and say, why are you building this massive AI data centre with power supplies and blah, blah, blah, blah?
And the answer is because it's likely in five years' time it'll fit in one room, not in a data centre, because we're getting good at this.
And at that point, the dream is possible in a much smaller scale at a much cheaper rate.
So the folk at the moment are trying to sort of get there first, knowing that Moore's Law will make it cheaper, more available, more equitable in some ways.
And that genius of silicon chips has sustained all of our lives for 70 years.
So you point back to that. The rest of us are just living the result.
How do we get fibre running at the speeds we do? It's not we know better glass.
You can use 50-year-old glass fibre and it will still do the same things. It's the digital signal processes at either end.
The amount of... In fact, someone did a relationship between the size of the features on the chip and the amount of capacity you can get out of a piece of fibre.
And the answer is every time you halve the track width, you basically increase the fibre capacity twofold, again and again and again.
So we can do terabit fibre because of these astonishing digital signal processes.
And those enable everything that came next, like the ideas, email, networking.
All of those were enabled by that. All of that is enabled by that, because if you look at something and say it'll take a day, cost a fortune and so on, it's kind of, I'm not going to do that.
But if it's trite, if it's something that's incidental, you know, would I let my luggage tag have a chip in it?
Well, of course I do.
It's called a tracker for many companies now. Ten years ago, it's kind of, I'm not going to let that happen.
These days, you know, everyone's got a tag everywhere because the silicon industry has kind of gone, I can commoditise this to the point where it is ubiquitous.
And so the genius, I think, was actually the genius of silicon circuitry.
And we've yet to discover how far it goes. True, even in this AI era, that has a role.
Well, the issue is AI is just trying to recreate the biological neural networks, which are phenomenally complex.
You kind of go, that needs a trillion gates.
Well, I can do that on a chip. That needs, you know, an NVIDIA GPU.
Well, we got them. I need a room full of them. Today you do. But in five years' time, how will we manage to do that same amount of complexity?
Would that now be one chip? Right? This thing would have been an entire building of computing.
The US in the 1950s built their advanced aircraft warning system, SAGE.
SAGE was an entire set of building complexes. You know, these days it's a trivial application on almost any piece of computing hardware you like.
So today's AI is tomorrow's wrist device.
You were mentioning the chips. I was last year actually in San Francisco and I went to the Computing History Museum in Mountain View and they have actually the old chips, the first ones there, the most recent ones there.
You can actually see it. It's a fascinating museum and I'd encourage anyone who's in the area, it's right beside Google actually down there in Mountain View.
Go, go. And you can actually see, talking about Google, the initial servers they used; they were like off-the-shelf servers you could buy in a store?
Look, yes.
And in some ways, even today, in a data center, it's the same technology that everyone else uses, it's just they've packed a few thousand of them together in the same location.
It's more parallelism than just raw grunt power. That raw grunt power is available in a laptop.
So there's no special secret sauce that they're using other than just brute force replication.
Is there, and you mentioned QUIC specifically, that is helping today.
QUIC was deeply insightful. But a question related to QUIC: were there things that you wish had been built before, for the Internet we have now? Or was it just a process, things evolve, no one knew how popular it would be, and QUIC had to come at some point?
The issue around the Internet, which I think actually released that vibrancy that the telephone companies were unable to, was that previously the telephone companies operated in most countries as national regulated monopolies and took a very constraining view of technology release.
Digital subscriber lines, DSL, the idea that actually you could do digital down a voice line was invented in the 50s but never, ever, ever released by AT&T.
Video calls were an oddity in the 50s but were technically possible.
But they saw themselves as a phone company. They invented Unix, but I'm a phone company.
So all of these things were sitting there but the phone company in its programmed release of technology never got round to it.
When we deregulated telephony, we thought we were getting these monopolies and bringing in a fresh air of competition.
We thought that it would give us purple phones, different coloured and sized phones.
It was all about competition in telephony. No. It was actually about releasing this industry from the constraints of command control and saying, do what you want.
If there's a customer to buy it, go and build it. And that released an industry that really has no regulatory constraint.
If customers want to pay for it, there's a supply side industry going, yeah, sure.
I want more, fine, I'll put fibre to your house.
I want 50 channels and I don't want a satellite thing on my roof.
Fine, I'll stream it down the net. You want it, I will do it. And that focus, that intense focus on actually what customers value has changed the economics of this entire industry.
We used to be obsessed by the packets, by the technology.
All the money is in the applications and services. This is a service mode industry now.
And the huge sums, the digital giants of this world are giants not because they have better packets.
They're giants because they're actually doing what people want to do, which is, I wish to work, I wish to be entertained, I wish to do it at speed and convenience, wherever I happen to be.
If you can meet that, people will value it and pay for it.
There's no plan. But the more, I suppose, our demands are met, being humans, I want more.
Or different. You did that, big tick, now let me do the challenge goals.
And that process has actually pushed us faster and faster down this track.
And interestingly, pushed the money up the protocol stack.
The money is not in networking. The value is in the services and apps that people do.
And that's why AI is almost an expression of being able to communicate services at a level of interaction, sight, sound, text, as humans themselves operate.
That's, I think, why it's so fascinating. Before going to AI specifically, you've been vocal specifically about Internet resilience, about routing security.
In a nutshell, how do you think those are specifically today?
What is the evolution there? Many concerns still with the ISPs, with different players in this field, right?
I think anyone who's not concerned is living a utopian dream world and I admire them for their lack of concern.
We're all deeply worried.
We're worried by the fact that it's really hard to build incredibly robust software systems.
It's really hard. And the scale of what we have to build, the expertise and capability of our tools, we don't build good software.
But this software is running everything in our lives.
It keeps the planes in the sky. It keeps the traffic flowing on the roads.
It keeps the water running. As we found here on the Iberian Peninsula, it keeps power going to sockets.
And, you know, it wasn't that there was a surge of solar.
There were software systems that met conditions it couldn't meet and it just shut down.
We're ruled by these systems and the resilience of those systems depends on the underlying quality of the tooling we build.
But every week, every major operating system provider comes out with a patch set because there are problems.
Because we're not all that good at this job.
And interestingly, and I think it's sort of fascinating, that those software folk have actually managed to make the risk management a public function.
So I put out a piece of software that sort of works most of the time, but if it goes wrong, oh, the taxpayers funding a computer emergency response team, they'll look after the mess.
Perhaps the software industry should take a bit more accountability for its errors.
It happened in the car industry in the 1950s when it was sort of pointed out that these cars were inherently unsafe and the car industry needed to improve.
I think this industry, particularly as a software industry, has to look very, very hard at itself and the issues of quality and resilience in the underlying product.
Then you move up into the resilience of the networking systems themselves and there is cause for concern.
Interestingly, there's not much security.
And in fact, I would liken it to, I'm going to a URL, a destination on the Internet.
I know what it looks like, www.Cloudflare.com, those letters.
Is that really Cloudflare? I don't know. How do you know if it's not? Oh, I set the right letters in and the right letters are on the screen.
Oh, and look, there's a padlock icon.
What does that mean? I don't know, it's a padlock icon.
That's security. And you sit there and go, good enough? Well, of course it's not good enough.
But trying to get something better that's convenient, resilient, scalable, these are big challenges.
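To make the padlock point concrete, here is a minimal sketch in Python, using only the standard library, of what that icon actually represents: the remote certificate chains to a certificate authority your system already trusts and matches the name you typed, and nothing more. The host name here is purely illustrative.

```python
import socket
import ssl

host = "www.cloudflare.com"  # illustrative host; any HTTPS site works

# The default context does exactly what the browser padlock implies:
# verify the certificate chain against the local trust store and check the host name.
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("subject:", cert["subject"])
        print("issuer: ", cert["issuer"])
        print("expires:", cert["notAfter"])
```

That is the entire promise: a chain of trust back to a CA list the user never chose, which is exactly the gap Geoff goes on to describe.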
And in some ways, a lot of folk listened to Mark Zuckerberg when he said move fast and break things and we took it way too far to heart.
Okay, we'll break things. It's okay, we're moving fast.
We need to sit back and pause, but it's hard to say what would make this better but as any complex society, the vulnerabilities in our reliance on these systems are scary.
And hats off to politicians who are trying to deal with it.
There are no clear answers out there. But I would actually say better accountability and better sort of, you release software that isn't up to scratch, maybe there's some more issues around here.
I happened to be coming through an airport where the passport system failed completely.
Thousands of people lined up. Why?
It seems it was a Sunday that was a good time to install a patch. Gee, thanks everyone.
And then there's a big problem. Well, yeah. A real world problem. Someone in IT, great.
Love your work. Why do we do this to ourselves? Why aren't we better at managing these systems and a little less cavalier?
And maybe it's kind of, well, we took 100 years to perfect telephony and it was pretty good by the time we pulled it apart.
Maybe it's just early days and we need better structures and controls in this digital world to stop such casual damage being sort of placed on our systems that we rely on.
In the complexity that exists in this area, do you have potential solutions, in the sense of how, for example, things like RPKI or other things could improve the resilience?
And you mentioned security. Every day I hear some company, I wasn't expecting to be attacked.
Everyone can be attacked if they're on the Internet, right?
No one plans to be a victim and they're always surprised when they are one.
Absolutely. I did spend a lot of time doing security in the routing system and the routing system obviously needs it.
Why? Because the entire world's communication system is held up by rumors.
You tell me about what you know and I'll believe you and tell all my friends.
Is that the truth? I don't know.
You tell me and you're going, well, I made up the last bit. Oh, gee, that's a problem.
And that's the way routing works. Now, how do you know and can pick apart authenticity?
So in a lot of these systems, we rely on one technology, and that one technology was actually invented first by the Brits in the late 60s, and again, with loud proclamations, by the Americans some years later: public-private key cryptography, where you get these two keys.
One key is used to create a secret that only the other key can unlock.
All of a sudden, if I get something that unlocks with the right key, I know it's you.
I know you used your key. You can't deny it because that's the only one that could have created this puzzle.
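A minimal sketch of that property, assuming the Python cryptography package and an Ed25519 key pair (the key and message are purely illustrative): only the holder of the private key could have produced a signature that the matching public key verifies.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # published to the world

message = b"this announcement really came from me"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises InvalidSignature if it doesn't match
    print("verified: only the private key holder could have signed this")
except InvalidSignature:
    print("forged or tampered with")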
And we use that everywhere for web security, for routing security.
Whenever you use the word security, pretty typically, you're talking about cryptography.
But you don't talk about practices.
You don't talk about software quality. You don't talk about system construction.
It's as if cryptography is a silver bullet that solves everything.
And I think we're becoming a bit more mature to understand it's a good tool.
It's a very good tool. But it's a tool. It's not the solution. There's a lot more to this than just that.
We're all geeks. For example, I work with Mingwei from our Radar team.
We look at route leaks. Those happen.
Those create havoc in some situations. Do you have things you think would be worthwhile exploring, about the process, the protocols, to avoid some of those issues?
Well, I think Radar is a really good example of a practice that is not common in our industry, of saying: we know things.
You can know them too. We're going to share what we know.
A number of companies, right up to the very biggest, go, yeah, we know things.
Well, great. Who else knows it? No, no, you weren't listening.
We know things. You don't. And it's kind of, how do we all get better if you're going to act like every single piece of data is a secret?
And I think the first thing is, like radar, you know what we know.
We're talking as equals at this point.
So if we're seeing issues, let's make sure we all understand, you know, how prevalent is it?
What's going on? How much is going on? And so to regard information as we do these days too often, I think, as some piece of empirically valuable stuff that is a well-guarded secret helps no one in the long run.
And so I applaud companies that actually go, look, it's open.
We're all in the same mess here. If we're going to make it better, we need to understand the environment we're working in.
And only then can we actually make solutions that effectively engage with that problem.
Routing security is kind of slightly on a side. It's annoying, but it's not the key.
The real issue, oddly enough, is the DNS, the name system. We broke the addressing plan years ago when we ran out of V4.
The addressing system is fractured.
I'm here in Lisbon. I have a new set of IP addresses. My phone goes, that's fine.
I don't care what's underneath. You're going to this website. The name is important.
The DNS name really matters. IP addresses, not so much. And so the name system is everything.
Everything. Well, how does the name system work? Oh, there's this group called the CAB Forum.
Certificates and CAs and browsers. Where are they headquartered?
Oh, in America. Who's in them? I don't know. Everything they say, my system trusts implicitly.
And they're never, ever allowed to make a mistake until they make one.
And as soon as they make one, it's invisible to us. We get misdirected.
This isn't good enough. It really isn't. The whole business of using, by now, I think, 30-year-old technology with X.509 certificates, these super-secret forums that make decisions on my behalf, machinery that has trust models with my information that I have no say in, seems to me to be incredibly irresponsible because when they stuff up, I'm the victim, and they're not liable.
That's not a decent deal.
I think we need to rethink, and we are in the IETF thinking, issues around how do we make the DNS itself better?
How do we introduce those cryptographic bits of pixie dust right into the DNS?
You mean privacy? Privacy and authenticity.
When I say you can reach my website using these credentials, you can test that it was really Geoff.
Test. You can prove to yourself that's genuine, and when you go there, it's me, and no one else masquerading as me.
The frustrating bit is we know how to do this.
We know how to do it at scale, but we just don't seem to want to go there.
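One concrete expression of putting those credentials directly into the DNS, not named in the conversation but in the same spirit, is DANE's TLSA record, which binds certificate material to a name inside a DNSSEC-signed zone. A rough sketch with the dnspython package; the domain below is only a placeholder, and most domains publish no such record.

```python
import dns.resolver

# TLSA records live under _<port>._tcp.<hostname>; "example.net" is only a placeholder.
name = "_443._tcp.example.net"

try:
    for rr in dns.resolver.resolve(name, "TLSA"):
        print(name, "->", rr)   # certificate usage, selector, matching type, cert data
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(name, "publishes no TLSA record")
```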
How you push that is a challenge. I asked Ólafur Guðmundsson, who was our DNS wizard and retired last year, whether he had questions, since I'd have you here. And he asked me to ask you: in terms of the work with 1.1.1.1 data, our resolver, and APNIC Labs, what major trends did you see there in terms of DNS usage over the past years?
There's a couple of trends that I think I can share right now, which are actually really quite fascinating.
We have been pushing like crazy to get folk to adopt security in the DNS, to adopt this technology called DNSSEC, and it's actually two parts to this.
You've actually got to produce DNS material that has these signatures, that says, I'm valid because of these embedded signatures, and interestingly, and opinions vary, but 12, 15% of domain names by some weird metric, seem to be signed.
Great. We also test that, if it's signed, will you, the user, believe it?
In other words, little thought test. Here's a domain name.
It's got a really bad signature. You shouldn't trust it. Do you still go there?
And right now, around about 65% of the world's users will. How do I know that?
Because at APNIC, we use this advertisement-based measurement system.
I think it's one of the largest on the planet that tests approximately 35 million new users every day, and we test them in all kinds of things, v6, et cetera, but we do do a DNSSEC test, and it's kind of, can you go to the bad place?
And if you can, you failed that test.
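You can run a rough version of that thought test yourself. A minimal sketch with the dnspython package: ask a resolver for a name whose DNSSEC signature is deliberately broken (dnssec-failed.org is a commonly used test zone). A validating resolver refuses to answer; a non-validating one happily sends you to the bad place. The resolver address is just an example.

```python
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]               # example resolver to test

try:
    resolver.resolve("dnssec-failed.org", "A")   # zone with an intentionally bad signature
    print("got an answer: this resolver is NOT validating DNSSEC")
except dns.resolver.NoNameservers:
    print("SERVFAIL everywhere: this resolver validated the signature and rejected it")
```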
So what do I know about Cloudflare and 1.1.1.1 and DNSSEC?
Well, the interesting thing is 1.1.1.1 tells me what people are doing, not what people say they're doing, and it also tells me what's popular and what's not.
So if I look at Cloudflare and that data stream of queries, and I go, I don't care whether you were signed or not, but of all the queries that come in, who's validating?
Who is testing? And equally, how many of the names that Cloudflare is resolving come from signed domains?
And that's not 40%. It's not 15%. It's not even 1%.
Because a whole bunch of really big names, google.com, bankofamerica.com, sit there and go, oh, DNSSEC's a bit dicey, isn't it?
And they don't sign it. These really big names that you and I as users, and Cloudflare shows us that you and I as users, these are the most popular names that get resolved, it's kind of, oh, yeah, we're not signing.
Why not? Oh, because the CAB Forum told us that it's real.
Oh, is that the answer? We could do such a better job. In terms of security, you create...
That's the only thing you're relying on. The only thing. And time and time again, it gets broken, and we all go, oh, no, I've forgotten about that.
Let's move on.
Because we have no plan B. And the IETF has been steadily trying to actually do DNSSEC, do signatures and certificate structures inside the DNS to say, this is real, and you can test it.
No intermediaries, you can test it. And the answer is, oh, it's too slow.
I'm sorry, the fiber guys are doing terabit fiber. We're doing hundreds of megs to the house.
These things are fast. And you sit there and go, what do you mean slow?
I don't understand. Oh, 10 years ago it was slow. Yeah, true.
But why do we keep on insisting that the parts of the world that we don't want to change still live in the 1980s and somehow think that's for real?
To be super challenging back to Ólafur, why are we still using UDP for the DNS?
Why don't we actually embrace that we can actually do a whole lot more if we move to QUIC?
We can actually do entire validation parts. Oddly enough, by doing big answers, we can actually make the DNS sing, dance, and be reliable and be authentic.
Why aren't we?
Oh, because UDP is important. It's kind of, it was, but it isn't. What can change that specifically in terms of industry?
People... Oddly enough, people like Cloudflare.
Oddly enough, it's actually, there are no rules. And I would actually argue there are no standards until you get it right.
And you only know if you've got it right by trying it.
And same with QUIC. QUIC was tried long before the IETF got around to doing it.
Same with the clever use of denial of existence in the DNS, which came from Cloudflare researchers, that compact denial of existence.
Was it a standard? No, we just did it and we observed that it worked. And everyone else is going, oh, let's make it a standard.
Yeah, cool. But, you know, do it, I think is the obvious answer.
And that sense of empowerment, it's always difficult to do it the first time, but once you've done it and it works, you should be in the habit of feeling enabled to do it, to actually experiment with, okay, we're going to push everyone onto QUIC.
What about the ones who can't? Well, we push everyone on there, they're there too.
And it's kind of, if the browsers do it, why aren't we doing it in the DNS?
And those kinds of thought experiments, oh, no, no, no, it's established business practice.
That's what keeps you in the 1980s.
That's what keeps your feet in the mud. You know, be bold. Push. I love history and Cloudflare's history.
Universal SSL, back in the day, more than 10 years ago.
Pushing it free to everyone was actually, like, considered one of the big moments of the company.
And it was a very small company at the time. Well, no, at the time, and it kind of shows the right ideas actually have traction.
And I would say, oh, well, we've had a lot of good ideas now.
Wrong. There's a world of problems.
There are better TCPs out there, if only we understood. You know, there are better versions of the way you integrate the DNS and its behavior.
We have transformed the Internet in the last 12 years.
The whole issue of how do you cope with a billion users all with really fast mobile phones and really fast networks?
Well, we changed the provisioning model.
Now we push a copy of everything you ever wanted to get to a data center within a mile of where you sit.
We don't move packets around the world.
That's someone else's job. We go to the local data center and download YouTube videos like crazy from two kilometers away.
Distance is cost.
Distance is a killer. The way we got over that was eliminating distance by pre-provisioning.
We have abundance in computing. It's cheap. It's plentiful. Use it.
Stop thinking, you know, oh, oh, that costs. Oh, we only need one copy of this. But in places like Australia, it makes all the difference, right?
Australia, New Zealand, but even places like here.
Folk are sensitive to moving traffic 20 milliseconds across to the other side of Europe.
I want my data center right here in this town.
And that's perfectly reasonable. And it should happen that way because distance is the killer.
You mentioned QUIC a few times already. In terms of transport protocols and QUIC specifically, do you think that QUIC fulfilled the promise that TCP had in the beginning?
Is there any other protocol that could be at play and make things better for the future?
The interesting answer to that is there is, but I don't know it.
And part of the reason why is that for a long time, the Internet was regarded in research funding circles as an interesting funding area.
And funding agencies, particularly state-based ones, were actually quite interested in funding research groups to say, go and look at the way this stuff works.
See if you can make it go better. See if you can actually adapt it better.
And so there were a whole bunch of young kids doing their PhDs sort of pushing and prodding it the way this worked.
And that filtered out into the commercial world as different variants.
It works well on mobiles. It works well in very high-speed systems and so on.
But once you turn off that research funding tap, where do the good ideas come from?
And you go, oh, the corporate world will look after that.
Well, very few corporates have the luxury for a large and extensive research sort of facility.
The last big one I heard about was AT&T and their labs in New Jersey.
Even Google doesn't have an extensive set of labs. They're trying to make money.
And we rely on that underlying system, the academic and research environment, to challenge us and push us with new ideas.
But as soon as you say, oh, it's a solved problem, then you're kind of stuck with today, and today's going to be tomorrow, and that's forever and ever and ever until you get out of that thinking.
And so we desperately rely on novel approaches that challenge our thinking.
And yes, this is a feedback system. I'm trying to sense the behaviour of the network by the signals I get back.
Are there better ways of doing that? Not sure.
There are very different ways, very different ways, of actually understanding how do I adapt what I send to what I receive back as a result of that.
Hopefully, we see more innovation.
But that is predicated at kids are still learning about this stuff.
Researchers are still interested. There is activity in universities and so on in working through these problems to then seed what becomes product afterwards.
In terms of QUIC, there is also the implementation, being more implemented.
QUIC became a success story in a little while. It was done firstly inside Google, but to be perfectly frank, the thinking behind it and the group of researchers are well-known names from the US NSF days of the early 90s.
These were university researchers who are icons even today, Matt Mathis and so on of their time, Van Jacobson.
These folks are good, but where are their equivalents now and where do we go from now is kind of the problem.
QUIC, once it got public, attracted a whole heap of interest, because this was a transport product.
How would you describe QUIC specifically, what it does?
Well, QUIC is the same magic as TCP. It takes an unreliable medium of packets and introduces reliability.
Oh, I'm missing a packet, I'll send another one.
It senses the order of the data that it's receiving. It allows the networks to be lossy, noisy and messy and the signaling protocol layered above that, TCP, corrects those problems.
But as well as that, TCP does something else.
How fast should TCP go? And the real answer is as fast as you can, as long as you don't crash and as long as you're not kind of annoying the neighbors.
If you're fair and efficient, there's no idle bandwidth on the table and you're not annoying everyone else.
And trying to get that equilibrium is what TCP is trying to do.
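That search for equilibrium is easy to caricature in a few lines. A toy sketch of the classic additive-increase, multiplicative-decrease behaviour (the numbers are arbitrary and real TCP stacks are far more sophisticated): grow the window steadily, halve it on loss, and oscillate around the available capacity.

```python
import random

random.seed(1)
cwnd = 1.0          # congestion window, in packets
capacity = 100.0    # pretend path capacity

for rtt in range(1, 41):
    lost = cwnd > capacity or random.random() < 0.02   # loss when we overshoot, or at random
    if lost:
        cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
    else:
        cwnd += 1.0                 # additive increase, one packet per round trip
    print(f"RTT {rtt:2d}: cwnd = {cwnd:6.1f} packets{'  <- loss' if lost else ''}")
```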
But TCP thought of things in a telephony world and by that I mean one person speaking, one person listening and it was kind of a conversation between two computers.
But these days, even if you look at things like the web, I don't have one conversation.
Oh, this object refers to other objects. I've got a CSS style page.
I've got this. I've got that. And all of a sudden, I'm trying to fetch 30 things from the same server.
Now, what do I do? Set up 30 TCP sessions? Well, you can.
Why don't I do it inside one? Well, that'd be interesting. Why don't I also note that some things are best viewed in computing terms as a remote procedure call?
No overhead, just here, do this. Give me an answer. And QUIC combines all that functionality into one bundled package.
Brilliant. And the second thing that it did, which, you know, sign of our times, applications and end systems no longer trust networks.
They just don't. There's a degree of hostility out there in the world that the network operators are busy doing packet filtering, traffic shaping, you know, all this kind of stuff.
And the applications are going, thanks, but no thanks.
And QUIC, the other part of what QUIC did is saying, right, I'm going to take all of the control signaling in TCP, which was normally not encrypted.
It wasn't part of TLS or anything else, and said, right, that's it.
It's all inside the encryption envelope.
You get to see nothing. Absolutely nothing. The application is saying, I know what I'm doing, and I don't want you to muck around with my signal.
You'd think it's slightly hostile, but at the same time, I think it's liberating in some sense.
The other part about QUIC is that previously an application designer would have to wait for Apple with their iOS platform, or Google with their Android, to change the TCP driver.
You know, how long are you meant to wait?
What if it's not right? QUIC says, no, no, no, no, no, no. These machines are now fast enough.
I don't need to put the protocol in the kernel.
I can do it all in user space. All of it. All of a sudden, my app can do what it wants with traffic control.
There is no single TCP in a QUIC world. My QUIC might not be your QUIC.
You can't tell. It's all encrypted, but I'm in control of the way it works.
So if you have a bright idea and can implement it in a QUIC stack, go for it.
You know, you've got your own little ecosystem running. It brings control. It brings security.
It brings simplicity, in a sense. Everything we ever wanted at the application world and reduces external dependencies.
Again, everything applications ever wanted.
And there are fewer points of failure. Well, it ends up being, I think, fewer points of failure in the long run, but it's more about reducing points of reliance, critical paths, waiting, the things that reduce the pace of innovation.
You know, we keep on saying permissionless innovation on the Internet, but it's kind of nonsense, really, because if you're waiting for Apple to release a new version of their TCP driver, you could be waiting a very long time.
That's not innovation. That's just someone in the way.
QUIC kind of released that and got you out of that mode.
It lets you do what you want in your own software stack, and that was actually a really powerful feature.
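The earlier point about fetching thirty things over one connection is essentially a head-of-line blocking story. A toy sketch, purely illustrative and not a protocol implementation: lose one packet and a single ordered byte stream (TCP-style) holds everything behind it, while per-stream ordering (QUIC-style) only delays the object that actually lost data.

```python
# Packets for three objects interleaved on one connection; packet 2 is lost
# and only arrives at the very end, as its retransmission.
packets = [("css", 1), ("js", 2), ("img", 3), ("css", 4), ("js", 5), ("img", 6)]
lost = {2}
arrival = [p for p in packets if p[1] not in lost] + [p for p in packets if p[1] in lost]

# TCP-style: one ordered byte stream. Nothing past a gap is delivered until the gap fills.
delivered, buffered, next_seq = [], {}, 1
for obj, seq in arrival:
    buffered[seq] = obj
    while next_seq in buffered:
        delivered.append(next_seq)
        buffered.pop(next_seq)
        next_seq += 1
    print(f"packet {seq} arrives, delivered so far: {delivered}")

# QUIC-style: each object is its own stream, so only the "js" stream waits for packet 2;
# the "css" and "img" objects complete as soon as their own packets are in.
streams = {}
for obj, seq in arrival:
    streams.setdefault(obj, []).append(seq)
print("per-stream arrivals:", streams)
```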
There's a lot of talk about AI, but also post-quantum cryptography.
Quantum computers could be around in a few years. What are your thoughts on post-quantum cryptography?
Well, I think we're in a terribly bad challenge point at this point in time.
There is this world of secrecy, and the issue is, we think there are folk who are recording everything today, encrypted, but they're not going to try and decode it until they get a computer fast enough.
What that means is, we not only have to encrypt our data for the capabilities of processes around today, but we have to make a savage guess at how good they're going to be in 20 years.
20 years. If we're doubling every two years, that's 1,000 times faster, 1,000 times better.
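The arithmetic behind that figure is just repeated doubling, and the doubling period is of course itself an assumption:

```python
years, doubling_period = 20, 2
print(2 ** (years / doubling_period))   # 1024.0, roughly the "1,000 times" figure
```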
How do I encrypt a secret that resists computers 20 years from today?
That's the challenge of cryptography post-quantum. Now, you kind of go, well, what needs that?
I'm going, well, every communication, if I've got a browser and I'm having a super-duper conversation with my friend, and I would like that to be a secret for 20 years because I'm the president of blah, then I need to use a really robust cryptography today.
For the future, even.
For 20 years into the future to keep it a secret, because I'm never going to re-encrypt it.
It needs to be that. The race is on in certain parts of this industry to create cryptography that works for 20 years' time.
And some of the browsers have already got post-quantum cryptography suites to actually use for precisely that purpose.
It's infected things like the DNS because DNSSEC, do we go there?
Do we do it? And the kind of question is not can we do it, but what parts of the DNS need to be held a secret for 20 years?
The public DNS, let me add. The bits of the DNS that's always out there.
Well, is that a 20-year-long secret?
So some parts, I think, are feeling the pressure for post-quantum cryptography today, and rightly so.
And NIST is working really hard to try and generate some cryptography algorithms.
The problem is, cryptography isn't perfect. Cryptography says, here's a solution that is computationally not feasible to solve with current computers.
The objective now is this is a problem that is computationally infeasible to solve with computers that are built 20 years into the future.
50 years, who knows, right? That's the problem of the target you're trying to reach.
It's a very real issue, and NIST is, I think, doing a fine job, but it's big.
The U.S. agency. Yes, NIST, the National Institute of Standards and Technology. But they're expensive in terms of today's computing.
It takes time. It takes a lot of compute power.
It's not for everybody. It's not a commodity product today, this particular branch of cryptography.
Do you feel it's in a good place in terms of the current standard situation?
Secrets are our stock in trade, I think. Yes, I think it's in...
People are looking at it and spending money and doing the right thing to prepare for that kind of world.
Your bets on the viability of quantum computing, let's have a fair, fine debate because a bit like AI, only the future is going to tell us if this is just nonsense or if it's real.
Right now, trying to build single qubit computers that operate at 4 degrees Kelvin and so on, that's a bit esoteric, isn't it?
But then you think back and go, I'm going to build a chip fabrication plant that will build features that are 1 nanometer wide and I'm going to do it at a rate of thousands a day.
That's crazy talk. You can't do that. But we are. Makes sense.
It's quite interesting. We have our developer's platform and we could see on Twitter, everywhere, the excitement around AI, agents, MCP servers.
That's all things that are going on currently.
In what way do you think those new changes, those new use cases could reshape the way the Internet functions?
Could there be changes?
I think there are going to be changes but I also think that there's a lot more research that needs to be involved.
Most of AI so far is almost a bit of a trick.
The idea that I can, in silicon, replicate the richness of neural interconnectivity that occurs in brains like humans.
We're kind of going, yeah, we can do it with the same number of neurons.
We could possibly get close to the same number of interconnections.
What's the process that achieves in a brain what we call rational thought?
The AI folk go, I don't know. What do we do? We'll develop a few rules.
There are these tweaks on the parameters. Is that a science? Nah, it's an art form.
We're playing really hard. In some ways, the play has taken over the underlying issue of is this an inferential process?
This is an age-old AI debate. Is AI merely pattern imitative?
Or is there the effort to try and do analytical decision-making inside those systems?
The AI folk were always split between the two. Can this machine perform reasoning?
Or is it just blind pattern matching? Most of the results from the chatbots and so on tend to suggest it's blind pattern matching.
It's really good. It looks and sounds like me, like you, like anyone. It's really good.
But when you push it, it's not true. It makes glaringly simple mistakes because it's pattern matching, not inferential reasoning.
Will it take some time to get inferential?
I don't know. But if it ever gets there, be really, really worried.
A bit like Stephen Hawking was saying some years ago, that's the civilization-defining moment.
If those things ever get to do deep, reasoned inference, as well as really acute pattern matching, that would be a bit scary.
But also, it's where the industry is going.
You can see that AGI, there's a lot of talk on that.
And the evolution, it's crazy. I think this industry works on fads for the next big thing.
And when they don't quite turn out the way they want, they immediately switch their attention.
Dare I mention blockchain. And it was a fad, and it consumed a huge amount of power and effort.
But I think we've largely gone, it's not a fad.
It has a role, and it has a fine role in certain places, but it's not the answer to everything.
And similarly, I think, with AI right now, I would call it faddish.
And the reason why is, this is simple pattern matching with parameters.
It's not the other part of this, of deep inferential reasoning.
And it's that combination that's going to be, oh my God, scary. Now, there is some thought that if we pump enough money in now, whoever's in there first will gain the keys to the AI kingdom and dominate it.
Could be true. Could not. It's purely speculative.
Do you think the Internet protocols could change because of the current changes specifically, or would need to change in the future?
I've always argued, I think, that the Internet protocols were phenomenally minimalist.
And part of the reason why they were so successful is that they did as little as possible, and not even that well.
It was artful that the Internet protocols did not include routing.
It didn't include the domain name system. It didn't include a whole bunch of things.
It was merely just a packet formatting that was slightly better than the ethernet MAC encoding.
Better. But beyond that, it kind of wasn't.
And that minimalism is enduring because you can build anything on it. And the beauty about endurance is, I don't have to recreate everything from the bottom to the top.
I can start midway up at an IP level and build something further on, like QUIC.
I change as little as possible and keep constructing upwards. I don't need to go all the way back to figure out how to encode the photons on the fiber.
And so from that respect, is IP enduring? Yes. Surprisingly, ethernet encoding.
Who would have guessed out of all of these 70s technologies, ethernet was the one that was going to drive the world.
And no one, no one would have bet on that.
Yet everything is ethernet. Literally everything. And it's kind of, why change it?
Very interesting to see even old protocols, old ideas come to life after many years of not being in the prime.
No one cared about that. Well, no one cared while we're experimenting with alternatives.
But at some point, the conversation moves on.
And it's kind of, you assume stuff and never bother to re-question it because it's working.
And so I think, same as we assume MAC encoding on ether.
There's a lot of assumptions around IP. Interestingly, because of the fractured address space between V4 and V6, that's still an interesting point.
But we've moved beyond it a bit.
I actually think the most enduring part of the Internet, and this is bizarre, is actually the name system.
It's the DNS that is the glue of the Internet.
And I actually argue that the symbolic system is the key enduring factor of the Internet, rather than the protocol.
I always like to ask this.
If you have a wish list for the Internet for the next 10 years, what would that wish list be?
Things you really want to see more progress or new things being added?
Geez. The problem is that a wish list from 10 years ago would already have been granted.
I've got fiber to my home. I've got hundreds of, you know, I've got stuff on my wrist that's a supercomputer.
So many things have already happened in a sense.
Oh, I think many things have happened, and it almost takes more creative types than me, a humble engineer in that respect, to think about what's missing in my life, because I'm finding it hard to identify that kind of stuff.
We have gone an awfully long way in what we can do. And yeah, I think there's still markets and opportunities out there.
But to define it in terms of artifacts, that's hard.
But even in the routing security, in those domains... Oh, routing security, oh, it's a mess.
So a lot of improvements... No, no. I've said before, I think we actually need to do a whole bunch of very fundamental work in the domain name system to actually get over 1980s thinking.
I think if we did that, we wouldn't be so scared about routing security because it's the domain names that really matter, not the integrity of the underlying addressing plant.
You've got to pick the level where you want to work. And I would much rather see, if I'm going to have my wish list, DNS over QUIC everywhere.
If you get that done in five years for the planet, I personally will be a very happy person.
I have a few quick-fire round questions.
One actually comes from Olafur, which is: what's the strangest thing you've seen or observed on the Internet?
I'm a geek. I'm a bad geek.
And I was doing packet capture. You know, listening for packets. And I was doing it in V6.
And I'm kind of listening, right? And I see this ICMP, this error message saying destination unreachable.
So here's a packet that's been sent somewhere where there is no thing at that address.
It's not reachable. The source address of this ICMP packet was an unreachable address.
It's kind of, I came from nowhere.
Here's an error message that's coming from nowhere. Who the hell generated that kind of packet, that just popped into material existence without a reason or a destination?
And I still find that packet an enigma. It's kind of, someone did it somehow.
And I wish I knew how. Like a magic pattern. A magic pattern that came from nowhere and is going nowhere. And yet I captured it. Yay.
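For the curious, the shape of that enigma can be looked for directly. Below is a minimal packet-capture sketch, assuming the scapy library and capture privileges (neither of which Geoff mentions), that flags ICMPv6 Destination Unreachable messages whose own source address is not a global, routable address.

```python
# Minimal sketch: watch for ICMPv6 "destination unreachable" errors whose
# source address is itself not a global unicast address. Assumes scapy is
# installed and the script runs with packet-capture privileges.
from ipaddress import ip_address

from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6DestUnreach

def flag_enigma(pkt):
    if IPv6 in pkt and ICMPv6DestUnreach in pkt:
        src = ip_address(pkt[IPv6].src)
        if not src.is_global:
            print(f"ICMPv6 unreachable *from* a non-global source: {src}")

# Capture only ICMPv6 traffic; stop after 100 packets for the example.
sniff(filter="icmp6", prn=flag_enigma, count=100)
```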
Also, as a follow-up: what is the stupidest behavior you've seen on the Internet in all your years?
Oh.
Oh. Right now, so many. There are so many. When you don't get an answer when you're opening up a TCP session, what do you do?
Well, you send the same opening packet again.
How long do you wait? A while. And again. And again. How long do you wait?
Well in some versions of Linux, you wait for three minutes. That's a geological age.
It's kind of, three minutes later, I've made a connection. Oh, you've gone away, haven't you?
And we're still doing that. It's in a whole bunch of stacks.
That's insane. But that's not the most insane one. The most insane one is a certain recursive resolver that is widely used today.
That when you give it a question and the servers are unresponsive, it just keeps on asking.
A day later, it keeps on asking.
It's kind of, when do you give up? It's not very smart.
Well it's just, it's an answer. I must solve it. Everyone else is gone, but I have this question I must solve.
I think, you know, that's about the dumbest thing I've ever seen.
It's kind of, when it's not working, cut loose, get out of here, because the user's long since gone.
Their attention span's seconds long. Emulate the user and move on.
I'm asking. I'm asking. I'm asking. Is anyone there? I'm asking.
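Both of those behaviours come down to the same missing discipline: nothing puts a user-scale deadline on the retries. As a hedged illustration (the addresses and timeout values below are made up for the example), an application can impose its own deadline instead of inheriting the kernel's or the resolver's:

```python
import socket

# Sketch: impose an application-level deadline on a TCP connection attempt,
# rather than inherit the kernel's SYN-retransmission schedule (governed on
# Linux by net.ipv4.tcp_syn_retries), which can leave connect() hanging for
# minutes.
def connect_with_deadline(host: str, port: int, deadline_s: float = 3.0) -> socket.socket:
    """Open a TCP connection, but give up after deadline_s seconds."""
    return socket.create_connection((host, port), timeout=deadline_s)

try:
    # 192.0.2.1 is a TEST-NET documentation address; it is not expected to answer.
    conn = connect_with_deadline("192.0.2.1", 80)
except OSError as exc:
    print(f"gave up on the user's timescale, not the kernel's: {exc}")

# A DNS lookup can be bounded the same way. With the third-party dnspython
# package (an assumption, not part of the standard library), something like:
#   import dns.resolver
#   dns.resolver.Resolver().resolve("example.com", "A", lifetime=2.0)
# gives up after two seconds instead of retrying indefinitely.
```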
Maybe AI can help. Yeah, right. Stop it. In one word, one word to describe the Internet today.
Everything. One protocol you wish more people appreciated?
Oh, appreciated. I'd have to say the DNS. Why? We broke the address space.
Routing doesn't really matter anymore. With this world of content, everything is name-based.
Everything is based around security, integrity, and distribution of names.
The fact that we've mishandled names and that we are really relying on a bunch of folk, hi, ICANN, who have decided to monetize names rather than tend names and think about the architecture of the names, saddens me deeply, because it's names, the symbolic system of the Internet, that is actually the enduring artifact.
The fact that you and I can talk about a referential object over this network, and we both know the same thing we're talking to, is the true magic of networks.
And that referential integrity relies on the DNS. And I see a whole bunch of folk who are just abusing the DNS as a plaything for either the dreams of wealth or just as a way to annoy everyone else.
That makes me sad. More dangerous, quantum computing or AI takeover?
I have...
AI. Quantum computing strikes me as voodoo physics. I just can't wrap my head around spooky action at a distance faster than the speed of light.
I'm sorry. That just doesn't...
I'm not there. Let's see if it plays out. IPv6 in one word? Increasingly irrelevant.
People are going to hit me around the head on that one. We've moved on from addressing integrity.
V4 and V6 were both 1980s architectures. We've moved beyond it.
We've lived so long in a world of address shortage, but addresses aren't critical anymore.
And now we're living in a world that has both address families out there. It's a dual-stack world.
Get over it and move on. I'm laughing because when I joined Cloudflare I started to write a blog post about IPv6, the adoption worldwide, different countries and all that.
And I was surprised by the bottleneck of trying to explain why it was so helpful these days and yet why it was not so widely adopted.
The sadness of V6. We need more addresses. Oh, we just need to put a bigger address field in the packet header.
Oh, it's 32 bits. What's bigger? 64 bits. Well, that's really big.
Oh, you're not dreaming big enough. And after 1994, let's make it 128 bits big, which is just big, big numbers, grains of sand, size of the earth.
And so we did this 128-bit architecture. What did we do then? We took the low-order 64 bits and said, no, we're not going to use them.
That's the interface identifier.
Oh. Okay, so how many bits have I got left? Oh, 64 bits. Do we use all 64?
Well, actually, no. End-site prefixes are around 48 bits long. The equivalent of an IPv4 host address is now a site prefix, 48 bits long.
But I'm using NATs and timesharing and blah, blah, blah in v4. How many bits have I got to play with?
Oh, about 48. So let me understand this, because, you know, I'm a bit slow.
By the time we get to the end of the road, we've got so many computers, blah, blah, blah, that the NAT system in v4 is just hopelessly overloaded.
At that point, we'll turn to v6 and go, hi, and they go, no, we're full too.
Because there are no more bits in v6, the way we built it. It's human folly that we saw this vast expanse, put fences through it, and got it down to the same size as v4 plus NAT, and then went, well, it's better.
And the answer is, no, it's not. Sad.
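His arithmetic is easy to check. Here is a quick back-of-the-envelope in Python, following the framing above (the low-order 64 bits reserved as the interface identifier, end sites numbered with roughly /48 prefixes), showing how the fences shrink the space:

```python
# Back-of-the-envelope arithmetic following the framing above.
raw_v6_addresses = 2 ** 128   # the full 128-bit address space
v6_subnets       = 2 ** 64    # after reserving the low 64 bits as interface identifier
v6_end_sites     = 2 ** 48    # end sites commonly get a /48 prefix
v4_addresses     = 2 ** 32    # the space we were trying to escape

print(f"raw IPv6 addresses: {raw_v6_addresses:.2e}")
print(f"/64 subnets:        {v6_subnets:.2e}")
print(f"/48 end sites:      {v6_end_sites:.2e}")
print(f"IPv4 addresses:     {v4_addresses:.2e}")

# The deployable unit of v6 (a /48 site) is only 16 bits "bigger" than a
# single v4 address, which is the point about fences being made above.
print(f"/48 sites per IPv4 address: {v6_end_sites // v4_addresses}")
```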
Interesting. What scares you the most about the Internet's future? Oh, the whole cyber defense, cyber attack, the whole problem of software resiliency.
We are at risk every day and quite frankly, in a world that too readily swaps over into attack mode, our biggest point of vulnerability is this entire infrastructure.
We don't know how to build it well, right from the software, the systems, and everything.
And, you know, for anyone who works in those defense centers around this, it's just a nightmare on wheels.
Most underrated Internet figure?
Underrated Internet figure? Person? Yeah. Oh. I...
Underrated. Gee, that's a hard one, isn't it?
Thank you for that. I don't know how well folk know Van Jacobson, but if you say that TCP was the magic that made IP work, Van Jacobson was the man that truly spent his life working at that and explaining it in ways that were unbelievable.
I remember reading a paper where he'd likened TCP to the cistern of a toilet.
And it was a great paper, you know, it just did it well.
But yeah, Van, really clever guy, really cluey, and a phenomenal amount of work, but not well known outside of just that tiny circle.
I actually did a tribute to Dave Täht the other day.
He improved latency, but... Yeah, he did. He did. But, you know, if we're doing a rating system...
And Van, he was never on the IAB, he was never sort of right up there in leading lights.
Dave Täht was on the IAB for some time.
Now, Van just quietly did amazing work as a researcher. Awesome. If you could have dinner with one Internet pioneer, dead or alive, who would it be?
Oh, obviously Vint.
Vint Cerf. I heard he has a fine wine collection. No, Vint is the most engaging, vibrant, curious person I've ever met.
If you're in a room where he's up there in the audience, A, he's in the front row, B, he's the first person asking questions.
He is so bright, so fast, so quick to understand the implications of things.
He's just had his 80th, I think, a year ago. Happy birthday, Vint.
And he's as bright as ever. Just absolutely spot on. One of the leading lights of the Internet, and deservedly so, if not the leading light.
I love hearing his talks.
Well, he's engaging. He really does bring it to life. Yes. If you had a few questions for him, what would they be?
Vint has been a strong supporter, from within Google, of my work and the work of APNIC.
And at times, we have had questions.
Vint, can you make this go away? And he just does.
And so I reserve my questions to when I really need his help. And it's kind of, please, I need help.
But maybe a general audience type of question. Things that you think most people should actually know or even understand that he could actually explain.
You know, Vint was all about the engineering of the underlying systems.
And I suppose it's where I've been, too.
It's about how to make these things work and have enough curiosity and creativity to think beyond the box.
And, you know, I respect him incredibly for that kind of mode of thinking.
Then you go, well, what about the web?
What about this? And it's kind of, Vint's going, that wasn't my department.
That's at an upper level of the protocol up there. And, you know, I'm glad there are people like that working in this space.
The kind of question that, you know, often springs to mind is: would you have done it differently?
And the answer, I think, with Vint, was that he just had the insight to make some phenomenally powerful cut-throughs, to say, let's not make it complex, let's make it simple.
And I think that was a lone voice of reason in a world that was going, oh, this computer can do everything.
I'll just throw the lot at the problem.
Last but not least, and this is a more personal one: how do you feel about link rot? Even the history of the Internet from 20 years ago, pages that are not available right now, software that doesn't work.
How do you feel about the history of the Internet, the things that we did, the sites that were around, things like the Internet Archive?
Once you make something cheap, we no longer value it.
When books used to cost a month's wages for one book, they were treasured, put in temples called libraries, lots of worker bees looking after them, indexing them, because they were valuable.
We've now devalued that, and yes, bit rot takes everything. We don't... there's no motivation to keep it.
What does that mean? If you think about knowledge as a cumulative process, we're destroying our immediate past by neglect. And it's kind of, I don't know if you've noticed, there's a difference between the pre-search world, you know, the world that is not Google-searchable, and the world that is. And even Google search itself: we are trashing that environment, because Google search relied on folk really worrying about what they published, good web pages that link to others, decent research.
We see AI-generated craft with embedded ads as being the dominant ecosystem out there, and then we wonder why Google search gets confused, and the answer is, what else could we do?
We've trashed this space. How do we make it better?
The economist says, make it cost. It's too cheap. We are treating it with such neglect, it's not funny.
If I wasn't an economist, I'd simply say, think about what you do before you generate the next AI-generated piece of craft because it's just noise.
Are you concerned in any way that memory on the Internet will not endure?
That collective memory? Yeah. I'm pretty sure our view of history is actually the stories we told, not the reality that happened.
I think that's a fact of life that some folk will, over the period of a century or two, assume a role that was a bit fictitious.
We'll even invent some people, I'm sure.
The digital record probably doesn't exist because AI would have rewritten it.
Yeah, right. We're in the same kind of world, humans still being humans.
The most positive thing on the Internet for you? Oh, I'd never dreamt of this life.
I'd never dreamt that I could live in Australia, the bottom of the South Pacific, and be working on cutting-edge technology with people who are outstanding in their field and do so for decades.
It's kind of, wow, that is unbelievable.
The positive sides of this are just liberating in terms of individual aspirations and collective endeavour.
The downside is, oh my God, ads, oh my God, all this craft.
But the upside is just totally uplifting. And if you concentrate on that, the Internet truly is a wonderful place.
Is Wikipedia a trace of those good optimists?
Good and bad. There's no such thing as the wisdom of crowds, there's just crowds.
Some people in the crowd are brilliant and some people are just, oh, not so brilliant.
Wikipedia is a bit like that too. The researchers go, oh, poo, poo, poo, Wikipedia, and they're probably right.
But it has a role.
To be able to wander around with the world's collective knowledge in Wikipedia on my phone is unusual.
On the other hand, it makes pub debates over a beer really boring, because someone whips out their phone and says, here's the answer.
And I say, oh no, I was happy just debating it. Before we go, I need to ask this question.
You know Radar, Cloudflare Radar. What would you like to see there that is not there?
What would I like to see in Cloudflare Radar that is not there? Oh, I'm in for deeper and better analytics and better zoom capability in Radar.
I'd like to know the current query loads compared to yesterday and where it's trending.
I'd like to know the query types that are being used. I'd like to know when routing leaks occur.
I'd like to be able to zoom in and find out how they happen. I'd like to know where RPKI was in use and why its use would have stopped it.
But that's sort of my fascination with the DNS and with routing.
There's always more to do. Sure.
We also do the Year in Review, for example, for a look back at the year. What would you also like to see there?
I do my own years in review as well. And I sit there and take a very tight look at routing and addressing for the same reason.
I think those kinds of retrospectives are really valuable for understanding what the salient features of the last 12 months were that are driving us forward.
But there's one thing I want, and maybe Cloudflare can do it.
There used to be a thing called food miles.
How long did the food travel to get to your plate, particularly in a restaurant?
How local is your food chain? And if, like in Hong Kong, all the food travelled halfway around the world to get there, that's kind of wasteful and kind of wrong in some ways.
How long has the packet travelled to get to me? What's the packet miles?
Because we're actually building all these local data centers. We're building out and surrounding users with pre-provisioned data on demand.
How effective is it?
What are the packet miles of what we're delivering to users? The average in a year, for example.
The lower that is, the faster, the cheaper, and the better the network.
If you can get rid of distance, you've solved the problem of networks.
And we are solving it. I just like to see that metric. I think we can do it.
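There is no standard "packet miles" metric today, but a rough, hypothetical sketch of the idea is to bound the distance a packet could have travelled from its round-trip time, since signals in fibre cover roughly 200 km per millisecond. The figures below are illustrative assumptions, not measurements.

```python
# Hypothetical sketch of a "packet miles" style metric: an upper bound on the
# one-way distance implied by a round-trip time. Signals in fibre propagate at
# roughly two-thirds the speed of light, about 200 km per millisecond.
KM_PER_MS_IN_FIBRE = 200.0

def max_one_way_km(rtt_ms: float) -> float:
    """Upper bound on one-way fibre distance for a given round-trip time."""
    return (rtt_ms / 2.0) * KM_PER_MS_IN_FIBRE

# Example figures: a 12 ms RTT bounds the content at roughly 1,200 km away,
# while a 280 ms RTT implies it came from the other side of the planet.
for rtt in (12.0, 280.0):
    print(f"RTT {rtt:5.1f} ms -> content is at most ~{max_one_way_km(rtt):,.0f} km away")
```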
Miles to users. Thank you so much, Geoff. This was great. Thank you indeed. It's been a pleasure talking to you about it.
And that's a wrap.