Cloudflare TV

This Week in NET: 50th Anniversary of the TCP Paper

Presented by Mark Nottingham, Marwan Fayed, Lucas Pardue
Originally aired on 

This is a special feature celebrating 50 years of the “TCP paper.” We asked three of our team members, all experts in protocols, a few questions about why TCP is important for what the Internet has become, how it has evolved over the years with new additions, and what its future holds.

Participating are Mark Nottingham, our Standards Lead, based in Australia; Marwan Fayed, a Systems Engineer from our Research team, usually in the UK but here in Lisbon, Portugal; and Lucas Pardue, a senior software engineer specializing in Internet and web protocols such as HTTP/2, HTTP/3, and QUIC — Lucas is Co-Chair within the IETF (Internet Engineering Task Force) QUIC (Quick UDP Internet Connections) Working Group.

Some context: In May 1974, the IEEE (Institute of Electrical and Electronics Engineers) Transactions on Communications scientific journal published “A Protocol for Packet Network Intercommunication.” Authored by Vint Cerf and Bob Kahn, that was the paper that described the Transmission Control Protocol (TCP) that supported the interconnection of multiple packet-switched networks into a network of networks. Split later into TCP and an Internet Protocol (IP), TCP and IP became core components of the Internet that DARPA launched operationally in 1983. The rest, as they say, is history.


Transcript (Beta)

Could you ask a broader question? Why is TCP important? TCP is fundamental to the Internet.

Relatively speaking, it's one of the few technologies that has been able to persist.

TCP is the part that most Internet software uses on a constant basis.

I suppose to understand why TCP matters, you have to go back to the beginning and understand the environment in which it was designed.

When we talk about Internet protocols, the core, what they call the narrow waist, or what everything kind of funnels through, is TCP/IP, this pair of protocols.

There's been lots of things that have come and gone, or been overtaken by newer versions and updates, but TCP is a great example of something that's, what I like to think of as, quite elastic.

If we think about the Internet, the real innovation of the Internet was packet switching.

A more formal term for it is statistical multiplexing. So if you think about when we send messages, historically before the Internet, you have a channel, a communication channel over the air, or over a wire of some kind.

And you want to send multiple messages from multiple parties to multiple parties.

And then the question emerges, how do you share the communication channel, the communication medium?

And historically, the obvious ones are, either you use different frequencies, so you share the channel in space, or you split it up in time, so every communicating party gets a small slice of time in a bigger window of time.

Both of those, however, have a limited capacity: the number of frequencies in a set of wavelengths, or the units of time that you're willing to parcel up and how finely.

A lot of people who have been around long enough will remember when they tried to make phone calls, especially international calls, before the Internet sort of exploded.

You could dial a number and you could get a very fast busy signal.

So there's the normal busy signal, eh, eh, but there was the fast one you would get on occasion, eh, eh, eh.

And what that meant was, there was no capacity available for your call.

And this, again, is probably either time division or frequency division multiplexing in some form.

So the Internet comes along, this new notion of data communication, and they decided to do what was called statistical multiplexing.

So a message is taken up and parceled up into little packets, little pieces, and then they are free to roam around to get to their destination.

And because at the time, they weren't associated with live communications back and forth, like a phone call, you could afford to make them wait in the network a little bit and buffer, okay?

But people communicate at different rates, at different times of day, and so the buffers would grow and shrink and grow and shrink.

What it meant was, you could get many more communications on a channel on a pass than you could if you just took a second and split it up into one-tenths of a second, for example.
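To make that concrete, here is a minimal sketch in Python; the sender count, slot count, and activity probability are made-up illustrative numbers, not anything from the talk. With fixed time slots, a slot is wasted whenever its assigned sender happens to be idle, while statistical multiplexing wastes a slot only when nobody at all has anything to send.

    # Sketch: fixed time slots vs. statistical multiplexing (hypothetical traffic).
    import random

    random.seed(1)
    SENDERS = 10
    SLOTS = 1000          # channel opportunities to carry one packet each
    P_ACTIVE = 0.2        # each sender has a packet ready 20% of the time

    # Time-division multiplexing: each slot belongs to one fixed sender, so the
    # slot goes unused whenever that particular sender has nothing queued.
    tdm_used = sum(1 for _ in range(SLOTS) if random.random() < P_ACTIVE)

    # Statistical multiplexing: any sender with a packet queued may use the slot,
    # so it is wasted only when nobody has anything to say.
    random.seed(1)
    stat_used = sum(1 for _ in range(SLOTS)
                    if any(random.random() < P_ACTIVE for _ in range(SENDERS)))

    print(f"TDM slots carrying data:             {tdm_used}/{SLOTS}")
    print(f"Statistical-mux slots carrying data: {stat_used}/{SLOTS}")

With ten bursty senders that are each active 20% of the time, the shared channel ends up carrying data in roughly nine out of ten slots instead of two.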

But now a different kind of problem emerges around this time, which is, there are many different network technologies that are emerging, and so there's a need to have these networks communicate with each other.

And the second is, if you are splitting up a message in this way, and the packets can travel anywhere they travel, then there's an element of, what happens if two different packets take two different paths to the destination and arrive out of order?

Or, how do I know that a packet has reached its destination?

Or, what if I send too fast for the receiver to understand the message that I'm transmitting?

So, TCP accomplishes these, and later a little bit more.
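A toy sketch of those problems, assuming nothing about real TCP's wire format: sequence numbers let the receiver put packets back in order and spot gaps, and acknowledgements tell the sender what to retransmit. Flow control, the third piece, would additionally cap how much may be outstanding at once; it is left as a comment here.

    # Toy sketch: ordering and acknowledgement over an unreliable packet network.
    MESSAGE = b"A PROTOCOL FOR PACKET NETWORK INTERCOMMUNICATION"
    SEGMENT = 8   # bytes carried per packet

    # The sender numbers each piece so the receiver can reassemble in order.
    packets = {seq: MESSAGE[i:i + SEGMENT]
               for seq, i in enumerate(range(0, len(MESSAGE), SEGMENT))}

    received = {}
    for seq, data in packets.items():
        if seq != 2:                 # pretend packet 2 is lost in transit
            received[seq] = data

    # Acknowledgements: the receiver reports what it has; the sender resends the rest.
    # (Flow control would also limit how many packets may be unacknowledged at once.)
    for seq in sorted(set(packets) - set(received)):
        received[seq] = packets[seq]

    assert b"".join(received[s] for s in sorted(received)) == MESSAGE
    print("message reassembled in order despite a lost packet")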

It's worth pointing out at this moment that this 50-year version of TCP does not include congestion control.

Congestion control actually comes some years later.

And also, this first version, the 50-year anniversary version, hasn't yet separated TCP from IP.

That comes a few years later as well.

Both of those ideas are embedded in this first version. And that's reflected in the fact that if you look at the RFC, the specification at the IETF that's attached to this peer-reviewed piece of research work, it's actually called the Transmission Control Program.

It doesn't become formally the Transmission Control Protocol until a few years later when they separate IP.

But this piece of work draws on a lot of knowledge built up over the 10 to 20 years prior.

People were trying to figure out what data communication and packet-switched networks should look like.

And then with this understanding that they need to talk to each other, this is what the Transmission Control Program at the time did.

So it described the ideas of gateways.

Whatever you have behind a gateway, you're free to design your network technology.

But if you start to communicate on this sort of shared network that joins everybody's, there are certain formats you need to follow.

It defines the hosts, the endpoint parties, with most of the complexity being in those hosts trying to figure out how fast or slow to send something so that the receiving party can absorb it, take it off the wire.

And so what makes it particularly special is the designers, Cerf and Kahn, of course, having built on a bunch of knowledge that comes from the community at large on both sides of the Atlantic, they got the abstractions right.

And that's the critical component. By that, they sort of figured out what is the minimum set of things that we have to put in the network in order for all of the endpoints to communicate successfully and to share the network equally.

The fact that those abstractions have persisted until today, and quite frankly, even in the new version, so I'm sure people start thinking, how does TCP relate to QUIC and what's the future and so on, and I suspect we're going to talk about those things.

But those abstractions persist, have persisted, and will continue to persist over time, which is strong evidence that they are the right ones and certainly a minimum set of some kind that's required for success.

It is ancient in Internet terms. These technologies, data science, et cetera, are very, very young in the whole history of things, but relatively speaking, it's one of the few technologies that has been able to persist effectively.

There's been lots of things that have come and gone or been overtaken by newer versions and updates, but TCP is a great example of something that's, what I like to think of as, quite elastic.

It's been able to scale way beyond the initial design parameters.

The Internet back then was a very different place: constraints in terms of physical connectivity between places, the way that networks were architected, the number of people using the Internet, the competition, the kind of overselling and overcapacity that helps build networks, and the provisioning of those things.

TCP was designed in a time before a lot of that. And what it's been able to do is expand to the Internet as we see it today, but also work on very small devices like IoT, through to being extended or modified to be very performant in networks like data centers that are very well connected, something people call short pipes.

And effectively, it's the same design, but you can go in and tweak parameters of how the interactions work.

Something like congestion control is a very complex topic, but the general idea is that you can have pluggable algorithms and those algorithms define how a sender sends data.
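On Linux, for example, that pluggability is visible right at the socket API. A hedged sketch, assuming a Linux kernel; Python exposes the option there as socket.TCP_CONGESTION, and the "bbr" module may or may not be built into any given kernel:

    # Selecting a pluggable congestion control algorithm per TCP socket (Linux only).
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    default = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("default algorithm:", default.rstrip(b"\x00"))
    try:
        # Ask the kernel to drive this connection with BBR instead of the default
        # (typically CUBIC); this fails cleanly if the module isn't available.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
        now = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        print("now using:", now.rstrip(b"\x00"))
    except OSError as exc:
        print("bbr not available on this kernel:", exc)
    s.close()

The point is that the wire format stays the same; only the sender's decision about how fast to send changes.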

TCP is an acronym for Transmission Control Protocol. So it's really about that control.

And yeah, one of the benefits of TCP is that it's deployed on the Internet and the Internet still works.

There was a period of time where, as more people came online, networks became more and more congested.

There's this whole concept of congestion collapse that I won't go into, but it's a very real risk that basically, just like traffic jams can cause everything to come to a standstill, the Internet could too.

And TCP was critical in helping to avoid that by providing congestion control or congestion avoidance, really.

Basically detecting when things are starting to slow down or when losses are occurring as an indicator that something's not going that well and maybe I should stop sending data faster than the network can consume it.

And over time, we've been able to evolve those algorithms.

Like I said, data centers are a very specific, controlled environment where maybe you control clients and servers and every parameter.

You can even get down into kind of embedding this into hardware offload and all of these sorts of optimizations, but the general case is the Internet.

Things are much less controlled. So you need the ability to have a sender-driven congestion control algorithm that's not relying specifically on what a receiver does.

And that's one of the main benefits of these kinds of protocols: the ability to interoperate in an uncoordinated manner.

And yes, we've gone from very basic algorithms to ones that adapt more to the needs.

Like I said, there's the changing nature of the network; for example, it used to be fixed line.

Everyone's at a big terminal sat down, maybe connected to a mainframe that's speaking within a local area network.

So now everyone has a more powerful computing device in their pocket that is connected 100% of the time, say either via Wi-Fi or via cellular networks.

And the properties, the characteristics of those physical paths, are wildly different.

And it's a testament to TCP that it's been able to just deal with it and continue to effectively improve the speed and performance that it can operate at in line with increasing bandwidth and reduction in latency that we naturally are pushing towards to make more interesting use cases for the Internet, better interactivity, richer experiences, those kinds of things.

TCP is fundamental to the Internet.

When we talk about Internet protocols, the core, what they call the narrow waist, or what everything kind of funnels through, is TCP/IP, this pair of protocols.

And TCP is the part that most Internet software uses on a constant basis.

Without it, applications would have to recreate a bunch of different facilities that it provides.

And so it really enabled the Internet as we know it.

Now, the protocols that I specialize in all use TCP.

So HTTP is what I mostly specialize in. And so I'm aware of TCP's evolution, but more as a user of it.

So when HTTP was first created, for example, it used TCP really, really badly.

Every image, every piece of HTML was downloaded over a separate TCP connection.

And so the history of HTTP is really the history of optimizing how it used TCP.

For many years, it was around, okay, let's do persistent connections.

Let's use that connection for more than one request. And then it was, let's do multiplexing when we did HTTP/2, so that we were not having to even use more than one connection.

We could do it all over one TCP connection and really get the most out of TCP.
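A small illustration of persistent connections using Python's standard library, with example.com standing in for any HTTP/1.1 server that honors keep-alive: both requests ride the same TCP connection rather than each paying for its own handshake.

    # Two HTTP requests over one persistent TCP connection (HTTP/1.1 keep-alive).
    import http.client

    conn = http.client.HTTPSConnection("example.com")   # one TCP + TLS handshake
    for path in ("/", "/"):
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()                               # drain before reusing the connection
        print(resp.status, len(body), "bytes, over the same connection")
    conn.close()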

And it's really only in the last few years that we exceeded the abilities of TCP.

And the decision was made, well, let's create a new protocol called QUIC that doesn't use TCP.

But it took many, many years to get to that point when we exceeded the capabilities of TCP.

So the web was utterly dependent on TCP for much of its lifetime.

I think as you track the evolution of this thing, I'm confident, even having interacted with some of the people who were around back then, you get the sense that even they had no idea that the Internet would grow into what it has become today.

Maybe they could have anticipated that the Internet would become what it was 10 or 15 years ago, where it was still, it was a means by which people got information, transmitted messages around.

But we could argue now that it is, I don't think anyone would argue that it's not critical infrastructure in many parts of the world and increasingly more of the world.

I would go a step further and argue that it is not just critical infrastructure, but it is the critical infrastructure on which a lot of other critical infrastructure relies.

So now it's no longer about making a phone call, sending an email, getting information, registering for some service, or even getting your TV broadcast or your radio.

Now it's about the energy grid uses it for control, for alerting.

The water infrastructure, sewage infrastructure, all of these devices, when people are trying to monitor what's happening or control what's happening, the things that we rely on on a day-to-day basis, they have gradually been transitioning to data communication networks.

And so the magic of the Internet is that it's designed so that if pieces fail, there are always other pieces that can compensate and so on, which is, I think, one of the things that makes it pretty extraordinary.

But in order to convey the importance of it: for one, no one really sees it or is aware of how it works, which is a sign of its success.

At the same time, virtually everything relies on it or probably will at some point in the future.

Luckily, I'm not that old. I wasn't there. But as I understand it, the politics around licensing and around use of the technology, TCP came from a culture that became the IETF culture of radical openness and sharing.

And I think that more than anything had impact on adoption. I think as always, as a technology sector matures, it becomes harder for new entrants to establish themselves.

You kind of reach an equilibrium of something being good enough, especially given the elasticity of TCP to adapt to changing needs.

We kind of think of this stuff in other segments as disruptors entering a market and trying to gain market share.

They need to provide some kind of unique selling point or fix a problem that isn't addressed by the incumbent.

And so I'd say TCP's greatest success or the way it was able to establish itself compared to maybe other options, I'll answer it in a way you don't want me to, but it just works and it's there and it's widely deployed.

There's various things that can be done with TCP that some people like to do.

So this is a transport protocol. The little bits of information being exchanged, the packets or the TCP segments that they contain, a lot of that information is visible to anyone who wants to look at it.

And that provides them an opportunity to maybe understand how traffic flow is going backwards and forwards and maybe try and optimize it and tweak it for the things that they know about their network.

That sounds great in practice. The issue that TCP has, though, is that while that visibility can sometimes be a benefit, a client and a server also expect their interaction to be unmodified or untainted by the network.

Can't guarantee that. There's no security or integrity or authenticity of those communications.

Those bits can be changed. There are some checks, like a CRC check or whatever, but they're pretty weak and they're easy to effectively recalculate.

And so, picking the train of thought back up: effectively, this speaks to the concept of ossification.

So while TCP is wildly successful and you would have it supported in your devices and you might have a home gateway, for instance, with a firewall that can open up specific ports to allow you to run services yourself, et cetera.

There's a whole ecosystem built around understanding TCP, tooling, configuration, user interfaces, documentation.

And again, that's a testament to just how well TCP works in that lots of people wanna use it and describe how to use it well.

And so therefore, TCP won compared to other transport protocols.

So we'll be familiar with IP, which is a layer three protocol; TCP is a layer four protocol.

What does it add on top of IP? It provides a port number, like I just talked about for identifying different kinds of services.

It provides reliability.

So, guaranteed reliability: applications using TCP don't need to worry about network packet loss.

They write it to TCP; TCP owns delivering that data reliably to the other participant.

From that layer of abstraction, you can start to build a lot more interesting applications.

You don't always want that.

So another transport protocol that's widely deployed is UDP, the User Datagram Protocol, which can send things unreliably.

That's more popular for things like media streaming, low latency video, those kinds of things, because quite often you don't want to retransmit.

Loss detection and retransmission take time. You need to wait for the other side of the connection to tell you what was received or what wasn't received.

And that's taking a round trip from me to you, which could be milliseconds, tens of milliseconds.

Those kinds of things add up, especially when you might only want a delay from me to you of a few dozen milliseconds.

And so effectively for any new entrant that might come in terms of transport protocols, they would need to provide features that neither TCP nor UDP can satisfy.

And in lots of ways, you could build your own TCP replacement on UDP because it effectively doesn't really do anything other than provide a port number.
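A minimal side-by-side sketch with Python's socket module, running entirely on the loopback interface with ports picked by the OS: UDP just addresses a datagram to a port and forgets about it, while TCP first establishes a connection that the kernel then keeps reliable and ordered.

    # UDP: fire-and-forget datagrams to a port; no connection, no delivery guarantee.
    import socket

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))                  # OS picks a free port
    port = rx.getsockname()[1]

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"hello", ("127.0.0.1", port))   # no handshake, no acknowledgement from UDP itself
    data, addr = rx.recvfrom(1500)
    print("UDP datagram:", data, "from", addr)

    # TCP by contrast needs a connection first; the kernel then handles ordering,
    # acknowledgements, retransmission and congestion control for the byte stream.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0)); srv.listen(1)
    cli = socket.create_connection(srv.getsockname())    # three-way handshake here
    peer, _ = srv.accept()
    cli.sendall(b"hello over a reliable byte stream")
    print("TCP stream:", peer.recv(1024))
    for s in (cli, peer, srv, tx, rx): s.close()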

And so, jumping ahead to QUIC a bit: that's what QUIC did. I'll come on to that in more detail in a moment, but there was an attempt in the IETF to create something different called SCTP.

I can't remember exactly what the acronym means, Stream Control Transmission Protocol, something along those lines.

But SCTP added unique selling points that TCP couldn't provide.

TCP has one issue called head-of-line blocking, which relates to loss detection and recovery.

And you have this whole long, reliable byte stream of things, which is great.

It's a feature. But if the byte stream you receive has a gap in it, maybe at the start, then even though the data behind the gap has been delivered and is sitting in your computer waiting to be read, the TCP interface won't let you, generally speaking, access those bits.

And that's really annoying because it adds spurious delays: your receiver application thinks maybe it took longer for the other side to send the data, but actually they were waiting to retransmit.

And this is annoying for use cases like web browsing or things that are sending lots of stuff.

If you're only pulling one thing at a time, these things don't really, they don't really matter.

But if you're doing multiplexing, say you have a webpage that's loading HTML and images and so on and so forth, all at the same time, head-of-line blocking starts to cause issues for applications.
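A toy model of that head-of-line blocking, not real TCP, just its in-order delivery rule: the segment covering bytes 0 through 9 is "lost", so even though later bytes are already buffered and waiting, the application can read nothing until the retransmission fills the gap.

    # Arrived segments keyed by byte offset; the segment at offset 0 was lost.
    buffered = {10: b"IMAGE-PART", 20: b"MORE-HTML-"}

    def readable(buf, next_offset=0):
        """Only contiguous, in-order bytes may be handed to the application."""
        out = b""
        while next_offset in buf:
            chunk = buf.pop(next_offset)
            out += chunk
            next_offset += len(chunk)
        return out

    print(readable(dict(buffered)))   # b'' -- everything waits behind the gap
    buffered[0] = b"LOST-HTML-"       # the retransmission finally arrives
    print(readable(buffered))         # all 30 bytes drain at once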

Like say, people want fast, interactive experiences that feel nice.

And any sort of delays or jankiness in that is quite disruptive, especially as our expectations keep rising.

And so for TCP, there were proposals to address that and kind of avoid some of this head-of-line blocking.

But SCTP was one that was adopted by the IETF and went through the full standardization process.

And the main difference is that it added a concept of multiplexing at the transport layer.

So instead of just one reliable byte stream, you would have multiple streams, kind of like multiple TCP connections, but all under one umbrella of control and management.

And it was done and it worked.

There were implementations of it. It ran the whole standards process, but it failed in terms of getting deployed on the Internet because of the deployability aspects.

You would need to go and change everyone's home gateways, their networks, the firewalls; everything that forms the plumbing of the Internet is already geared up for TCP and UDP, and just wasn't that interested in updating everything in order to support this new transport protocol.

So SCTP was kind of like a false start in terms of fixing TCP.

And subsequently, QUIC came along, which did something pretty much identical, different reliable streams as a transport layer feature, but running over UDP.

And UDP is allowed generally. And so, yeah, it addresses the deployability.

And so the authentication topic I mentioned, we're in an era now where we need to take security really seriously.

We need to protect the data flows, but also the metadata about them; both of these are critical information for privacy and confidentiality.

And so TCP kind of fixes the data flow aspect.

You can shove TLS in there, Transport Layer Security, and it protects lots of that information.
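A minimal sketch of that layering with Python's standard library, using example.com as a stand-in server: TLS is wrapped around an ordinary TCP socket, so the payload is protected, but the TCP headers themselves remain visible, and touchable, on the path.

    # TLS layered over TCP: the stream payload is encrypted, the TCP header is not.
    import socket, ssl

    ctx = ssl.create_default_context()                    # verifies the server certificate
    with socket.create_connection(("example.com", 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname="example.com") as tls:
            print("negotiated:", tls.version(), tls.cipher()[0])
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(200).decode(errors="replace").splitlines()[0])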

It doesn't address these middle boxes out there that might be changing things and effectively preventing new exciting extensions to TCP on the broad Internet.

In contrast, QUIC provides almost full encryption of the wire image.

There's a few bits that are exposed to help route packets here and there.

But beyond that, it's always secure. It's always got integrity protection and authenticity, which are kind of the key pillars of ensuring your communications are secure.

But not just that, it adds performance optimizations in terms of TLS handshake, et cetera.

So yeah, that probably answers your question in the reverse way that you wanted, but the approach is kind of the truth of the matter.

I think when we talk about anticipating the future versus getting the abstractions right, we can look back and talk about what wasn't done or maybe what was intentionally omitted, though I'm not old enough to have been at the table.

So we could say things like security was never part of the original design, be it encryption or even authentication; people generally knew how to do a lot of that, but those things were hard to accomplish.

And one of the design principles is sort of, it's called the end-to-end principle, where you intentionally wanna put all of the services and the complexity at the end points.

And there are good reasons for that, not least of which it's very hard to change the network once it is deployed.

And it is very hard to anticipate what you got right and what you got wrong.

And so putting those two things together, it just makes sense to put things at the end points.

Different parties can implement different things and still use the network to communicate.

The original Internet had no notion of monetization in it, the value of a packet or the connection to which the packets belong.

I think what I find most interesting about these important events is that, for all the things the community got right, I'm most enamored by one crucial piece that was not part of the original design.

And one day maybe I'll knock on a door, send a message and say, was it just that we didn't know it was coming or that we ignored it intentionally?

And if you believe all of the historical documents, it tends to point towards that no one really anticipated it.

And it's the notion of congestion.

So some years later, as I said, congestion control comes some years later.

And what happened was communications were starting to slow down. People couldn't get their data from one place to the other.

And it's because the amount of communication was growing more than the capacity of the network itself.

But I just said a few minutes ago, you can buffer packets and keep them and you can delay them a little bit and that's okay.

And still that wasn't enough. So packets were being dropped on the floor.

So congestion control comes along some years later.

And this is the notion that it's not just important what the receiver can receive.

So how fast the receiver can listen as it were, receive the transmission.

It's also important what the bottleneck can handle when it's being shared by however many parties, you don't know where they are or what they're trying to do.

So the sender needs to somehow detect that this has happened and then adjust its send rate.

And this is called congestion control. So TCP is a suite; the modern TCP suite today has three main components.

The first is reliability. So if I send a packet, I get a signal that it has been received.

Second, I get flow control. So the receiver gets to tell the sender: yes, you're free to send, or no, it's too fast, please back off.

And then the sender also has to detect if there is congestion somewhere on the path, because then it needs to be civil, if you will, and scale back the rate at which it sends, and then continue to increase gently.

This is the notion of you have some form of multiplicative increase.

So you increase, sorry, additive increase.

Don't wanna get that one wrong. Additive, so you increase at a gentle rate, but when you sense something is wrong, you back off quickly.

And this is the cycle.

People who might be a little bit familiar with TCP will know this is sort of the sawtooth behavior.
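A rough sketch of that additive-increase, multiplicative-decrease cycle; the numbers here are invented and real TCP is considerably more subtle, but the shape of the trace is the familiar sawtooth.

    # AIMD sketch: grow the congestion window gently, halve it on loss.
    cwnd = 1.0               # congestion window: segments the sender may have in flight
    BOTTLENECK = 32.0        # pretend the path starts dropping above 32 segments per RTT

    trace = []
    for rtt in range(60):
        if cwnd > BOTTLENECK:
            cwnd = cwnd / 2          # multiplicative decrease on detecting loss
        else:
            cwnd = cwnd + 1          # additive increase while all is well
        trace.append(round(cwnd, 1))

    print(trace)   # climbs by one per round trip, halves on loss: the sawtooth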

It's not quite the sawtooth today. That this came later and was omitted from the original design, I find absolutely fascinating, given all of the brilliant people working in many parts of the world.

That piece doesn't come until much later.

The choices made were quite clever in that when you create a transport protocol, you're making choices about what services you provide to applications.

The Internet is a packet-switched network. So it's unreliable.

There's no association between the packets except for the endpoints that they're associated with.

And so an application like the web needs to get reliable streams of information.

It needs to get them in order to make sense of them so that your HTML page isn't jumbled up or something.

And so the choices that TCP made in terms of what services does it provide to applications were quite clever in that you can try to provide a lot of extra services if you want to around security and around other aspects, but it was decided to be quite minimal.

And there's a philosophy of end to end where services should be interposed at the end points of communication rather than by the network in the middle.

And TCP adheres to that philosophy.

And the benefit is we've seen the Internet being able to grow and scale and serve the entire planet well.

And it's only really recently that we've seen some limitations in the abstractions in those choices that TCP has made, especially when you're doing lots of different things on one connection and there's a loss in the network because there's maybe a bad connection somewhere or you're on a mobile network that causes certain problems where you have to wait for that loss to be recovered before you can get other data that's behind what was lost.

And that's a bit of a limitation in the abstraction that TCP provides. And that's one of the reasons we developed QUIC was to allow a connection to do multiple things and have loss only affect one part of what's going on.

But again, that is only, we've only needed that really.

We've only uncovered that need in the last five, 10 years of the lifetime of TCP.

And so it really is an achievement that it lasted that long.

And of course it's still used ubiquitously. It's not going away anytime soon.

So TCP is a fantastic protocol. It evolved in line with a changing Internet underneath it, improving bandwidth and lowering of latencies.

But like with all performance optimizations and tuning, you're kind of like on this exponential curve of diminishing returns based on the amount of effort you're putting in.

And TCP was kind of approaching that limit. We realized for highly complex use cases, such as web browsing, when you're trying to transfer multiple resources on a single connection, that a facet of TCP's reliable delivery was causing issues in certain kinds of networks.

This is a concept known as head-of-line blocking.

So if you have some kind of packet loss, the reliable delivery guarantees mean that even if you have a whole stream of data with just a small gap in it, you're not able to read that data out to the application. And generally speaking, it's not a huge problem.

If you're not that performance sensitive, you can just live with it.

But for certain performance-oriented protocols or use cases, it's kind of annoying.

And we're engineers, we're technologists, we like to think of ways to improve upon things.

But let's be clear, TCP is a massive success. It's not gonna go anywhere anytime soon.

It's shown time and again that it's been able to change and adapt and expand to everything effectively.

And I expect that will continue to happen.

To try and go back 50 years and explain to people why it's so important, we don't need to explain any of that stuff.

Ultimately, like anything, you can boil it down to: you want to communicate between people.

Do you want the message to get there or not?

And do you want that message to cause harm to others or not?

You can analogize this easily to like a courier delivery. You order something online, you want it to be delivered to you.

You don't wanna pay for it and have it not arrive.

Like this is a kind of core thing that humans need and want when they communicate.

Not always, you don't always need everything. Sometimes it's fine to shout and maybe it gets there, maybe it doesn't.

Depends on the mode of communication.

But for what TCP was targeted to do, it was just very important.

It could have been done various ways, but it wasn't. Its design is what its design is.

I'm not sure, if they had taken a slightly different axiom at the time, whether it would have been as successful as it was.

Who knows? Sometimes these things are hard to articulate unless you were there at the time and at the date.

I'm not sure if the paper authors ever envisaged 50 years later how widely deployed their work would have been.

It's hard to guess. It's like every seminal paper. People are doing this work because they feel like it's important themselves.

They're not doing it necessarily to change the world.

If it does change the world, that's fantastic. But ultimately, it's just scratching a technical curiosity that we have quite often.

It's a huge collaborative project. There are so many people working on the Internet from so many different angles.

And now, there's a scientific component to that in terms of how you collaborate and how you make technical design choices.

There's also now a policy component to that, where people are deciding how it can serve humanity in the best way.

And melding those two together is what I think the community is now working through of what the best structure for governing that is.

And that's a very active debate. My belief and my hope is that people in the community would say it is a sign, not of a single moment in time, but a culmination of many different things that happened leading up to that event.

So I remember when I was in research training, somebody said to me: we have this notion that research and advancement is about making these giant leaps forward.

And the truth of the matter is, even the things that are perceived as giant leaps tend to be very incremental.

And this is how we make progress. Classic case that was given to me by a physicist friend of mine was the theory of relativity.

And Albert Einstein's name, of course, closely attached to it.

But there is a line of thinking that says the theory of relativity may have been inevitable because there were a few other things that had to be known.

And in a sense, there was no place else to go except for relativity.

And so all it took was for someone or something to happen to put those pieces together in just the right way.

And I think when we talk about TCP, and I guess IP and the congestion control that comes later, the TCP that we use today is not exactly the same as it was before, but it does follow those same abstractions, those same principles.

And as we learn more, and with the barriers that we hit, there is a slight change in the design, in the internals, that allows TCP to grow and scale and adapt to network usage over time.

Things like QUIC, for example, many would say that it's vastly different from TCP.

And in many ways it is. There are things that are embedded in QUIC that are not embedded in TCP.

Can you explain what QUIC is and how it came about?

So QUIC almost came out of necessity. This notion that necessity is the parent of invention.

One of the challenges with TCP as it was evolving is that it exists in the kernel, it exists in the operating system.

And I suppose in some form, maybe other boxes along the path between the endpoints.

To change the operating system, to change those fundamental implementations is exceedingly difficult.

Partly because you have Windows and Mac OS and Linux and BSD and countless others, each that have their own network stack.

But also partly because there were boxes deployed, that we know have been deployed all around the Internet, middle boxes in some form, and they would ossify on an expectation.

So there's a specification of the day or a dominant behavior at the time, and the middle box would often be designed to expect certain bits in certain places or certain patterns to watch for.

And so if you needed to do something different, which was well within specification of what's permitted, the middle box would hiccup.

Maybe it would completely break and crash.

So we got to this point where we knew how TCP needed to evolve, how transmission, how the transport layer needed to evolve, but every attempt at doing so failed because things were too hard to change.

QUIC then comes along.

It's actually built on the other major transport protocol, which is UDP, different from TCP because every packet in UDP doesn't care about the ones that came before or after.

There's no notion of reliability. It's just a way of addressing some data, and it takes a fire-and-forget approach.

If it gets to the destination, great.

If it doesn't, no one knows. And this is the fundamental difference between TCP and UDP.

So QUIC is built on UDP, and that's what UDP is meant for.

UDP is a building block. You can build all kinds of transport protocols on top of it.

And so far, the implementations that exist in the world are built in user space.

So they're very easy to change and modify. We could argue at some point in time, it may be that we decide to put QUIC or portions of QUIC into the kernel because there are performance implications there, but the community is now acutely aware that as soon as that happens, those pieces are going to be very hard to change.

And so you can imagine there's a reluctance to do so, understandably. But even in QUIC, you still have many of the same abstractions.

You have notions of flow control.

There's certainly congestion control. Same algorithms in part. So CUBIC is in TCP.

CUBIC is available in a lot of QUIC implementations. I know BBR emerges with QUIC.

It's a new type of congestion control, but it is transferable, right? So the two are independent, but the abstractions that TCP honed in on, the abstractions that have persisted in the last 50 years, those still exist in QUIC.

What we can do in QUIC that we can't do so well in TCP is use the connection in different ways.

Coming back to that original idea of multiplexing, you have a communication channel and before the advent of the Internet, you would either slice in time or use different frequencies in space.

And so now you split things up, but still, TCP creates a connection abstraction.

So it's called a connection-oriented protocol, meaning the two endpoints are aware that they're communicating with each other.

But that communication doesn't care about what's happening on the channel, if you will.

So with HTTP and modern applications, they are bound by the ability of the TCP connection to transmit data and the way the TCP connection handles data.

So one of the really nice things that comes out of QUIC is it says, wait a second, we have a single communication channel underneath now, with packets and so on, that part doesn't change; what we need to figure out is how to use that channel in a way that suits the application.

So now you get into streams in QUIC.

So you've taken this single channel and you've multiplexed it further in a way.

You're doing different things on the same channel in such a way that no one is penalized by any of the others.
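A toy sketch of that stream idea, illustrative only and nothing like the real QUIC wire format: frames carry a stream ID and an offset within that stream, so a lost frame stalls only its own stream while the others keep delivering data to the application.

    # Toy stream multiplexing: loss on stream 1 does not block stream 2.
    # Each frame: (stream id, byte offset within that stream, payload).
    frames = [
        (1, 0,  b"index.html: <html>.."),   # 20 bytes
        (2, 0,  b"logo.png: PNG..."),       # 16 bytes
        (1, 20, b"...more html"),
        (2, 16, b"...more png"),
    ]
    lost = {(1, 0)}           # pretend stream 1's first frame was dropped

    streams = {}              # per-stream reassembly buffers: id -> {offset: data}
    for sid, off, data in frames:
        if (sid, off) not in lost:
            streams.setdefault(sid, {})[off] = data

    def deliverable(buf):
        """Contiguous bytes from offset 0 that the application can read right now."""
        out, off = b"", 0
        while off in buf:
            chunk = buf[off]
            out += chunk
            off += len(chunk)
        return out

    for sid in sorted(streams):
        print(f"stream {sid}: {len(deliverable(streams[sid]))} bytes readable")
    # stream 1 waits for its retransmission, but stream 2 is fully readable already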

What is coming for the future? Oh my goodness, mine is one mind among many and I would never be so daring as to even try to predict where things could go.

Like I said, I'm not sure anyone actually predicted the Internet would become a critical infrastructure for the world.

I think the abstractions having persisted for so long is an indication that they are going to continue to persist.

They are however designed for a network that is unaware of what's happening above.

So the network is designed to survive and perform irrespective of what people are doing, what the applications might try to do, and so on.

So the last 50 years, if anything, I think people will agree that the abstractions by and large are the right ones and will continue to persist.

There are people in the community who are thinking, what do the interfaces to those abstractions look like?

If there's something that's missing, are there ways that we can build it in safely without complicating the network itself?

And that is really interesting.

QUIC I think has been part of informing that process. I suppose the future really comes down to maybe one thing and it is the relationship between the network that is so critical to human society and human society understanding the critical nature of the network and that it works the way that it does for reasons that look nothing like everything that came before.

And the "everything that came before" being that many different networks emerged, bound in space to some part of the world.

And then the networks could talk to each other along socio-political boundaries and these types of things.

And the Internet doesn't know about these things.

And to force the Internet into this space actually is a risk to the reliability and performance of the Internet itself.

And so the future, I think, is going to be partly communicating the importance of the Internet, why it works the way that it does, but also understanding that there might be things the Internet doesn't do today that it needs to.

And that's, I think those are going to be the next set of challenges that we have to face.

Do you know, one of my favorite things about computer networking and systems in general is that it is a beautiful and elegant intersection of math and theory on one side and engineering, the building of things, on the other, which of course has a lot of theory embedded in it.

TCP has a formal model attached to it. A finite state machine.

Pick up any textbook on computer networking, you'll find it. It's fundamental.

It's very easy to understand and to analyze. And to validate.
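For flavor, a tiny, simplified slice of that state machine: just a client-side open and close, with a handful of the textbook states and events rather than the full specification.

    # A simplified slice of TCP's connection state machine (client-side open/close).
    TRANSITIONS = {
        ("CLOSED",      "app: connect / send SYN"):  "SYN_SENT",
        ("SYN_SENT",    "recv SYN+ACK / send ACK"):  "ESTABLISHED",
        ("ESTABLISHED", "app: close / send FIN"):    "FIN_WAIT_1",
        ("FIN_WAIT_1",  "recv ACK"):                 "FIN_WAIT_2",
        ("FIN_WAIT_2",  "recv FIN / send ACK"):      "TIME_WAIT",
        ("TIME_WAIT",   "2*MSL timeout"):            "CLOSED",
    }

    state = "CLOSED"
    for event in ["app: connect / send SYN", "recv SYN+ACK / send ACK",
                  "app: close / send FIN", "recv ACK",
                  "recv FIN / send ACK", "2*MSL timeout"]:
        state = TRANSITIONS[(state, event)]
        print(f"{event:28s} -> {state}")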

Congestion control that comes later has very rigorous analytical models attached to it.

So that you can understand its behavior given certain constraints and so on.

The network is not a toy. It doesn't emerge as a consequence of a bunch of people thinking, oh, I just need to take some data, split it up, fire it off, reassemble it.

There is some hard engineering behind it. But at the same time, there is very rigorous theory, both on the analytics, on the algorithmics.

And so people can trust and know about its behavior reliably.

And even if the things that we can write down on paper don't manifest exactly the same way in practice, this is where the engineering picks up and there's this nice feedback loop between them.

We learn a little bit more by building a thing and then we figure out the theory.

BBR is in this stage now. People have been building BBR. Now we're trying to understand BBR, design formal models to describe it, figure out what it's capable of, what it isn't.

So the Internet is not a toy. It is genuinely built on sound principle and good engineering.

And so they should trust in it.

So it's clearly a science, very much a science-driven endeavor. It gets better over time, I think because it is built on rigorous foundations.

So a couple of years ago, I went to a talk by Vint Cerf in London, talking about technological stuff.

I can't quite remember, to be honest.

But it was vaguely in the terms of future technologies.

It wasn't specifically about networking. It wasn't specifically about the history of the Internet or anything.

Vint does a lot of those talks or has done in the past and people are familiar with them.

But I wanted to go along because, you know, I'd not had the pleasure to see Vint in person before and it was on my doorstep.

So I went along with a colleague and it was a great talk and we learned a lot.

And it was in a wonderful building. I think it was like the Royal Society building, which is famous for, in the UK, these Christmas lectures that happen.

So we got to sit in that room and it's all very special.

But the weather was terrible.

It was absolutely pouring down. And so at the end of the session, I kind of just wanted to get back home.

This was also like February, 2020. So just before COVID lockdowns and everything.

Anyway, the session ended and my friend or colleague wanted to hang around and just say hello.

And Vint was doing a bit of meet and greet with people.

So he wanted to do that. I was like, it's fine, I'll just wait for you.

But he pulled me along, made me come and say hello to Vint as well.

I just happened to drop that I was the co-chair of the QUIC working group, which is not something I would ever dream of doing myself because the guy had just done an hour to two hour long lecture and probably didn't want to talk day job stuff.

But anyway, he seemed very excited and said, that's great. I love QUIC. If we could have gone back in time and designed it that way, we probably would have.

QUIC has features that are desirable, especially in terms of it boarding and reaching, things that TCP can manage, but doesn't do so well.

However, it also requires so much memory that it wouldn't have ever been deployable at the time.

So good luck with QUIC, it's a great thing.

And, yeah, it was great to meet him.

I'm hopeful that we can find a way to work with regulators and policymakers around the world to make the Internet a better Internet and to serve humanity better.

And I think you talk to the technologists who create and maintain the Internet and all the different components that we think of as the Internet, the public Internet.

And there is a sense that they want it to serve humanity.

They want it to be a force for good in the world.

And there's an ever broader understanding that we can't just say technology is good, that it has policy impact, that we need to think through how it interacts with people.

But there's also a risk that there are regulations that can hurt the Internet and curtail its ability to do good, that can unintentionally fragment the Internet into a lot of different networks that don't talk to each other as well.

And every time you try to regulate something that has complex technology, there's a risk that you don't understand the technology.

And so we need to inform the regulators of the impact of what they're doing, collaborate with them where it makes sense to make the Internet better.

I'm hopeful that that can happen and that we can do that in a reasonable fashion.

I think that it's still early days in terms of how we regulate the Internet.

For many years, there was this hands-off approach to Internet regulation from governments.

That's changing now and we need to adapt to that change.

Pretty much everything in human society is just building on the shoulders of the giants that came before us.

So I think we need to be eternally thankful and respectful to the people who came up with ideas and were able to see them through, and understand their thought processes, the kind of context and environment they were working in, and the people they worked with.

We might not think of those names or faces day-to-day.

We might only know some of the most famous ones like Vint, but there are hundreds of thousands of engineers who go about quietly doing the job.

Writing a specification is one thing. I've written some myself.

It is hard work. Writing papers and research papers to show how these things work in limited conditions is also a challenge and it's exciting too.

But the real work and the real honors go to the people who are invisible a lot of the time, the people who went and maybe tried to deploy this thing or find some issues and fed them back and evolved the protocol.

I could say TCP is very elastic, but we need to be thankful and respectful to everyone who wanted to participate openly towards helping build a better Internet.

It's not just TCP, it's everything, how it all interrelates.

And we can't forget how important that is. That's why standards are important.

That's why continued participation and an open and transparent process for defining standards matter; this affects literally everyone.

People who don't even understand any technical underpinnings require access to the Internet in order for things to work, either directly themselves or indirectly by the people who are managing the services or running the company or the utility that they depend on every day.

So I'm sure everyone is thankful for the modern day luxuries that we all have, the opportunities that the Internet and TCP IP are providing or have provided, and the opportunities to continue to evolve the Internet towards even more new and exciting things in the future.
