Cloudflare TV

Leveling up Web Performance with HTTP/3

Presented by Lucas Pardue
Originally aired on 

Join Lucas Pardue (QUIC Working Group Co-Chair and Cloudflare engineer) for a session on how HTTP/3 is supercharging web performance.


Transcript (Beta)

The web, the digital frontier. I tried to picture bundles of HTTP requests as they flow through the Internet.

What do they look like? Eggs? Balloons? Party poppers?

I kept thinking of visualizations that I thought I might never see and then one day I came on to Cloudflare TV to present Leveling up Web Performance with HTTP/3.

Hello everybody, I'm Lucas Pardue. I am an engineer at Cloudflare working on protocols such as QUIC, HTTP/2, TLS and so forth and so on.

Big week this week, birthday week.

If you just carried on tuning in from the last session, you've seen some exciting announcements and the folks talking about those things directly.

I had to miss that session myself which I'm gutted about because I was preparing for this one.

So yeah, exciting times. There's going to be loads of cool stuff that I'm really interested in and waiting for, but I want to talk about myself.

Why not? Without much focus on things, if you look at the picture behind me, this is Tenby.

This is a nice little town in Wales that I just got back from holiday from.

So I've been away for the last two weeks.

That's why this program hasn't been online and I've been enjoying myself and relaxing and kind of preparing for what's ahead.

So this episode is just a little bit short compared to some of the hour-long ones we've had in the past.

This is a check-in really to say what's happened since I've been gone, what's going on in the world of QUIC or HTTP/3 and just to talk about one optimization I played with as a side project while I was away.

So if anyone's new because they just kind of tuned in for the birthday week kickoff and are thinking what is QUIC?

What is this world of acronyms that you're talking about? A while back I wrote a blog post about the history of HTTP/3 and this dovetails really nicely into a momentous occasion that happened while I was away.

So I just want to start off by saying I made a cake for birthday week, which is this wonderful cake, a layer cake, HTTP/3.

And we can look at these acronyms we have: at the bottom, UDP datagrams; QUIC packets on top with HTTP/3 syntax; and then kind of topping off with HTTP semantics with a strawberry on top.

We don't need to dig into this too much because I will just spend the next half an hour repeating myself on all the things I've talked about in the last few weeks.

But if you are new to QUIC, the important thing to realize is that it's a new transport protocol for the web.

It's secure always and it's multiplexed and it provides the potential for better performance when compared to traditional transports like TCP and TCP combined with TLS.

And because of some of the differences of how it works we need a slightly different mapping, so-called mapping, of HTTP onto it.

And that's pretty much all that HTTP/3 is.

It provides the features of HTTP/1.1 and HTTP/2, just in a slightly different flavor.

So you can imagine this is the same sponge but maybe a different flavoring inside.

And so the important stuff that's happened in the meantime is, oh I'm not showing my screen.

You've missed the cake. Hey, I don't want to eat all the cake.

I apologize. Here we go. So at the bottom you can see UDP datagrams, QUIC packets, HTTP/3 syntax, HTTP semantics on the top.

That's about all we need. And so the important thing that's happened over the course of the last few years really is that QUIC is being developed in the IETF as a standards thing.

Again, I'm probably repeating myself for people that know this, but we've been working over the years, really hard work from contributors from different organizations, including many, many of my colleagues at Cloudflare, looking at how we even enable the reception of UDP datagrams at scale in a way that's secure and not going to overload things.

And other people from other companies either trialing this, implementing it, just giving feedback on, here's a specification, the paragraphs would be better reordered.

So over the course of the years, we've iterated on draft documents that are split into a family of transport protocol, HTTP mapping, which I've already mentioned, how to do integration with TLS, how to do loss detection and recovery as the Internet's a place where things go missing sometimes.

And so far we're now up to draft 31.

And I think previously we've talked about draft 29 as a solid basis for the protocol going forward.

Draft 29 was, if I've got my numbers correct, the first that went into what we call the working group last call.

So this is a position where editors of the documents, a group of people have been working diligently and very hard for the last few years, taking on board working group feedback and trying to provide a consistent style and design for the documents.

The way that you write a document benefits from being consistent in terms of terminology and so forth and so on.

But 29 was the first one where we addressed all of the open design issues that we had at the time and felt that this was a good kind of stake in the ground to say to people, look, if you are not able to track changes closely on a weekly basis, take this complete document set, read it and let us know.

We think that we're done, but we really encourage you now to take the opportunity to read the documents and give us some feedback.

From that, we had some new design issues, things in the protocol that people said, well, yeah, I've come back after a little break and I've tried to implement it this way and it doesn't quite work anymore.

I've got some questions, those kinds of things.

So having diversity of implementation is really important, to be able to tease out different kinds of issues that different people with different perspectives or different deployment environments might have.

So we worked through those and there's some other bits and pieces fitting in.

So over the course of, say, another month or two, we were able to release some more drafts and we did another working group last call, which is the second one, to say again to the QUIC working group: we've addressed all of the stuff up to this point in time over the years.

We had some more and we also addressed those. Do you believe that there's anything further to block us from progressing this draft?

And this is all to do with the standardization activity of how you go from a proposal that is completely outside the IETF to getting it adopted, to getting it towards eventually what we want is an RFC, which is a formal, say, stamp of approval that a description of a protocol has gone through the process and review and so forth.

So where we're at right now is that second working group last call completed and we're getting in a good position.

What's next is effectively outside the working group's realm.

So once the working group is happy, we need to submit it to our area director.

What this means is that, as a transport protocol, these drafts belong to an area within the IETF called the transport area, which is pretty straightforward.

There's different kinds of groups that work in different areas, but we don't need to know about them, right?

But we have been working over the years with our transport area director, Magnus, and Spencer and Mirja before Magnus was in that role, to really shape both the working group itself and the documents that it needs to produce.

So although they may have been keeping up to date over the years, each with their own kind of hat on for specific interest areas that they have, Magnus has contributed some stuff around ECN, for instance, and Mirja on things around ECN or recovery.

This is the opportunity for the area director to do his review of these documents and make the decision of if he believes this is ready for forwarding on to what is called IETF last call.

So this is opening out those documents to the wider IETF to say: we believe this thing is ready. We'd like you to review it and, absent any objections or major changes to the protocol that need doing, we think that it's going to be ready.

We expect this last call to last maybe four weeks.

There are formal announcements that happen for this kind of thing, so Magnus is going through the documents just making sure all the T's are crossed and the I's are dotted before putting it into the status of yes, it's ready to go forth.

But the transport document is now in last call, which is a really cool thing because we expect some things to be found as always with reviews.

Hopefully they're just editorial things that we can change. Design things can be a bit more problematic because they require changes from implementers too, but this is all good news.

This is all to be expected and it's really exciting.

So, in case people don't know me as well, I am a co-chair of the QUIC working group, but I only started in that role at the start of this year, so I can't take much credit for all of this.

I'm pretty much just explaining how I observe things from a distance, and so I take very little credit for the success or the efforts of this group so far.

It's a real team effort, so I just want to thank everyone that has been involved to get to this point.

We're not done yet, but this is a really big step. It's not every day that you get to design and deploy a new transport protocol for the Internet or for the web.

You can design them, but it can be quite tricky to deploy, which we talked about in previous weeks.

We've mentioned UDP already and TCP; there's another protocol called SCTP that also went through this process, but being its own transport protocol with its own identifier made deploying it onto Internet equipment and services a little bit trickier than it should have been, and that affected the success of deploying the protocol.

We've talked about this in previous weeks: a year ago, the team announced general availability of HTTP/3 on Cloudflare's edge, so in that time, we've been really excited to see clients like curl or Firefox and Chrome add their support and try things out for real with real websites that are reflective of how people want to use them.

This is an overview of the QUIC adoption and standardization that I talked about.

I produced this blog post just over a year ago now. I'm going to share the URL in case anyone's interested.

HTTP/3: From root to tip. As part of that, I had great joy in going through all the document metadata and pulling out dates and documents and effectively drawing out a timeline of when documents branched off and where their roots came from.

You can see that QUIC goes all the way back to 2012.

At the time, 2019 was the most up-to-date, but we're beyond that now. We're moving on, so I have a task on myself to update this document.

It was a very manual process, so it's putting me off, but one weekend, I'll just make a big jar of coffee and go through and basically create the CSV that's needed to run this through a tool called gnuclad to update these further.

At some point, the RFC will be made.

So, that's it in terms of the update. The other thing that I've been doing while I was away, kind of getting more into the weeds while we're talking about standards development, is thinking about how the documents themselves are actually written.

Authoring and publishing is a big factor in how an organisation operates. For some groups, the method of working is a single Word document that's shared and edited, passing a token around and cutting new drafts that way.

It's generally easier if you're just a single organisation working on that.

As soon as you open yourself up to multiple participants, it can be quite hard to track changes and get review comments and address them.

We do have tools like Google Drive, for instance, that allow collaborative editing, but for some things, that's not such a good way of working.

So, in my time doing IETF activities, the method of operation or development of standards in HTTP and QUIC has been based on a GitHub workflow, which might sound a bit odd, but it works pretty well in my experience.

So, I touched on this last time around, but, to get my words straight, it's about looking at the QUIC working group repository and how the different documents that we have are just markdown files that sit there, and then we run them through a process of building and publishing.

The IETF runs its own website called the Datatracker and a complementary website for tools, and those two things combined are how we get the document into a canonical format to submit into that tool, and it generates PDFs and HTML. So if we wanted to read something like the HTTP/2 spec, which I've got open here, RFC 7540, it can generate this approved format for publishing from some canonical input, and that input isn't markdown, it's XML, which is fabulous. But I kind of missed or skipped the step of saying how we do that, how we go from the markdown to those things.

So, I'm just going to dig into an example, because this riffs on the workers stuff that I believe was talked about just in the earlier session, or that we blogged about earlier today.

So, I've got 14 minutes to go into things. At this point, if you have any questions, I'll double check that I haven't missed any.

No questions yet. So, I'm going to get technical. If you're not into watching code or paint dry, now's a good time to maybe go make yourself a cup of coffee and get ready for the rest of Cloudflare TV, but I'm going to switch from talking about QUIC to talking about HTTP extensions, so you may be aware that I'm working on a draft called Priorities.

It kind of doesn't matter, but the QUIC working group has its own repository.

And basically, you can check that out and see the source code.

So, here's one I prepared earlier, which is this.

My computer name, Banana Slug, is a joke from an earlier episode, so I encourage you to watch that if you don't get it, and let me know if you find it funny.

We're just going to look within this folder.

It's fairly straightforward. You've got some contributing guidelines, some instructions in the readme.

If you visited http-extensions on GitHub, the readme would render, and you can do some more important stuff.

But what we have here is each of these documents that are active in the HTTP working group right now, as in Internet drafts that are going through the process of iterating and getting themselves ready for submission to become an RFC, or are in that process and awaiting final approvals.

Expect-CT, especially, is at that point right now.

They're just files. So, you can see that there's HTML and text.

Those are the outputs from the markdown. If we just look very quickly at, say, the priorities draft, which is mine and Kazuho's, there's nothing terribly complicated, just some text and some, say, markup here.

This is markup.

It's markdown. Funny. But, yeah. Links and some special syntax, maybe, to say, well, I want to reference another document, either an RFC, say, in this case, RFC 7540, and when we do this, we want that to actually become a hyperlinked document.

I don't have the priorities draft open right now to see, but if we compare back in our HTTP/2 spec, back here, where we're going through all the sections, the text is a bit small, so I'll increase that for you.

Throughout this text will be references to the HTTP documents or whatnot.

And so, here's a whole list of citations, in effect, referencing other RFCs, all listed out there, and anyone who has written a thesis or a document that contains bibliographies or appendices, sorry, citations like this, knows that it can be really tricky and very time-consuming work.

So, the more tooling and automation you have for these, the better.

So, if I go back to the example of putting, for instance, this in: this is the priorities draft, and it's got an informative reference to HTTP/2, which is to say that something in the HTTP/2 spec is of use in this document to help understanding, but there's nothing in that document that affects the behaviour in this one.

In comparison, we have a link to the HTTP/3 Internet draft, by using this syntax, which is a normative reference, which just means, yeah, you should really go and read that thing if you want to understand how to implement the new priorities draft.

I could talk about priorities, but that's not the purpose.

What I want to say is, as you're editing these things, I want to clean up stuff.

So, once you're done editing some text, you just hit make, as most of these things go for me.

It's going to be a little slow, unfortunately. So, what the tooling is doing is just trying to find all of those little question mark and exclamation mark links to other documents, and resolve those references for you, in order to pull out the metadata that we saw in the HTTP/2 spec.

So, it gets the authors for you.

It's going to come up with a link to the RFC Editor, if it's an RFC, or some other body that hosts it, I think there was an example here of, yeah, so, the DOI link for this thing, so that you can go and read it at your pleasure.

And so, what happens is, there's some requests that get made.

So, yes, the tool that works behind the scenes is something called kramdown, kramdown-rfc2629 specifically, which uses kramdown itself.

Everything's based around RFCs, so it's a bit confusing.

But, yeah, this is xml2rfc.tools.ietf.org. So, in our priorities draft, when we wanted to go and fetch that informative reference, it's going to visit this URL, in effect, grab this XML metadata, which includes the title, the authors, the date that the document was published, the abstract, which isn't expanded into that references section, but maybe some other tool would want to use that for something.

And then we've got that too. And so, you can also see at the bottom, we've got some dev tools open.

So, I just want to highlight that, whatever the size of this document to the human eye, it's small in terms of the number of kilobytes.

That request took 800, nearly 900 milliseconds to complete.
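To make that concrete, here's a rough sketch, not part of the IETF tooling itself, of the kind of request the build issues for each citation: one small XML file per reference, fetched over HTTP. The bibxml path and RFC number are examples from memory rather than taken from the tool, so treat them as assumptions.

```typescript
// Illustration only: fetch one citation's metadata roughly the way the build does,
// and time it. Runs on Node 18+ (global fetch). The bibxml path and RFC number
// are assumptions for the sake of the example.

async function fetchReferenceXml(rfcNumber: number): Promise<string> {
  const url = `https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.${rfcNumber}.xml`;
  const start = Date.now();
  const response = await fetch(url);
  const xml = await response.text();
  console.log(`${url} -> ${response.status} in ${Date.now() - start} ms (${xml.length} bytes)`);
  return xml;
}

fetchReferenceXml(7540).catch(console.error);
```

Multiply a round trip like that by the number of references across every draft in the repo and you get the kind of build times being described.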

And when we're building the http-extensions repo that contains all of those different drafts, it has to go and fetch a lot of RFCs or RFC metadata.

You can see now that the make process has started here. It's working its way through each of these, either an RFC or an Internet draft.

And this build time, it's not as long as it took me to build h2load the other week, but it isn't that short.

And there's also some fun with caching here, because if you are developing an Internet draft, you might be working with others, linking to other Internet drafts that are also being iterated on quickly too.

So, you don't want to cache too aggressively, because you might end up pulling in some out-of-date data, which would be unfortunate.

So, the way that this tool is set up is to be quite aggressive on deleting or invalidating its local cache.

So, quite often, you might go away on holiday to Tenby for a week or two, and come back, and it wants to pull down hundreds of references to RFCs that haven't changed, you know, going back decades.

Maybe the metadata about them has changed format in some way, but fundamentally, the RFCs are immutable.

So, it's kind of a bit annoying and slow that you have to go and fetch all this data, especially when that data can take up to a second to fetch such a small file.

This isn't a criticism. This is just a, this is how the tools operate.

You could maybe look at improving this by implementing support for a better protocol than HTTP/1.1.

We're talking about HTTP/3 here.

So, maybe even HTTP/2 would allow you to do request multiplexing, rather than getting blocked on serial requests and responses.

You might enable a content distribution network to do more aggressive caching.

There's lots of, like, tricks you could apply, but these are the IETF's tools, and so it's on them to do what they'd like to do.

But what I was playing with in my spare time was seeing if I could speed things up.

So, I created an alternative to the xml2rfc tools just to see if Cloudflare Workers could speed things up by doing some caching in the local PoP for me.

So, if I was to load this version of the same document from my Cloudflare Worker, I'm improving that load time.

I'm disabling my local cache to prove that I'm fetching this each time I reload the page, but in this case, the connection remains open.

This is 50 milliseconds. That's probably not much more than my round-trip time.

I'm out in the shed today, but if I was to do the same for the xml2rfc tools site, okay, if it's reusing the connection, that's 188 milliseconds.

So, the differences there are quite small, but the important thing is when you're doing a lot of these requests, it just causes this make to take a long time, which is crappy.

So, fortunately, the tool has an environment variable override that I can use to force it to talk to my experimental Cloudflare worker to see if that improves anything.

Trying to find the right command. Yep, here we go. So, it's going to run exactly the same process.

It's going to go and fetch all of those resources that you saw, and it's just going to fetch them from a different place.

So, we'll let that kick off in the background. I've opened the source code just because it was fun for me to dogfood or play with other capabilities of Cloudflare in my spare time that I might not normally get to try, and it's not rocket science, but by taking some of the developer docs and piecing things together, I was able to do some stuff.

So, a better description of this is on the GitHub repo at the moment, just hosted under my own thing, edge tools IETF in the URL there.

It's pretty simple. It substitutes a few of the IETF tools URLs.

But otherwise, that's it, and the picture of Mr. T, the meme. Hope you enjoy it. But really, all we're doing is handling a request, looking at the URL to see if it matches a pattern, some of the well-known URL things, effectively a substitute that's being hosted under my own domain, but it could literally be anything, and then rewriting the URL to go to the canonical origin to fetch the stuff, and seeing if we can use the Cache API to either find it in the cache if it's there, or fetch it and then push it into the cache for later usage.
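As a rough sketch of that flow, using the service-worker style of Cloudflare Workers, the proxy might look something like this. It's an illustration of the approach rather than the actual Worker, and the path prefix is an assumption; xml2rfc.tools.ietf.org is the canonical origin mentioned above.

```typescript
// Minimal sketch of a caching proxy Worker (types from @cloudflare/workers-types).
// The path prefix is an assumption for illustration.

const ORIGIN_HOST = "xml2rfc.tools.ietf.org";

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event));
});

async function handleRequest(event: FetchEvent): Promise<Response> {
  const url = new URL(event.request.url);

  // Only handle the well-known reference paths; pass everything else straight through.
  if (event.request.method !== "GET" || !url.pathname.startsWith("/public/rfc/")) {
    return fetch(event.request);
  }

  // Serve from the PoP-local cache when we can, keyed on the URL under my own domain.
  const cache = caches.default;
  const cached = await cache.match(event.request);
  if (cached) {
    return cached;
  }

  // Rewrite the URL so the request goes to the canonical origin, then fetch it.
  url.hostname = ORIGIN_HOST;
  const response = await fetch(new Request(url.toString(), event.request));

  // Store a copy for later requests; published RFC metadata is effectively immutable.
  if (response.ok) {
    event.waitUntil(cache.put(event.request, response.clone()));
  }
  return response;
}
```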

This is all volatile, and it's per PoP location. It's going to pull things into the local PoP, so a continuous integration environment that's running elsewhere in the world won't see the benefit, even if I've sped up my build locally.

I did time this.

I did have proof that I was able to reduce these build times down by a minute, so if you don't believe me, I encourage you to test this yourself.

But, yeah, the one downside of doing it this way, caching on request, is that because of the cache validation and lifetime stuff, it's possible that the CI running periodically is hitting a cold cache. So what I wanted, and I created an issue on myself to say so, is some ability to pre-cache those citation documents or libraries, and so on. And so I kind of asked the team here: is there any way to schedule a Worker? And they went, yeah, actually, this is a new thing we're playing with, and it's not been announced yet, but it's going to be announced soon, which is today, so you could play with it here. And so what I was able to do was create a new Worker.

I have it as a cron trigger.

You can see here something triggering every 30 minutes, and, you know, all this does is handle a different kind of event, a scheduled event, and, yeah, there's some stuff here, but basically, I'm able to step through some range of RFCs, fetch them, and put them somewhere, and so the next step would be to link this into my other Worker and speed up that initial page load.
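A sketch of what that scheduled handler could look like is below, assuming a placeholder range of RFC numbers, the same bibxml URL shape as before, and that the Cache API can be used from a scheduled event the same way it is from a fetch event.

```typescript
// Sketch of a cron-triggered Worker using scheduled events
// (types from @cloudflare/workers-types). The RFC range and URL template are
// placeholders, and Cache API availability in scheduled handlers is assumed.

const BIBXML_ORIGIN = "https://xml2rfc.tools.ietf.org/public/rfc/bibxml";

addEventListener("scheduled", (event: ScheduledEvent) => {
  event.waitUntil(warmReferenceCache());
});

async function warmReferenceCache(): Promise<void> {
  const cache = caches.default;

  // Hypothetical range of RFC citations to pre-fetch on each trigger.
  for (let rfc = 7230; rfc <= 7240; rfc++) {
    const url = `${BIBXML_ORIGIN}/reference.RFC.${rfc}.xml`;
    const key = new Request(url);

    // Skip anything already sitting in this PoP's cache.
    if (await cache.match(key)) {
      continue;
    }

    const response = await fetch(url);
    if (response.ok) {
      await cache.put(key, response);
    }
  }
}
```

Linking the two together, the pre-warmed entries are exactly the ones the fetch handler above would otherwise have to go to the origin for.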

So, yeah, I hope that was interesting for you all. I have three seconds, so I'm going to say bye.

Happy birthday!
