Leveling up Web Performance with HTTP/3
Presented by: Lucas Pardue, Pat Meenan , Andy Davies
Originally aired on August 3, 2021 @ 1:00 PM - 2:00 PM EDT
This talk is about the new protocols QUIC and HTTP/3. It is aimed at web developers with basic familiarity with HTTP and its role in performance. It steps through HTTP evolution using a computer game theme for novelty and visualizations. Once some fundamentals are established, it looks at some tools/services for trying it out yourself.
Episode 8
English
Protocols
Performance
Transcript (Beta)
The web, the digital frontier. I kept thinking about bundles of HTTP requests flowing across the Internet.
What did they look like? Did they look like waterfalls or scan charts?
I kept dreaming of visualizations I thought I might never see and then one day somebody made web page test and somebody wrote something about web page test and all my dreams were fulfilled.
Welcome everyone to another episode of Leveling up Web Performance with HTTP 3.
I'm Lucas Pardue. I'm an engineer who works on the protocols team at Cloudflare working on stuff like TLS, HTTP 2, HTTP 3, etc.
Today's guests are here to entertain me on doing a web page test special.
So I have Pat Meenan who created web page test and has been working on the web or working to make the web faster for the last 20 years.
And I have Andy Davies who helps companies to make their sites faster.
He typically works with retailers and consumer facing businesses and he co-authored a book called Using Web Page Test.
So hello both. Thank you very much for joining me. I appreciate everyone's busy.
And yeah, welcome to the show. Uh Tell us about web page test in a nutshell.
Let's start with Pat. What can you tell us?
You know, I assume some of the audience here would have heard about this tool.
Actually no, I'm gonna pause you there because I've forgotten a crucial point.
For the last few weeks we've been focusing on well, I've introduced what the protocol is and then given some like time for people to describe tools like Wireshark or QViz or Qlog or just using browsers themselves to look into kind of what's happening with H2 and H3.
And I've got to apologize because I've literally made no mention of web page test.
It keeps being an item that I really want to get to and give due due credit and air time because it's really invaluable for certain kinds of workflows, especially when you've gone beyond just like a bare bones deployment and your system's working and you want to start to kind of get deeper into understanding different kinds of web pages and stuff like this.
So I hadn't got around to any good point for that and I reached out to Pat and Andy and said would you help me on this?
So with that very brief glimpse into what this tool is, I'll hand it to Pat to maybe explain a bit more.
Sure. Thanks for having us. Yeah, I love chatting and talking so I could do this all day.
Yeah, I mean so at its most basic core web page test was largely designed so that you give it a URL or a navigation flow for web content and it'll run it in a controlled environment that's hopefully as close to realistic and as close to the actual or an actual user's experience as possible and measure everything.
The goal is to try and measure everything it possibly can about the experience and then provide that information back to you to do whatever you want with.
Hopefully to make it easy to understand performance issues.
Is it performing fast? Is it performing slow? Why is it performing fast or slow?
And let you go as deep into the weeds as you'd like.
It is probably a little overwhelming when you first get to it if you're not used to it.
It's definitely like if you don't have a background in engineering for the web or you don't understand how the web is put together, when you first see a web page test result it's probably going to be somewhat overwhelming just because there's so much there and usually there's all sorts of rat holes you can go down to.
My favorite way to consume it is usually what we call the filmstrip view where you can see visually how the page was loading and below it you can see how it was delivered, the network request waterfall.
And when you put those two together, you usually have a really good starting point for understanding what did the, at least visually, what did the user experience and why and sort of go from there wherever you want to go.
But probably most fundamentally the thing is to try and do it as realistically as possible.
So real devices and real physical locations around the world with as close to real end user connectivity as possible with real browsers and just observing what they're doing to try and be faithful to the representation, I guess.
Yeah, and I think the realness, I don't know if that's a word, is a big factor there, right?
Because it's quite easy to kind of run stuff on localhost or like staging servers and machines on uncontrolled network environments and to run like very simple like, I mean, my first exposure to all of this was running load tests, right?
And just using something like curl in a loop or a tool called H2 load to understand performance in a completely different way than a user-oriented web performance and to say, well, just because all the resources loaded quickly doesn't necessarily translate to a good user experience because they might be blocking content happening.
And like you said, that filmstrip view is a really kind of good way, not even for maybe like engineers to understand it, but to visualize it to other stakeholders in web development and to say, yeah, this works great on a good device, on a good network, but I can kind of in these drop-downs select some different things and we can like show this off on what maybe a user in a different market is seeing and it's a bit not very good.
Yeah, I mean, look, the curl load test metaphor or example is actually a really good comparison point as to why it's using a real browser in a real location and measuring because so historically I was used to like in an ops team, we would monitor the crap out of our servers, right?
And we largely cared about the response times for the requests that we were generating and spitting out the requests as quickly as possible and no, it's up.
What are you talking about? But as we've introduced much more complexity to the web and there's so many third parties and so many services that are critical even to rendering just a single page where yeah, the stuff you deliver from your data center is a tiny, tiny fraction of what it takes to actually get the page content in front of the user and if you're not actually paying attention to all of it, you could introduce outages if one of those third parties are having issues or huge performance issues and be completely blind to it if all you're doing is caring about the response times from your your infrastructure and so that that was actually a fairly large driver from when we originally built it.
Yeah Yeah, I think so the other thing I really love about WebPageTest apart from the film strip and that's typically where I start with everybody because anybody can understand a film strip.
You don't have to be technical, but the other great thing I love about WebPageTest is it's fantastic as a data generation tool.
So yeah, it's great having a waterfall but you know those who know how to interpret waterfalls can interpret but it's also if you want a clean net log or a clean TCP dump while a web page loads or you want a 10 meg JSON blob that describes everything that happened from a data point of view when the page load, you know, it's fantastic for for generating that data and you know, it's its ability to generate that data reliably and cleanly is why other people have been able to go on and build commercial products on top of it.
And I'm actually one of the the key points sort of building off of that is the ability to share the results and speak to the same data that you generated if you would and sort of the permanent result URL that's shareable that you can all sort of operate off of I'm still a little disappointed that we don't have an easy way to do that with like DevTools and traces that you capture locally, right?
It ends up being okay attach a trace to a bug or email this DevTools file around or screenshots and being able to collaborate off of a single result that like you mentioned is fairly Reliably reproducible.
I mean the web is the web. It's kind of crazy and hard to reproduce anything But you can at least all be talking off of the same script and all looking at the same data when you're talking to each other Yeah, so Firefox do have a way that you can share their print files.
I can't remember what it's called but it also has the next generation of DevTools UI in there so you can do things like There is a I want to say flame chart I can't remember the difference.
It's the one Brandon Gregg uses quite a lot where it sums up instead of showing you the Stalactites that say a normal profile shows you it sums up So if you've got a function that's run every hundred milliseconds throughout the page load It sums it up so you can see overall where your biggest chunks of time are going and the Firefox thing is quite interesting Oh nice Yeah, and and just to say like to to Pat's previous point like the the sharing thing is like a huge Thing for me too, not just like of an ops team that know how to parse these things or whatever but from a learning perspective of like You know I can I can just draw the Internet and see like some forum posts of what's happening Like why does this thing look weird and see kind of a collective?
Debugging happening and through that process i've picked up pointers and learned stuff.
Definitely But I think maybe like for people that aren't so familiar with web page test like compared to dev tools Like what is it?
Is it? Is it a a platform like how how do you run and deploy this thing? It's a service right or?
So, I mean It's all of these things, right? Webpagetest .org the public instance of web page test is a service that I run But I mean, it's also open source um I don't I will say like my one biggest request regret is I never added any form of tracking Uh to know when someone deployed a private instance, so I couldn't tell you how many there are out there.
I do know there are Well over a few dozen if not a few hundred people running private instances of web page test where They deploy it themselves.
They do all their testing and other than maybe a few questions when they're setting it up I never hear from them again um Yeah, but I I mean like so one of the big comparison points usually is web page test versus lighthouse or something like that or page speed insights and I tend to think of web page test less as of a checklist advice tool and more of a dev tools It's like a shared online version of dev tools.
Uh, it's kind of the way I look at it and There are a few grades like up at the top of web page test where it's a little judgy about things um, but for the most part I try to keep it as a data collection and presentation tool and let you Figure out where you want to go from there and sort of explore and maybe surface some of the key metrics to look at um, but think of it more as a version of dev tools where you can Like andy was saying reliably reproduce on a clean infrastructure Uh a trace but then dig into it and do what you want with it and consume it in various different tools Absolutely.
I just I'm just going to share my screen just just to give people a picture of things like we could probably spend uh hours upon hours of uh, oops, um On this like digging into a trace or something, but just to give people a visual reference, I think So this was just kind of interesting Well while we were talking I set up a test on the public instance of web page test just for blog .Cloudflare.com and you can see there's um kind of opinionated grades that pat Was mentioning just then um And oh, yeah, this is you know, some of the metrics we talked about Contentful paint speed index web vitals like really cool things Um, and then the waterfall view, um, which you know always starts with the url that we entered, right?
I think that's interesting If you were to test, um Cloudflare.com you'd see it starts with the ocsp requests So that's another thing Exposes these requests that are often hidden by dev tools under the hood so um Certificate stapling works or um, whatever ocsp stands for us Checking the checking the certificate is valid.
I'm gonna you're gonna run. I'm gonna check you on this andy I've got it in front of me You can do the https version Or you want to do the redirect too?
All right It's all right, we can we can kick this off in the background and carry on chatting.
Yeah, so some Browsers and their dev tools hide stuff from um Developers, you know, they hide, you know checking whether certificates are valid.
Um I've got what else they hide as well. Oh things like um when you get beacons fired off from Network error logging and stuff like that.
Some of that is hidden away and Web page test exposes it all And that's I mean that's an interesting perspective with chrome in particular, um, and I don't know it it's hard to to sort of wrap your mind around how dev tools in chrome works, but Dev tools in chrome is from the point of view of the renderer.
Um, Which is pulling from the other the browser process in particular, which is the one that actually does all of the Well now it's in a network service all of the actual networking for chrome And so it's it's like a front end view of your page loading even within chrome and to get to the interesting bits of what um What chrome's doing for example, you actually need to capture a net log um, which is the the version that the the network team uses when they're working on the network service within chrome where it does log like all of the socket activity and Like you were mentioning the ocsp validation checks all of that the renderer sees nothing All it knows is I issued this request and eventually I got a response, uh in from the networking service, but it doesn't see Everything that went into making that happen and like pre-connecting, uh connections and things like that Some of that information is plumbed through as like extra data Um, but it sort of rears its ugly head when you start looking at things like http push or http to push and request prioritization and things like that where You're not necessarily seeing what's really happening at the network view.
You're seeing the renderer's version of what it's pulling And so you can you may end up seeing like http to push What's really happening is the server is pushing down all of the responses immediately before it even sends down the html response in some cases But if you look at it from dev tools what you may end up seeing is Um, just the time at which dev tools actually asked the network layer for that data So you don't see it interfering with the html You just see it coming in at some point later and oh it was pushed so it was magically there already Yeah, and that's a real danger point is is not not having enough information is is annoying but having information that is potentially like Slightly incorrect and leading you to false.
Um pretenses about the performance of stuff.
Um is it's a bit bit fun um yeah, yeah, and I mean that that's sort of I'd probably call it one of my pet peeves, but like um the the traffic shaping, uh when I I I focused heavily on web page tests trying to keep things as realistic as possible And so like traffic shaping in dev tools versus traffic shaping and web page test web page test, um Tries to do it at a packet level.
Um to sequence things as if it was really a Five megabit connection or whatever and with whatever latency on a per packet level so that The the tcp layer or whatever protocol layer is actually doing its thing as if it was on a connection with those characteristics Whereas dev tools because of the way it's built Uh implements its traffic shaping between the renderer and the networking stack And so the networking stack within chrome is still fetching things as quickly as it possibly can Um, not letting any of the protocols do their thing and then it just paces the data that gets sent to the renderer and so things like prioritization reprioritization push, um congestion control Just about anything that you want to tune like at an hdb2 hdb3 level Dev tools will absolutely lie to you And so you want to stay as far away from like dev tools traffic shaping as possible and then Like lighthouse sort of takes that a step further Depending on what configuration you're running Uh, if you're running on page speed insights and you put in your url and you get the lighthouse results It actually runs it at full speed on You know server infrastructure and then it models What the page loaded and it tries to predict?
What the performance would have been had the connection been a 3g ish connection?
and it infers what your first contentful paint largest contentful paint probably would have been and so The further away you get away from reality I mean, there are good reasons that these things were done to be able to scale it in the case of dev tools Um, you don't need root access on the machines, um to be able to do packet shaping but you get further away from the actual truth and That's a that's something I have trouble reconciling and i'm a little worried about Us going too far in that direction and losing the visibility that we need to understand how things are actually working Absolutely, like the i'd say they're a good like first order approximation of like if if the page is this large Say one megabyte and my download speed is half then it's going to take twice as long to download like it's that's in my view It's that simple of a thing And it just helps having to do some back of the napkin maths on these things It's not going to really help model the weird interactions that can happen between things like i'm on this kind of network and it the the topology of that network means that the latency is Is asynchronous, um in this kind of view obviously upload and download are asynchronous But you know there could be buffer float in this this network segment And we always know that it takes the wrong routing decision.
Like that's where rum has to come in um, but at least like if you do control conditions, but realistic network modeling you can Create a better approximation from your baseline To to maybe how the network would behave and particularly like the way you put it around the focus on the network aspects although we're You know, we're talking we've got like the quick transport protocol and tcp you've got this this bit in the middle, which is h2 and h3, which is Neither transport nor nor ui.
It's this Networking layer or whatever you want to call it.
It's kind of weird and I think you know, um Lack of tooling really in those things Even if you design the protocol with all the best intent and can show it can work quite well like, um speedy did to help um the design of h2 Um as as things, you know the deployment onto websites and the real Internet becomes more wide and more diverse It's hard to validate that what you've done is the correct thing to do Um, or that it works in all different kinds of Internets and so, um, I think Uh that both of you have some experience with this particularly on the prioritization which we've already mentioned.
Um, so I think andy's famously cited in in so many sources for his his resources on how h2 prioritization is broken Uh, yeah, i'm i'm the noise whereas patrick did the real work and designed the tests um yeah, so I think so Um prioritization became a problem from h1 to h2 without us really being aware of it, I think we got to this position where um We went to a multiplex protocol, but most implementations of it were done by google and twitter and facebook people who are largely Have well-managed sites You know people who know what they're doing from a technical point of view and then We then got to the real world Of everybody who wasn't using their own.
Um Custom built server.
So, you know the people who are using nginx for example, um, which we'll come on to or um, the apache implementation was quite good or using is and stuff like this and that's sort of when we discovered how important prioritization was um And the fact that the spec said prioritization was optional uh meant Some people didn't do it because the spec say it was optional.
Some people didn't do it probably because they weren't even aware of his importance um, but one of the things we've Discovered is how important Prioritization is how important getting those assets down?
in the optimal order Whatever that is and pat did some Work, I think when he was at Cloudflare discovering the optimal order, but you know, we we we have this problem with h2 at the moment and my concern going into a h3 quick world is we're still saying the um Prioritization is optimal and you know, it's it's how do we learn we're going to probably simplify it which is Um what your spec suggests lucas to make it manageable and understandable but we still got to get people to go from a world of treating it as an optional extra to treating it as necessity because Browsers are making decisions that rely on prioritization working.
I think And we also have to figure out how we're going to validate deployments I mean one of the big issues with h2 prioritization is even if the server technically implemented prioritization Um the actual deployment and the tuning of the server and the networking stacks and everything else needs to work together in harmony for prioritization to actually work and be effective and I mean quick or h3 is potentially in a little better shape in that the The congestion control and the networking stack and everything else is All technically within the applications control at this point Um, but like you mentioned it's optional Um, and i'm not sure what the h3 spec itself is state of shipping.
I know prioritization part of h3 isn't locked down yet, but um, if h3 ships before then we we may have a little bit of a problem there, but um Like last I saw a lot of the h3 Uh congestion controls weren't using anything like bbr.
Most of them were using cubic or something like that which Historically has not worked.
Well, um with h2 for prioritization because you end up with buffer bloat issues And so i'm not sure how all of these things are going to fit together um To get to a point where we actually ship h3 and by default prioritization actually works as we'd want it to yeah, I I do see your points.
Um as an author of a spec on this, uh, I I Unfortunately, i'm in the position So the thing with hb is that you know, we can think of it just as client and server, right?
That's the easiest thing two binary applications running with a link between them But the reality is there's all these weird and wonderful deployments.
So, you know, um, The obvious one is a reverse proxy.
So something that's gatewaying. So when you when you sign up for the services of like, uh, A cdn such as Cloudflare.
We we don't hold that content on that server necessarily, especially in our case like the first Instance is the piece of software that I work on and we just don't have it there So we've got to ask somebody else for it.
And therefore the assumption like perfect ideal prioritization is Everything is there in full for us to be able to say Yes, like it's like turning up to a buffet and you can just pick whatever you want and it's as much as you want Um, but instead it's ringing in an order with a delivery company and hoping that they they'll deliver those things as you requested them Um, but you're using a protocol like we're not using h2 to a back end Um, we're using hb 1.1 or whatever something else even that doesn't carry the same prioritization Ability in any way so you're having to kind of run in a very short buffer period of things you can do and I think When you're writing specs, it's important to to write language that people can realistically implement without violating the spec even if like You say you must do prioritization because that's the best thing people will turn around and say yeah, I know but I can't I Literally, I cannot do that.
So I'll just ignore the requirement. Anyway, it doesn't there's no protocol police going around um, I mean effectively you two kind of wear by Naming and shaming, uh of people, uh, which is great.
I mean, I think actually people it's not so much a shame It's the the validity.
How can I how can I check what i'm doing is correct, you know, the the h2 prioritization stuff was about um expressing trees of of requests that related to each other and It could be hard to do and there's maybe like edge cases where you didn't quite understand the spec in a way Or you understood the spec but the way you implemented it Would be problematic especially as pat said, but you're kind of speaking to a tcp Api that isn't giving you all the information that you really need to understand things like oh i've changed my mind Can I take that back and you're like no it's it's it's not on the network.
It's not in the client It's it's still local, but you you can't change your mind now.
Sorry um So, yeah the if say there was better testing and visibility into this stuff at the time of development People would say well Yeah, you know, here's an ideal shape of a thing Um, I don't I don't meet that but actually i'll i'll explain why here are my constraints And here is a good reason and maybe you could do modeling as well, which is if you're a reverse proxy um, you know, there's there's a model where You're speaking to a cache behind you that doesn't have that or it does have that in memory Uh, the link might be congested these kinds of different things to help people understand what is just plain broken This is what was the best that we could do at the time.
Um I I don't know if we're there yet.
I I do I would so, you know working on this new spec for extensible priorities I'd really like some more test cases, but like the difficulty is in where you do that validation unless you have controlled server conditions then You you don't necessarily know what to check for at the client I mean the simplest thing i've done some of these I think I presented on the show before which is the spec says, uh If you request five things sequentially, they'll come back in the order that you requested them sequentially, you know first in first out Right, there we go.
We'll we'll do that one And you know robin marx who've had on the show before has has done this multiplexing analysis of servers even without the prioritization signal, right?
server has to pick An implementation choice when it receives things regardless of if there's a signal to tell it some extra stuff or not Um and robin and saw stuff like people doing last last in first out Um, which seems a yeah a bit odd.
Um That identified a bug and so that you know in in practice some people it's really easy to fix this um, once it gets to the point of large-scale deployments the technical solution might be easy, but the rollout and the Things around that can be tricky.
So Yeah, I I don't know like I don't even though the the scheme is simpler like right now.
We've got a very interesting discussion about How much how much guidance we give server implementers because yeah, the language is There's a signal right and that's what's missing from h3 now There's no signal, but we still say that you should pick some sensible strategy for the best thing that's most appropriate to use case Without defining what's most appropriate because that's the web.
That's somebody else's problem And that doesn't help people.
Um, but you know, there's a signal.
It's simple to process. Um Yeah, but you you don't have to do anything with that And I think I mean like you were saying in the in the middle box, uh reverse proxy kind of situation Um, depending on what you're optimizing for at the time when you're building the reverse proxy um, hopefully just awareness of how priorities matter and what reactions should look like will help as people are implementing but I mean, you know, let's say you have 50 requests that come in All with different priorities and you're making 50 back-end requests Uh the decisions about how many concurrent requests do you make at a time if you're talking htp 1.1 Do you open 50 connections and make 50 separate requests?
Or do you do six at a time to a back-end server and in what order and when they come in?
Um, do you read the entire response and just flush it through?
How do you apply back pressure?
Um, how do you try and minimize the in-flight data to the client so that you don't have um Too much in-flight and you can't respond to a priority change when like a higher Priority response does come in from a back-end to interrupt something that you are using to fill in data uh, for example, and I mean it's People don't appreciate the complexity I guess Um, and it makes it look easy when we have this simple test that goes.
Hey, it's working or it's not There is a lot of complexity to it.
Um, but hopefully as people are doing implementations they can be aware of what the trade-offs are Yeah, and sorry I just wanted to share my screen here because I was reminded of like some of the work that what we did when pat was with cloudflow was to look at you know, the Fixing prioritization effectively from the server's perspective and making it better And I think one of the aspects of of web page tests compared to dev tools that really helped us there was to look at Uh, you know hp2 is a framing a framed protocol you're delivering data um in multiplex ways But you're getting data frames at different points in time on this waterfall and like what dev tools doesn't help you see What's happening at that network layer, but web page test does so in these examples Um, right.
We we have like this svg is getting loaded Um, if I can make that slightly bigger No There you go Yeah, well, um image quality can can vary on streams So it might look good for us on zoom, but for our viewers, it could be a bit fuzzy Um, so you can see here like we these darker bands Within the single transfer of of the response data here the single bands identify The data frames, is that correct pat?
um, not necessarily individual frames but chunks of data coming in and the width of the the bar is how much data And how long it took and usually like when you see the case where you have an early one and then most of the stuff later um, the headers are usually the first, uh thin line and any of the responses and almost all implementations Uh send headers immediately regardless of the the priorities of the the response bodies and then they prioritize the the actual payloads Um, but then you can see the interleaving and it's not uh limited to hdb2 It does it for hdb1 as well now and it does it across connections So you can see how the incoming data is interleaved as you have multiple concurrent requests and responses and as they span third parties because You know when quick was or speedy was being developed at google Uh, and you mentioned facebook and twitter were sort of all of the initial implementers um Those were like super simplistic cases where you're talking to like a server and so prioritization works great well when it works, um on a single connection, but once you introduce the real world, uh, that is the web and you have 20 or 30 third parties, um prioritization across those doesn't work And you you basically throw all the data on the wire and let the network sort it out And so the chunks really help to see How the responses interleave with each other across connections from different, uh third parties and things like that Yeah, and we've um, you talk about connections.
I scroll down the page slightly below the waterfall where we've got the connection view which um, you know the waterfall had uh, here 97 requests For a single web page, um, which is quite a lot.
But when you when you look at The the mapping of those requests to different domains, you know, I typed in www .Cloudflare.com Because andy told me at the start I was wrong and the first request here was ocsp.
So Point to andy. Um, and then yeah, we've got some domain sharding going on here.
We've got assets on um some stuff um, but but this view shows you just just the connections are all involved, um with loading that page and You know unless there is a thing called connection reuse formerly in h2 also known as connection coalescing Um, so it is possible um for You know different named domains to result in the same underlying network connection I was surprised to see that didn't kick in here yeah, I wonder whether the Connection 4 has got anonymous requests on it.
Whether the anonymous has been forced on those scripts wasn't allowed to force onto a separate connection i'm not sure I don't know, but but this is the the having having the information there lets you see what was happening really Um, like it it's not gonna like come up with a list of oh be sure to check you're doing connection coalescing You almost need to write those kind of checklists yourself or or hire a consultant to help analyze this stuff for you But it lets you reason about things with your own knowledge and then you might go away and say oh, yeah somebody I know some maybe someone's changed the certificate unintentionally and dropped it from the the Whatever the the part of the certificate is the SAN or whatever.
Yeah, um, which I can't remember what the acronym means um, but there's other things and you know, the you mentioned third parties earlier like they can play a big part in the overall performance of your site and You They're different connection.
They're different. Um Administrator of those things they might be using a different protocol version even um, there's so many factors at play here Um, which is why it's important to test And they contend for that last mile bandwidth.
They're competing you know with um Bandwidth from well at least from you to the isp perhaps even beyond there you never know and You know, so one of the one of the great things about This connection waterfall is is you've got quite a simple one But even here you can sort of if we look to say connection five for google optimize quite often you can spot where there are There's been no data transmitted and you can see which other connection has had the data at that point in time So you you can get some Understanding of how your third parties are interfering And just from a network contention point of view from this connection view That introduces some more fun, um when we were speaking about congestion control where you know for optimal h2 Um, maybe quick or something.
Um Dbr or one of the other paste connection, uh congestion control algorithms works really well.
Um, but Uh, there have been like earlier revisions of it if you're not on bbr v2 or whatever, uh would Totally dominate the connection and starve out quick, right?
And so if you've got a third party that's using bbr, but you're using um cubic for your main Uh resources it's entirely possible like a video Uh downloading in the background will completely obliterate your connection and destroy You loading the javascript resources you need or something like that as well And so I mean that's probably a layer below what most web devs are thinking about but it's useful to know these things Um, at least at the level of the the multiple connections, uh congesting or competing with each other yeah, like i've been aware of that, but i've never really thought of um bbr on tcp affecting things so Badly and this is why I think the you know new version upgrade like Three is a higher number than two And two is higher than 1.1, but they're not just turn it on and it just works Um, you know, those are a nice headline titles on the blog post um, I had former colleagues who wrote ones like that in jest that actually You know getting hb2 on for a lot of people was it was wrapped up in enabling ssl and it was incredibly hard Even get to that point in the future.
So it was a major success, but then you get to the point where Okay, now we can run this and test it Oh dear, like it didn't really deliver on its promises and you go through another round of analyzing this stuff and um, you know Matt hobbs has done some great stuff on on gov.uk Which is also already like one of my favorite web pages as a consumer Of just government services in the uk that it was already very focused on like being good um, but the difficulties Or not the challenges that they found and like the assumptions that they'd made That like how you would expect h2 to work if you read the spec and looked at some people's blog posts, but for them Some weirdness had happened and they were able to resolve that which was really cool um, I think like in this in this view that I have here is it Like where can I see what protocol each of these requests was made on if you click on it?
Um, so if you click on that one, uh protocol Yes, um, which is great. So if I wanted to try and reload this using hdb3 by default um, like we said in some previous weeks the Um, the browsers haven't turned this on yet by default So you'd need to override it with like command line flags if I was running this from my machine But I should be able to do that with web page tests as well, right?
Yeah, I mean so web page tests um I was to do a new better or worse worse gives you like almost full access to the command line and flags and um, but it's worth remembering that Web page test at least the the first view, uh starts with a completely clean browser slate and profile and it hasn't learned anything and so Depending on how browsers have implemented hdb3, uh discovery So if it's if it waits until it gets an alt service Um, and it only does hdb3 next time even when browser is implemented by default web page test will always be visiting a page In the first time situation and we'll never have learned that Uh, so you can use command line flags to turn it on and force it for specific domains If they get to the point where they're doing discovery over dns, for example, or something like that, then it should work seamlessly I forget which Okay, uh, I can't remember off the top of my head what the flags are now, um I think enable quick Yeah, and I can't even remember there's I always have to search for peter's website that lists all of the chromium flags basically Because they also like to change from time to time Indeed, um, let me let me figure that out and kick it off in the background while we continue talking Um, and we can see what the results are if if it can load in time um but What else is there we talked about, um devices so, you know How are you what what kind of devices are able to to do with web page test?
Is it just pcs? Are we looking at other things too? so the code is Able to run pcs mobile phones tablets android ios windows linux macos chrome firefox Almost any of the chromium based browsers brave the new edge um safari on ios Doesn't yet do safari on desktop now as far as far as what you can actually test from the public version of web page test which is different from what the The code can technically let you do.
Um Virtually all of the locations are running vms And so it doesn't have a physical gpu.
Um, it's not like running on the bare metal and it's most often uh linux, um It's close enough to to how windows behaves Uh that it it works good for performance testing, but it's also you know Super easy to deploy vms anywhere in the world on linux with the clouds um dullus is You know my house, um, which is the only place where physical devices are currently hosted and so here Desktop, there are a few stacks of laptops.
There's some thinkpads and aspires if you want to do Actual windows testing on real hardware with real gpus um, and then there's uh banks of mobile devices, uh More android devices than there are ios devices and tend to skew towards the lower end of android devices um just to give you uh, the users the the way to to see how their sites are on the We call them mid-range phones There are some of the really low -end phones as well just because most devs tend to have High-end devices and they're used to what the high-end experience looks like and they have no idea how bad mobile cpus can do Um, and then there's a suite of ios devices, um I think iphone 8 is the the newest Ios devices, but I mean the the apple cpus just completely trounce everything on the planet Basically, and so even the ios 8 phones are much faster than just about anything else, uh for testing Interesting so you've got a um A lot of batteries that you have to cycle through on those devices It's getting Yeah, it's I mean they're all Getting to the age right now where the batteries are starting to swell.
Uh, I think they usually last Give or take two or three years, um before they start to fail, uh for at least for the better devices um Sometimes they fail within a year Um, but i've got a fair number of uh swollen batteries just waiting to catch fire down in the basement i'm sure Your wife must love you.
Well, I don't think i've told her yet She she she knows all the phones are sitting there.
I don't think she's aware that they're ticking time bombs ready to blow Yeah, so that's the motor energy or that's what most of the test devices are Um You've got have you still got your alcatel 1x I still have the 1x.
Um, it took it went offline for a little while and I had to Basically reconfigure it and reset it up, but it should be back up now.
That's The alcatel 1x and the original gen 1 moto e are the the two lowest end devices I have Um, and I actually had to extend web page tests Timeouts on a per device basis Um, just because those phones are so much slower than everything else I think testing cnn on them.
For example, the amount of javascript that has to run Uh can take upwards of five minutes, uh to complete when you load the page and I mean that's an experience you don't really think about if you're a developer with like the latest iphones and Cnn cnn still takes a while to load, but it doesn't take five minutes So after after some trial and error I got the right one, um, so the first two were were easy, um because I use those regularly, but it's the origin to force quick on um Parameter which is is the one pat was mentioning which should Oh should work, but it just failed.
So, uh, I obviously got it wrong um You could have got it wrong or canary could be unhappy today.
Um, that's hard maybe Maybe 29 is I don't want to click rerun.
No, uh Because this is kind of one of the challenges of evolving standards, you know, we're still we're still developing the specs so we're releasing new versions not a you know, weekly pace or anything, but the um chrome from different release channels, uh targeting different versions, so i'm going to try uh Hb3 version 27 in this case, which I know.
Um Cloudflare.com still supports, um back to 27.
So that's usually a good point because we go back in time a little bit I think that what this illustrates is that because I said force quick for this domain that You might also want to HTTPS just in case I don't know how force quick and non-https work as far as chrome goes It could it could have tried 480 and then it got itself very confused yes, um, but you know, this is effectively from a user's perspective if they had this kind of Failure, it would have been a silent one It would have gone to the old service upgrade but because we specifically told the browser to basically do this Um it hard failed which which helped because I then didn't have to go through the the waterfall and validate um that those things had happened so Oh quick protocol error.
Okay. That's a new one. Uh Hey, this is what happens when you try to do live live demos Um, but ultimately yeah what we should see is the same kind of page but with you know, maybe a different waterfall Try it again.
Yes on and try the version 29 Okay I mean secretly I arranged this call just so that because i haven't been able to figure this out.
So i'm joking i'm joking I'm having the experts on to guide me through it's like It's uh, even every now and again, I think first time I tried All the the quick demo pages.
I couldn't get one to work And then after a few goes of upgrades it actually worked for me but Yes, it's it's a moving target isn't it and that's the challenge and that's You know part of the challenge for anybody who wants to test right now That's sort of the multiple customers of uh webpage test if you would um So you've got the site developers that are using it to optimize their sites um, but as a chrome developer, I've also used it, uh to try and uh, test chrome and command line flags like this as Uh when we were at Cloudflare when I was at Cloudflare with you as well we'd use it to try and test the the server side deployment and so Um, that's part of why I try not to To be too, uh judgmental or too prescriptive about what the results are Because different people are doing different things with it And clearly they don't always work Yeah, i'm just going to try one last thing where I don't put the force just just to see what happens in this case is the fallback would work or if something like that You'll see Web page tests recently to check how often reprioritization was happening didn't you as part of the Hv3 stuff.
Yeah, so you'll have um added a command line flat or a blink feature flag to turn off http2 prioritization Uh, and so when that went out to canary, uh, the web page test agents picked it up automatically And so we did a a big batch test uh 25 000 sites, I think um All on a server that was known to support prioritization And we tested those 25 000 sites with and without prioritization Enabled from chrome side to see what the impact on the results would be And I mean kind of as we'd have expected, uh largest contentful paint Uh was the metric that showed the biggest change because chrome one of the chrome's Performance optimizations is once it does layout Any images that are within the viewport it boosts the priority.
And so if you're talking to a server that Can handle reprioritization?
uh, you'll get those images loading sooner, especially if they're actually defined later in the html or if they're in a a css background image or something like that and so Basically what came out of the test is largely what we expected sites that had background images that triggered largest contentful paint had big improvements when Reprioritization was supported Ah h3 q o what's q o 50?
Yeah, this this is us fetching some uh assets from a google, uh storage Which is also, you know a third party from from the you know, the domain I typed in um and hq Uh h3 q50 is is effectively google Migrating away from google quick or itf quick.
Um, so they're using some custom alpn identifiers here um, so you can see that yeah, the Enabling the flag has allowed that domain to load it in that way.
Um, something's happened Yeah, um for whatever reason that it didn't work, but in other tests it did I can guarantee people but just go back to the point about um the work that you were doing on the Kind of the data gathering I really want to thank you for that you know as a as an editor of a spec like we we have a question in the itf group of Do people want this feature or not?
And I might have a personal opinion, but you know, my job is to reflect the views of the working group and so there's people who are for and there's people who are against and Having some data really helps us You know look at that and see it's not the only signal Or or data point, but it is very very useful.
I you know that gathering data doesn't come for free um So thanks to the folks that are involved in in enabling that flag and gathering the data and analyzing it and presenting it back to the group with you know links to the raw data so often in the past it's been a Oh, we did a thing and the answer was this and it's it's hard to to validate that Um what the outcome of that discussion is going to be I don't know yet.
So we I don't uh from the working group chairs a Intent to implement email went out and there's been some responses to that so by The next interim in a couple of months, maybe we'll hopefully continue to make progress What's good in the meantime is people have been implementing that signal and trying things out Um, you know, we've been experimenting in quiche Um to add some level of prioritization So like one is this the scheduling and getting smarter on that regardless of what the signal turns out to be the challenge with the reprioritization aspect is that again Due to to the way that hp3 solves the head of line blocking problem because it doesn't It doesn't depend on anything coming in any particular order.
That's a great solution to a problem, but creates its own problem That needs a solution which is if if you get a reprioritization for a thing that you don't know about Yet, what do you do?
You know, here's a signal like this thing should be that okay.
Well, i'll buffer it up for a while Okay. Now we need to describe how much you buffer up It that thing effectively references an object by an id while The quick layer doesn't expose that information.
Although it's like earlier in the call we talked about how quick is gonna Maybe be more user land and expose more information and let you have an integrated kind of solution Some people that's not the way that they want to approach things.
They want more clean layering of the libraries and so like The the api doesn't tell you details about the transport stream because all the stream concepts are taken out of the http layer and put Into the library you still get told that hey you had a request arrived So from a service perspective a request arrived and it has this identifier And you go.
Okay. That's a quick stream id But you might not be able to query how many stream ids am I allowed to use right now?
But that's one way and not all apis are the same.
So some would say oh, it's easy. You just call this function but anyway, that that's That's for me and the working group to solve.
I assume there's got to be some crossing of the streams there um the layers anyway, uh since prioritization was moved into headers, right and so it's at the the priority info is at the application layer, but the actual priority implementation is down in the the quick protocol layer Yeah, uh, yes and no it's it's it's So so the that's on the api Uh the the reprioritization So there's no way to send a header while like you've already had a request that you've sent Like no, I didn't mean reprioritization.
I just meant straight up regular prioritization Um, yes, sure If you if you send it in in along with the headers along with the request There's no problem here because the headers opens that stream up But the problem you get is that if you sent me a request and immediately sent a reprioritization those things are on different streams and that packet that had the request might go missing and then I I get a signal for a thing.
I don't know about yet So I need to buffer it or ignore it But we probably want to respect your reprioritization because that's the most up-to-date information that you sent us and if we ignore it, we might get into a situation where We're ignoring good information and we're delivering something that you don't want which would really upset andy by by the sound of azalea comment I'm just joking.
Um We're Nearing the top of the hour that seems to go very quickly Um, i'd like to just take the chance to to thank you for coming on the show Um giving up your valuable time before I miss it Um, and then i'll give you the final few minutes to make any closing statements But that's that's been really useful.
We probably spend another few sessions on this.
Um Like yeah, I I need I need more practice with web page tests, obviously But yeah anything you'd like to write It was a good book.
Can you recommend?
I mean, thanks for having us.
Um Yeah, and you know not actually implementing any quick protocols.
It's easy for me to assume everything's easy, but um we also need to I mean chrome provides some of the most Uh granular detail that's out there which is why we tend to use chrome most but as a spec author you get to to honor or Live up to the fact that you know, the world is not web pages in chrome at the protocol level, right?
It's all of the browsers all of the servers all of the use cases.
There's video streaming. There's all sorts of other things uh beyond just loading web pages in in Uh chrome and so we appreciate the work you're putting into it ship reprioritization Yeah Okay, thanks.
Thanks for your comment. I'll take it on board Yeah, thanks for having us.
Um, yeah, I think it's I think the one of the biggest challenges we face is you know You've got one group of people building browsers and one group of people building servers and you know, not everybody from from a server and spec side is always aware of Some of the complexities of how browsers behave so as much as me and pat throw rocks about prioritization we're very aware it's you know, it's a complex world and You know, it's it's just about one um publicizing that prioritization is broken, but also probably we need to Help come up with some better test cases that people can run at a a wire level without running a I'm actually running a browser like a lint for the protocol prioritization sort of thing Oh, it sounds amazing.
Um, yeah, we we should we should carry on this call afterwards and sketch something out You can come back next week and have it all done, please thanks.
Yeah No, um cool, well, I think maybe we've got 10 seconds left or whatever but um, no, I think um Yeah, i haven't really got anything more to add thank you again