⚡️ What Launched Today - Friday, June 23
Presented by: Sam Marsh, Sid Chatterjee, John Cosgrove, Matthew Bullock, Nevi Shah, Dirk-Jan van Helmond
Originally aired on September 25, 2023 @ 2:30 AM - 3:00 AM EDT
Welcome to Cloudflare Speed Week 2023!
Speed Week 2023 is a week-long series of new product announcements and events, from June 19 to 23, that are dedicated to demonstrating the performance and speed related impact of our products and how they enhance customer experience.
Tune in all week for more news, announcements, and thought-provoking discussions!
Read the blog post:
- Speeding up APIs with Ricochet for API Gateway
- All the way up to 11: Serve Brotli from origin and Introducing Compression Rules
- Making Cloudflare Pages the fastest way to serve your sites
- How we scaled and protected Eurovision 2023 voting with Pages and Turnstile
Visit the Speed Week Hub for every announcement and CFTV episode — check back all week for more!
Transcript (Beta)
Hello everybody, welcome to Friday. It is the final day of Speed Week 2023. In total today we announced five blogs, covering end-to-end Brotli support.
There's a great post on Cloudflare Radar's Internet quality feature, which also has a dedicated Cloudflare TV slot following immediately after this, I believe.
We also have posts on accelerating APIs with a product called Ricochet, a post on Pages and how it's the fastest way to serve sites, and also a great post on Eurovision and how they built their voting platform on Cloudflare.
I'm proud and pleased to say we've got three authors of those posts joining us today on this final slot of the week, and let's start with Matt Bullock on Brotli end-to-end.
So Matt has been on the show twice already this week, talking about Observatory on Tuesday and Snippets on Wednesday, and he's back again today, can't get him off, talking about Brotli.
So Matt, again, can you just kind of introduce who you are, what your role is, and talk a little bit about Brotli and what it is we're announcing today.
Yeah, so I'm Matt Bullock, I am product manager for FL, which stands for Frontline; it's the service that takes all of the configuration you put into Cloudflare via your dashboard or API and applies it to your traffic.
Yeah, and today I'm going to be talking about Brotli, which we've supported for a long time from Cloudflare to the eyeball, but now we're announcing it from your origin to Cloudflare, so you can now do Brotli end-to-end.
So what, yeah, I mean that's probably the first question, right, what is, if we've had this from eyeball to Cloudflare, so the browser connecting to Cloudflare already, then what's the benefit of having support for it from Cloudflare to the origin to the servers?
Yeah, so obviously this week we've talked about how serving bytes and bits as fast as possible is paramount to website performance.
The quicker the browser receives the bits and bytes and can put them together, the quicker the user can interact.
One way to reduce bytes and bits is compression, so squeezing it, reducing it down as much as possible.
A simple way of thinking about this: if you're sending a parcel through a delivery company, the smaller you can make that parcel, the cheaper it is to ship, and you can probably make it ship quicker too; you can put it in the post rather than on a boat or whatever.
So yeah, that's a big part of this. We are now allowing customers to compress with Brotli, which is a compression algorithm built by Google, I think around 10 years ago. Origins already compress their content, but now they're able to send it to us in a heavily reduced, compressed format that we can then either decompress to apply our features and functionality, or serve straight to the end user.
So it's really reducing bandwidth and reducing time for the bits to appear at the eyeball or at Cloudflare.
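To make the "fewer bytes on the wire" point concrete, here is a minimal sketch of how much a repetitive text payload shrinks under compression. It uses gzip from the Python standard library rather than Brotli (Brotli support needs a third-party package), but the principle is the same:

```python
import gzip

# A repetitive payload, typical of HTML/JSON API responses.
payload = b'{"status": "ok", "items": [], "cached": false}\n' * 200

compressed = gzip.compress(payload)

# The compressed body is what actually crosses the network.
print(f"uncompressed: {len(payload)} bytes")
print(f"compressed:   {len(compressed)} bytes")
```

Highly repetitive text like JSON typically compresses by an order of magnitude or more, which is exactly the traffic this feature targets.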
Yeah, I think that last part is worth touching on.
So, I mean, most people would understand compression is making a file smaller.
And obviously, like you say, the main obvious benefit there is, if the file is smaller, it takes less time to send from the server to the browser.
But the other element of this is bandwidth fees, right?
Can you kind of touch on egress and how compression is going to help save people money?
Yeah, so I would say most people that have an origin of some description in the cloud will be charged bandwidth for data leaving that origin to the end user.
And that end user could be directly to an eyeball, or it could be to Cloudflare.
It's anywhere that's sort of asking for the information, the bytes, and then you're sending it down, like there is egress coming from your origin.
So if you don't compress those files, again, going back to the analogy of shipping, like they're very large, that's going to cost more to ship.
So if you can compress that down heavily at the origin and make that package smaller, then it's a smaller file that comes out from your origin.
And if you tally that up, you know, over a thousand or a million requests, then it becomes a hell of a lot cheaper to serve your content, reducing bandwidth and reducing egress from your origin.
So saving money, which is a nice bonus to have in the current market.
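The savings math is simple to tally up yourself. Here's a back-of-the-envelope sketch; the response size, compression ratio, and egress price are all assumed numbers for illustration, not Cloudflare figures:

```python
# Hypothetical numbers for illustration only.
response_bytes = 100_000        # 100 KB uncompressed response
compression_ratio = 0.25        # Brotli often shrinks text roughly 4x
requests_per_month = 1_000_000
egress_price_per_gb = 0.09      # assumed cloud egress price, USD/GB

uncompressed_gb = response_bytes * requests_per_month / 1e9   # 100 GB
compressed_gb = uncompressed_gb * compression_ratio           # 25 GB
savings = (uncompressed_gb - compressed_gb) * egress_price_per_gb

print(f"Egress saved: {uncompressed_gb - compressed_gb:.0f} GB")
print(f"Monthly savings: ${savings:.2f}")
```

Scale those assumed numbers up to real API traffic volumes and the egress savings add up quickly.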
So yeah, given that, I think the question on most people's mind, and we've already had one person asking on Twitter, like a minute after the post went live, A, who gets access to this?
Which plan types get access to this?
And B, when do they get access to this? What does the rollout look like?
So everyone will get this. The way this works, there is an HTTP header called Accept-Encoding, which we currently pass to the origin, and right now it just contains gzip.
So we just accept gzip or uncompressed. With the change, we will say we accept gzip and Brotli (br).
And as soon as it receives that, the origin can decide whether to send gzip, uncompressed, or Brotli.
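The origin-side decision Matt describes is straightforward content negotiation. Here is a minimal sketch of how an origin might pick an encoding from the Accept-Encoding header; it's simplified (it ignores q-values and wildcard entries the real header grammar allows):

```python
def choose_encoding(accept_encoding: str) -> str:
    """Pick the best encoding the client advertised: prefer br, then gzip."""
    # Split "gzip, br" style header values into bare tokens.
    offered = {token.split(";")[0].strip() for token in accept_encoding.split(",")}
    for candidate in ("br", "gzip"):
        if candidate in offered:
            return candidate
    return "identity"  # fall back to uncompressed

print(choose_encoding("gzip, br"))  # -> br   (the new header Cloudflare sends)
print(choose_encoding("gzip"))      # -> gzip (the old behavior)
```

With the rollout, origins that already handle `br` for browser traffic can simply start returning Brotli to Cloudflare too.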
Currently, we have this for free plans enabled in our Canary data centers.
So just monitoring, making sure everything's working fine.
There's no hiccups, there's no issues. And then in a couple of weeks, we'll enable that globally.
So free plans will just see this from all of our data centers with the new Accept-Encoding.
And then we'll enable Pro and Business plans pretty much a week after that.
And then enterprise will follow in a few weeks.
So in a month's time, maybe two months, probably, hopefully a month, everyone will start seeing this and being able to benefit from it.
And yeah, everyone will be able to send us Brotli compressed content.
Nice, nice. Excellent. Yeah.
And if you want to learn some more, then head over to the Cloudflare blog. I think it's the top post right now in the section.
And yeah, have a read. There's a lot of really interesting information in there.
And there's also a really good section at the end on custom dictionaries, or kind of shared dictionaries.
And that's going to be the next thing we look at for Brotli once we roll it out.
So thank you very much, Matt. Next, we've got Sid, number one show, rookie numbers.
Can you start off, introduce yourself again? What is it you do here at Cloudflare?
And an overview as to what it is you're announcing today, what your post's about?
Absolutely. Thank you, Sam. My name is Sid. As you mentioned, I am a principal systems engineer at Cloudflare.
I lead the Pages team. So yeah, I'm based in London and I've been at Cloudflare for a little over a year now.
Today, we're announcing that Pages is a lot faster than it used to be.
It's always been fast, but we've done this massive infrastructural change under the hood.
And every single Pages site has gotten up to about 10 times faster for time to first byte.
This is all live. It's all sort of happened under the hood. There's nothing that folks need to do.
A lot of this started about a couple of months ago, when I think we realized that the more deployments you had on your Pages project, the higher the latency was to find the right deployment and route to it when you hit a URL.
And that resulted in sort of almost punishing users for using the platform a lot more, which seemed pretty backwards.
We started working on this problem.
And after talking to a bunch of other teams internally, we realized that Workers for Platforms, another core product that we've been working on, solves this really elegantly.
And yeah, that's what we did. We adopted Workers for Platforms under the hood and migrated every single deployment over.
And now, time to first byte is under about 40 milliseconds anywhere in the world.
Nice. Yeah. And I was just reading the post earlier, and I read those numbers, how we got it down from over 600 milliseconds to 40 or 60.
So at a high level, can you, if you can, explain how moving to workers for platforms provided those numbers?
What was it about workers for platforms that showed such incredible numbers?
I'm glad you asked. So let's talk about sort of before and after, right?
Before all of this happened, the way Pages worked under the hood is that when you hit a Pages project's domain, the hostname, depending on which subdomain you're hitting, might map to a preview deployment, an alias deployment, or a branch alias.
Depending on the URL you hit, internally, we have routing tables that we need to look up and iterate through to be able to find the right worker, the right pipeline under the hood to route you to.
This is how Workers works under the hood as well, and it's how Pages started out.
The reason this hasn't been a problem for workers is because not every deployment on workers lives immutably forever, but on Pages, they do.
Every single deployment you make is an immutable URL. It never goes down, which means every single commit you push results in a new one, right?
That lookup I previously mentioned on the routing table is O(n).
So the larger it gets, the more deployments you have, the more things there are to search through to find the right worker, the right pipeline to route to.
Workers for Platforms flips this equation the other way around.
Every worker pipeline is actually hashed by the hostname.
And that means the lookup becomes O(1), which is great, because it's no longer a function of how many deployments you have.
And Workers for Platforms was designed to do this because, as the name suggests, it's built for building large platforms on top of Workers, right?
And yeah, that's how that lookup is no longer O(n), it's O(1); it's much, much faster, regardless of how many deployments you have.
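The O(n) versus O(1) difference Sid describes can be sketched in a few lines. The hostnames and worker names below are made up for illustration; the point is that a linear scan over a routing table grows with deployment count, while a hash lookup keyed by hostname does not:

```python
# O(n): scan a routing table of (hostname, pipeline) pairs, one per deployment.
routing_table = [
    (f"deploy-{i}.example.pages.dev", f"pipeline-{i}") for i in range(10_000)
]

def route_linear(host):
    """The old approach: iterate until the hostname matches."""
    for hostname, pipeline in routing_table:
        if hostname == host:
            return pipeline
    return None

# O(1): hash the hostname straight to its pipeline, as Workers for Platforms does.
routing_index = dict(routing_table)

def route_hashed(host):
    """The new approach: one dictionary lookup, regardless of table size."""
    return routing_index.get(host)

# Both find the same pipeline; only the cost differs.
print(route_linear("deploy-9999.example.pages.dev"))
print(route_hashed("deploy-9999.example.pages.dev"))
```

Since every Pages deployment lives immutably forever, the table only ever grows, which is why moving from the scan to the hash removed the per-deployment latency penalty.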
Nice. And yeah, I guess the main benefit of that is performance.
And at the bottom of the post today, we talk about comparisons against some of the alternative products like Netlify and Vercel.
Can you kind of talk us through what those results were for people who haven't seen the blog and how you went about testing that and kind of comparing those numbers?
Absolutely. So we ran a bunch of web page tests across the same site that we deployed to Netlify, Vercel, Pages, and so on.
I think a big reason that Pages ended up being faster is because of just how quick our CDN is and how many data centers we have.
And that combined with the fact that looking up a Pages project and routing you to the right deployment and to that worker, all of that put together means that pretty much throughout the world, irrespective of if you're using mobile 3G or if you're on desktop cable, your time to first byte is the lowest on Pages for the same site deployed across all of these platforms.
That's been a huge win for us. And again, it's only going to get faster with other projects like Flame.
We intend to shave down many more milliseconds of this.
Yeah. Nice. So I guess the last question is, if I want to learn more about Pages, workers for platforms, if I have any questions about those, where should I start?
Well, I would read the blog post because the blog post goes in depth into a lot of the things that I've briefly touched upon.
But beyond that, our docs are a good place to start. Workers for Platforms has developer docs on developers.cloudflare.com, and so does Pages.
And hey, I learn best trying things out. So maybe build a site, try out Pages on pages.dev, see how you feel.
Nice. Yeah. I mean, I echo that.
I think it's the best way to do it, right? Try it, break it, learn, break it again.
Thank you. Thank you very much. John, last person on Cloudflare TV speed week.
There's an honor in there somewhere. Can you, as usual, just introduce yourself, what it is you do here at Cloudflare and talk a little bit about what you're announcing today?
Yeah, Sam, I'm happy to. So my name is John Cosgrove. I'm the product manager for Cloudflare's API Gateway.
And customers with API Gateway today gain a lot of API security and management benefits already.
One of those things is we catalog your endpoints, like with all the variables and all that kind of stuff.
And managing endpoints makes things much simpler when you have it all condensed together.
But today, we're going to leverage this API knowledge for caching APIs with a new feature called Ricochet.
And look, people have been caching static content with Cloudflare for ages, right?
But you don't see it that much with APIs.
So with Ricochet, our goal is to increase the amount of caching, API caching specifically, on the Internet by making it simpler and safer to cache APIs so that we can all benefit from faster response times and snappier apps.
So I guess question one, for people who aren't aware of it, is: what is an API Gateway?
Why do you need one? Yeah, certainly. So API Gateways act as the front door to your API, where you could have lots of disparate teams managing lots of disparate applications.
And you need a sort of unified way to interact with that API, no matter who's building it or what.
It's much like a classic reverse proxy use case with Cloudflare on the web.
So API Gateways allow authentication, authorization, security, and really allow API developers a way to keep control and visibility on their APIs.
And with Ricochet and the plans to essentially build caching, like you say, into this, it's quite a novel and quite a difficult problem.
Like we say, from the CDN side of things, thinking about API caching has long been quite a difficult issue to solve.
A, because there's a huge amount of dynamic content in there, which makes it impossible, arguably.
But B, just because of the changeability of it.
So can you talk through what some of the challenges are going to be, or have been, with caching APIs and how the product might unlock that?
Certainly. And dynamic content and APIs kind of go hand in hand. So you're thinking, well, how could I even start to cache this stuff?
And I think our initial use cases center around two things.
One, it's really hard to cache APIs where multiple users with mixed authentication hit the same endpoints.
Where if you cache that data normally, you'd be sending my personal data to Sid, right?
And nobody wants that. So we're going to focus on that use case, as well as use cases where, think something like GraphQL, where all the GraphQL requests go to a single endpoint.
And the variation in that data is actually in the request body, in the post data.
So there, you can have this kind of anonymous access where you can't really cache it unless you do something extra, so that you know which data from the cache you can respond with for which request.
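The "something extra" John mentions amounts to building a smarter cache key. Here is a minimal sketch of the idea, not Ricochet's actual mechanism: include the request body (for GraphQL-style single endpoints) and, where responses are personalized, the user's identity in the key, so cached data is never served to the wrong request:

```python
import hashlib

def cache_key(url, body, user_id=None):
    """Build a cache key that varies on the POST body and, optionally, identity."""
    h = hashlib.sha256()
    h.update(url.encode())
    h.update(body)
    if user_id is not None:
        # Keys for authenticated responses include who asked,
        # so one user's data is never served to another.
        h.update(user_id.encode())
    return h.hexdigest()

# Same GraphQL endpoint, different queries -> different cache entries.
k_scores = cache_key("/graphql", b'{"query": "{ scores }"}')
k_weather = cache_key("/graphql", b'{"query": "{ weather }"}')
print(k_scores != k_weather)  # True
```

Keying only on the URL would collapse every GraphQL query into one cache entry, which is exactly why naive CDN caching fails for these APIs.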
So I think there's some low hanging fruit use cases with these two things that we're going to focus on first.
So things like weather forecast, current conditions, airline flight tracking, and flight status, live sports scores, things like collaborative tools where everybody's requesting all the same data all at the same time.
Like API teams know that these sorts of things would benefit from caching.
And I'm sure some teams are caching these APIs, but we really want to open it up to let almost anybody have the benefit from API caching.
Yeah. And then I think that's probably the question that everyone's mind now is, how do I get access to this?
What's the plans? And I know the blog kind of mentions next year, because obviously this is quite a big challenge to tackle, a difficult challenge to tackle.
But what does the roadmap look like for this?
What do the timelines look like for people? Yeah. I think we've got good line of sight to making this happen.
And so right now we're just in a prioritization exercise with all the other great new features we want to ship with API Gateway.
It's a rather new product for us. So lots of things battling for priority, but I would really love to hear from people out there on whether their use cases are in those ones I just mentioned, and if they've thought of something that we haven't.
And with all of that in mind, we can really start to get the product going in early 2024 and hopefully deliver it later next year.
Yeah. Nice. Yeah. I mean, that'll come sooner than we think, given we're now halfway through 2023.
Lovely.
Thanks very much for that, John. Very interesting. And yeah, John's ask is for anyone interested to get in touch.
So take him up on that offer and talk about API caching.
I'm sure it'd be a very interesting chat. And that's it. That's the end of Speed Week 2023 on Cloudflare TV.
Obviously, if you're interested in reading more, you can go to the Cloudflare blog, read all the posts.
There's about 35 posts, give or take, which is a lot of reading.
So I don't recommend you try and do it in one go.
Similarly, there's about eight Cloudflare television slots. Again, I don't recommend you watch them all back to back unless you really want to.
And there is another session, a final session after this, on the Cloudflare Radar Internet quality feature, which they launched today. I highly recommend you tune in, learn a bit more, and then head over to Radar and start clicking around.
There is so much data on there.
You could quite literally lose hours of your life comparing latencies between continents and ASes, and complaining about your latency because you find out it sucks compared to other ISPs in your country.
So have a look.
Thanks for watching. And yes, have a great weekend. Thanks, everybody.