Originally aired on June 23 @ 12:00 PM - 12:30 PM EDT
Welcome to Cloudflare Speed Week 2023!
Speed Week 2023 is a week-long series of new product announcements and events, running June 19 to 23, dedicated to demonstrating the performance and speed-related impact of our products and how they enhance customer experience.
Tune in all week for more news, announcements, and thought-provoking discussions!
Read the blog post:
Visit the Speed Week Hub for every announcement and CFTV episode — check back all week for more!
Hello everybody, welcome to Friday. It is the final day of Speed Week 2023. In total today we announced five blogs: one covering end-to-end Brotli support; a great post on Cloudflare Radar's Internet quality feature, which also has a dedicated Cloudflare TV slot immediately after this one; a post on accelerating APIs with a product called Ricochet; a post on Pages and how it's the fastest way to serve sites; and a great post on Eurovision and how they built their voting platform on Cloudflare. I'm proud and pleased to say we've got three authors of those posts joining us today on this final slot of the week. Let's start with Matt Bullock on Brotli end-to-end. Matt has been on the show at least twice now, talking about Observatory on Tuesday and Snippets on Wednesday, and he's back again today (we can't get him off) talking about Brotli. So Matt, can you introduce who you are and what your role is, and talk a little bit about Brotli and what it is we're announcing today?

Yeah, so I'm Matt Bullock. I am the product manager for FL, which stands for Front Line: all of the configuration you put into Cloudflare through your dashboard or API, it's the service that applies that configuration to your traffic. Today I'm going to be talking about Brotli, which we've supported for a long time from Cloudflare to the eyeball, but now we're announcing it from your origin to Cloudflare, so you can now do Brotli end-to-end.

So that's probably the first question, right: if we've already had this from eyeball to Cloudflare, the browser connecting to Cloudflare, then what's the benefit of having support for it from Cloudflare to the origin, to the servers?

Yeah, so obviously this week we've talked about how serving bytes as fast as possible is paramount to website performance. The quicker the browser receives the bytes and can put them together, the quicker the user can interact. One way to reduce bytes is compression: squeezing the content down as much as possible. A simple way of thinking about this: if you're sending a parcel through a delivery company, the smaller you can make that parcel, the cheaper it is to send, and it will probably ship quicker too; you can put it in the post rather than on a boat. So a big part of this is that we are now allowing customers to compress with Brotli, a compression algorithm built by Google around ten years ago. Customers can compress at their origins, as many already do, and send that to us in a heavily compressed format that we can then either decompress to apply our features and functionality, or serve straight to the end user. So it's really about reducing bandwidth, and reducing the time it takes for the bytes to arrive at the eyeball, or at Cloudflare.
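To make the size difference concrete, here is a minimal sketch (an editorial illustration, not code from the blog post) that compresses the same payload with gzip and with Brotli using Node's built-in zlib bindings; the payload and the quality setting are arbitrary assumptions:

```typescript
// Compare raw, gzip, and Brotli sizes for one payload using node:zlib.
import { brotliCompressSync, constants, gzipSync } from "node:zlib";

// Hypothetical payload: repetitive JSON, which compresses well,
// much like typical HTML or API responses.
const payload = Buffer.from(
  JSON.stringify(Array.from({ length: 1000 }, (_, i) => ({ id: i, status: "ok" })))
);

const gzipped = gzipSync(payload);
const brotlied = brotliCompressSync(payload, {
  // Quality 11 is Brotli's maximum: smallest output, slowest to compress.
  params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
});

console.log(`raw:    ${payload.length} bytes`);
console.log(`gzip:   ${gzipped.length} bytes`);
console.log(`brotli: ${brotlied.length} bytes`);
```

Fewer bytes leaving the origin is exactly the egress saving discussed next.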
Yeah, I think that last part is worth touching on. Most people understand that compression makes a file smaller, and obviously the main benefit there is that a smaller file takes less time to send from the server to the browser. But the other element of this is bandwidth fees, right? Can you touch on egress and how compression is going to help save people money?

Yeah. Most people that have an origin of some description in the cloud will be charged bandwidth for data leaving their origin to the end user. That end user could be an eyeball directly, or it could be Cloudflare; it's anywhere that is asking for the information. When you send those bytes down, that is egress from your origin. So if you don't compress those files then, going back to the shipping analogy, they're very large and cost more to ship. If you can compress them down heavily at the origin and make that package smaller, then a smaller file comes out of your origin. And if you tally that up over a thousand, a million requests, it becomes a hell of a lot cheaper to serve your content. You're reducing bandwidth and reducing egress from your origin, so you're saving money, which is a nice bonus to have in the current market.

So given that, I think the question on most people's minds (and we've already had one person asking on Twitter a minute after the post went live) is: A, who gets access to this, which plan types, and B, when do they get access? What does the rollout plan look like?

So everyone will get this. The way this works is that there is an HTTP header called Accept-Encoding, which we currently pass to the origin, and today it just contains gzip: we accept gzip or uncompressed. With the change, we will say we accept gzip and Brotli (br). As soon as the origin receives that header, it can decide whether to send gzip, uncompressed, or Brotli.
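To illustrate the negotiation, here is a minimal sketch of what an origin might do with that header (again an editorial illustration, not Cloudflare's or any particular server's implementation): serve Brotli when the requester advertises br, and fall back to gzip or an uncompressed response otherwise.

```typescript
// Minimal origin that negotiates the response encoding from Accept-Encoding.
import { createServer } from "node:http";
import { brotliCompressSync, gzipSync } from "node:zlib";

const body = Buffer.from("<html><body>Hello from the origin</body></html>");

createServer((req, res) => {
  // With end-to-end Brotli, Cloudflare sends "Accept-Encoding: gzip, br".
  const accepted = req.headers["accept-encoding"] ?? "";
  if (accepted.includes("br")) {
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "br" });
    res.end(brotliCompressSync(body));
  } else if (accepted.includes("gzip")) {
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "gzip" });
    res.end(gzipSync(body));
  } else {
    // Uncompressed fallback for requesters that advertise neither.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(body);
  }
}).listen(8080);
```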
Currently we have this enabled for Free plans in our canary data centers, so we're just monitoring and making sure everything is working fine, with no hiccups or issues. In a couple of weeks we'll enable it globally, so Free plans will see the new Accept-Encoding from all of our data centers. Then we'll go to Pro and Business plans pretty much a week after that, and Enterprise will follow a few weeks later. So in a month's time, maybe two, hopefully a month, everyone will start seeing this and being able to benefit from it, and everyone will be able to send us Brotli-compressed content.

Nice, excellent. And if you want to learn more, head over to the Cloudflare blog; I think it's the top post in the section right now. Have a read, there's a lot of really interesting information in there, including a really good section at the end on custom dictionaries, or shared dictionaries, which is the next thing we'll look at for Brotli once this rolls out. So thank you very much, Matt.

Next we've got Sid: show number one, rookie numbers. Can you start off and introduce yourself, what it is you do here at Cloudflare, and then move on to what it is you're announcing today, what your post is about?

Absolutely, thank you, Sam. My name is Sid. As you mentioned, I am a principal systems engineer at Cloudflare and I lead the Pages team. I'm based in London and I've been at Cloudflare for a little over a year now. Today we're announcing that Pages is a lot faster than it used to be. It's always been fast, but we've made a massive infrastructural change under the hood, and every single Pages site has gotten up to about ten times faster for time to first byte. This is all live, it all happened under the hood, and there's nothing that folks need to do.

About a couple of months ago, we realized that the more deployments you had on your Pages project, the higher the latency was to find the right deployment and route to it when you hit a URL. That amounted to almost punishing users for using the platform more, which seemed pretty backwards. We started working on the problem, and after talking to a bunch of other teams internally, we realized that Workers for Platforms, another core product we've been building on the Workers side, solves this really elegantly. So that's what we did: we adopted Workers for Platforms under the hood and migrated every single deployment over. Now time to first byte is under about 40 milliseconds anywhere in the world.

Nice. I was reading the post earlier and saw those numbers, how we got from over 600 milliseconds down to 60, 40 milliseconds. At a high level, can you explain how moving to Workers for Platforms produced those numbers? What was it about Workers for Platforms that showed such incredible results?

I'm glad you asked. Let's talk about before and after. Before all of this, the way Pages worked under the hood is that when you hit a project's domain, depending on which subdomain you're hitting (which might map to a preview deployment, an alias deployment, or a branch alias), we had internal routing tables that we needed to look up and iterate through to find the right Worker, the right pipeline, to route you to. This is how Workers works under the hood as well, and it's how Pages started out. The reason this hasn't been a problem for Workers is that not every Workers deployment lives immutably forever, but on Pages they do: every single deployment you make is an immutable URL that never goes down, which means every single commit you push results in a new one. So that lookup through the routing table is O(n): the larger it gets, the more deployments you have, the more things there are to search through to find the right pipeline. Workers for Platforms flips this equation the other way around. Every Worker pipeline is actually hashed by hostname, which means the lookup becomes O(1); it's no longer a function of how many deployments you have. And Workers for Platforms was designed to do this because, as the name suggests, it's built for building large platforms on top of it. So that lookup is no longer O(n), it's O(1), and it's much, much faster regardless of how many deployments you have.
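As a rough sketch of the data-structure change Sid is describing (illustrative only, with made-up types; not Pages' actual code), the difference is a linear scan of a routing table versus a direct keyed lookup:

```typescript
// Hypothetical shapes standing in for Pages' internal routing state.
interface Pipeline {
  workerId: string;
}
interface Route {
  hostname: string;
  pipeline: Pipeline;
}

// Before: O(n). Every immutable deployment adds a row, so lookups get
// slower the more you deploy.
function lookupLinear(routes: Route[], hostname: string): Pipeline | undefined {
  return routes.find((route) => route.hostname === hostname)?.pipeline;
}

// After: O(1) on average. Pipelines are keyed (hashed) by hostname, so the
// cost no longer depends on how many deployments exist.
function lookupHashed(routes: Map<string, Pipeline>, hostname: string): Pipeline | undefined {
  return routes.get(hostname);
}
```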
Nice. And the main payoff of that is performance. At the bottom of the post today, we compare against some of the alternative products, like Netlify and Vercel. For people who haven't seen the blog, can you talk us through what those results were, and how you went about testing and comparing those numbers?

Absolutely. We ran a bunch of WebPageTest runs across the same site deployed to Netlify, Vercel, Pages, and so on. I think a big reason Pages ended up being faster is also just how quick our CDN is and how many data centers we have. That, combined with the speed of looking up a Pages project and routing you to the right deployment and the right Worker, means that pretty much throughout the world, whether you're on mobile 3G or desktop cable, your time to first byte is the lowest on Pages for the same site deployed across all these platforms. That's been a huge win for us. And it's only going to get faster with other projects like Flame; we intend to shave many more milliseconds off this.

Nice. So I guess the last question is: if I want to learn more about Pages and Workers for Platforms, or I have questions about them, where should I start?

Well, I would read the blog post, because it goes in depth into a lot of the things I've briefly touched on. Beyond that, our docs are a good place to start: Workers for Platforms has developer docs on developers.cloudflare.com, and so does Pages. And hey, I learn best by trying things out, so maybe build a site, try out Pages on pages.dev, and see how you feel.

Nice, yeah, I echo that. I think it's the best way to do it, right? Try it, break it, learn, break it again. Thank you very much.

John, the last person on Cloudflare TV for Speed Week; there's an honor in there somewhere. Can you, as usual, introduce yourself and what it is you do here at Cloudflare, and talk a little bit about what you're announcing today?

Yeah, Sam, happy to. My name is John Cosgrove. I'm the product manager for Cloudflare's API Gateway. Customers with API Gateway today already gain a lot of API security and management benefits. One of those is that we catalog your endpoints, with all their variables and so on, and managing endpoints is much simpler when it's all condensed in one place. But today we're going to leverage this API knowledge for caching APIs, with a new feature called Ricochet. Look, people have been caching static content with Cloudflare for ages, but you don't see it that much with APIs. So with Ricochet, our goal is to increase the amount of caching, API caching specifically, on the Internet by making it simpler and safer to cache APIs, so that we can all benefit from faster response times and snappier apps.

So question one, for people who aren't aware: what is an API gateway, and why do you need one?

Yeah, certainly. An API gateway acts as the front door to your API. You could have lots of disparate teams managing lots of disparate applications, and you need a single, consistent way to interact with that API no matter who's building it. It's much like the classic reverse proxy use case with Cloudflare on the web. API gateways handle authentication, authorization, and security, and really give API developers a way to keep control of and visibility into their APIs.
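As a toy illustration of that front door idea (an editorial sketch with hypothetical backend names, not how API Gateway is implemented), a single Worker at the edge can authenticate every request once and then route it by path to whichever team's backend owns that slice of the API:

```typescript
// Hypothetical mapping of path prefixes to the owning teams' internal origins.
const BACKENDS: Record<string, string> = {
  "/users": "https://users.internal.example.com",
  "/orders": "https://orders.internal.example.com",
};

export default {
  async fetch(request: Request): Promise<Response> {
    // Centralized authentication: every disparate backend gets it for free.
    if (!request.headers.get("Authorization")) {
      return new Response("Unauthorized", { status: 401 });
    }

    // Classic reverse-proxy step: forward to the backend that owns the path.
    const url = new URL(request.url);
    const prefix = Object.keys(BACKENDS).find((p) => url.pathname.startsWith(p));
    if (!prefix) {
      return new Response("Not found", { status: 404 });
    }
    return fetch(BACKENDS[prefix] + url.pathname + url.search, request);
  },
};
```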
And with Ricochet, and the plan to essentially build caching into this, it's quite a novel and quite a difficult problem. From the CDN side of things, API caching has long been a difficult issue to solve: A, because there's a huge amount of dynamic content in there, which arguably makes it impossible, and B, because of, like you say, the changeability of it. So can you talk through what some of the challenges have been, or are going to be, with caching APIs, and how the product might unlock that?

Certainly. Dynamic content and APIs go hand in hand, so you're thinking, well, how could I even start to cache this stuff? Our initial use cases center around two things. One, it's really hard to cache APIs where multiple users with mixed authentication hit the same endpoints, because if you cache that data naively, you'd be sending my personal data to Sid, right? And nobody wants that. So we're going to focus on that use case, as well as cases like GraphQL, where all the requests go to a single endpoint and the variation is actually in the request body, in the POST data. There you can have anonymous access, but you can't really cache unless you do something extra to know which data in the cache you can serve in response to which requests. I think there's some low-hanging fruit with these two things that we're going to focus on first: things like weather forecasts and current conditions, airline flight tracking and flight status, live sports scores, and collaborative tools where everybody is requesting the same data at the same time. API teams know that these sorts of things would benefit from caching, and I'm sure some teams are caching these APIs already, but we really want to open it up so that almost anybody can benefit from API caching.
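As a sketch of the kind of extra work John is alluding to (purely illustrative, built on the standard Workers Cache API; this is not Ricochet, whose design hasn't shipped), a POST response can be cached safely by folding both the caller's identity and a hash of the request body into a synthetic cache key:

```typescript
// Cache a POST API response keyed by (identity, body hash), so one user's
// data is never served to another and distinct GraphQL queries don't collide.
export default {
  async fetch(request: Request): Promise<Response> {
    if (request.method !== "POST") return fetch(request);

    // Assumption: the Authorization header stands in for the caller's identity.
    const identity = request.headers.get("Authorization") ?? "anonymous";
    const body = new Uint8Array(await request.clone().arrayBuffer());

    // Hash identity and body together so the key is opaque and fixed-length.
    const material = new Uint8Array([...new TextEncoder().encode(identity), ...body]);
    const digest = await crypto.subtle.digest("SHA-256", material);
    const hash = [...new Uint8Array(digest)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");

    // The Cache API matches on GET requests, so build a synthetic GET key.
    const url = new URL(request.url);
    const cacheKey = new Request(`${url.origin}${url.pathname}?key=${hash}`);

    const cache = caches.default;
    const cached = await cache.match(cacheKey);
    if (cached) return cached;

    const origin = await fetch(request);
    // Copy to make headers mutable; short TTLs suit volatile data like scores.
    const response = new Response(origin.body, origin);
    response.headers.set("Cache-Control", "public, max-age=30");
    await cache.put(cacheKey, response.clone());
    return response;
  },
};
```

The point of making a product of this, per John's stated goal, is to do this kind of keying simply and safely by default rather than having every team hand-roll it.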
Yeah, and I think the question on everyone's mind now is: how do I get access to this? What's the plan? I know the blog mentions next year, because obviously this is quite a big, difficult challenge to tackle. What does the roadmap look like? What do the timelines look like?

Yeah, I think we've got good line of sight to making this happen. Right now we're in a prioritization exercise with all the other great new features we want to ship with API Gateway. It's a rather new product for us, so there are lots of things battling for priority. But I would really love to hear from people out there on whether their use cases are among the ones I just mentioned, or if they've thought of something that we haven't. With all of that in mind, we can really start to get the product going in early 2024 and hopefully deliver it later next year.

Nice. That'll come sooner than we think, given we're now halfway through 2023. Lovely, thanks very much for that, John, very interesting. And John's asked anyone interested to get in touch, so take him up on that offer and talk about API caching; I'm sure it would be a very interesting chat.

And that's it, that's the end of Speed Week 2023 on Cloudflare TV. If you're interested in reading more, go to the Cloudflare blog and read all the posts. There are about 35 posts, give or take, which is a lot of reading, so I don't recommend you try to do it in one go. Similarly, there are about eight Cloudflare TV slots; again, I don't recommend you watch them all back to back unless you really want to. And there is a final session after this one on the Cloudflare Radar Internet quality feature, which I highly recommend you tune into. Learn a bit more, then head over to Radar and start clicking around; there is so much data on there. You could quite literally lose hours of your life comparing latencies between continents and ASes, and complaining about your latency because you find out it sucks compared to other ISPs in your country. So have a look. Thanks for watching, and yes, have a great weekend. Thanks, everybody.