Cloudflare TV

Cloudflare’s Speed Week. Where we’re going, we need speed

Presented by John Graham-Cumming, João Tomé
Originally aired on 

Welcome to our weekly review of stories from our blog and other sources, covering a range of topics from product announcements, tools and features to disruptions on the Internet. João Tomé is joined by our CTO, John Graham-Cumming.

In this week's program, we cover some of the announcements, metrics, and deep dives from our innovation week, Speed Week. It's all about how Cloudflare likes to be very fast, the advantages of that performance focus in different products, from Zero Trust to Workers, APIs and Pages, and how we like to prove that we're fast. We benchmark ourselves against our competitors and our previous metrics.

There's something for everyone's taste, from our global network growth — we're in 300 cities and getting closer to end users with connections to 12,000 networks (in 122 unique countries) — to how we use scalable machine learning to reduce the processing time for each HTTP request.

We introduce new tools (Cloudflare Observatory and Cloudflare Snippets), explore new architectures (rethinking cache purge), and even talk about our step-by-step guide to transferring domains to Cloudflare, now that Google Domains is shutting down.

Lastly, we present a world map predominantly orange, highlighting that Cloudflare is the fastest provider based on TCP Connection Time. We also delve into Cloudflare Radar's new Internet and Connection Quality index, which offers comprehensive metrics worldwide, categorized by country and ASN.

For all the blog posts and Cloudflare TV related segments, don’t miss the Speed Week Hub ( )

Speed Week

Transcript (Beta)

Hello everyone and welcome to This Week in Net. It's the June 23rd, 2023 special edition about our Speed Week, full of announcements and performance metrics in all sorts of blog posts.

I'm João Tomé, based in Lisbon, Portugal. With me I have, as usual, our CTO, John Graham-Cumming.

Hello John, how are you? I'm fine. I'm staying inside out of the heat.

It should be very hot in Lisbon today, right? Today it is, that's true.

More than 35 degrees Celsius. That's what they said, yeah. Sorry, but it's been a hot week on the Cloudflare blog as well, hasn't it?

That's a good transition.

A hot week and a very quick week in a sense, because it's all about speed and performance.

It is. Actually, there's this typical thing that a bunch of blog posts mention, which is that the duration of a blink of an eye is, on average, about 100 milliseconds.

Something like that, yeah. And that's referenced in a lot of blog posts over the years, mostly as a way to express Internet latency in human terms.

And the blink of an eye is too slow for the latency we want to achieve, which is something interesting.

Yeah, from an Internet perspective, the blink of an eye is actually pretty slow, right?

It's actually a long time for you to blink.

But yeah, sometimes we use it as a comparison. Exactly. So, very busy week in terms of blog posts, all about, and we already set it up last week, all about performance metrics, announcements, new products related to speed.

It's our innovation week about speed.

And the week's not over yet; we're on Friday, and a few more blog posts are coming out today.

But also a few on Monday, including the wrap up where we show all the blog posts that were published.

But what is the main feature in terms of what people should take from this week?

I think probably it's that we like to be very fast and we like to measure ourselves and prove that we're fast.

So, as well as doing our best to be fast, we like to benchmark ourselves against our competitors, against ourselves to make sure we're getting faster over time.

And you see that in our general network performance update.

You'll see that in blog posts coming out today. You'll see that in our Zero Trust performance comparisons.

We think that because of the architecture of our network with the 300 cities worldwide, we have an architectural advantage in terms of performance.

And we like to demonstrate that, we like to make sure that's true.

And so speed week, there's a lot of work around that. There's also other stuff around measurement.

There are new ways of measuring the web. So, in terms of Core Web Vitals, there's something called INP (Interaction to Next Paint), which there's a blog post about.

But there's lots of other stuff about how we make our system fast, how we make it operate.

And so, hopefully, if you're into speed, this is the week for you.

Lots of different products and announcements. True. Where should we start our journey through the 33 blog posts published by Friday, with a few more coming?

Goodness. Why don't we start with Orpheus and the underworld? Oh, Orpheus automatically routes around bad Internet weather.

Love this title. Yes. So, Orpheus is a system that we built at Cloudflare to help make sure that if Cloudflare needs to reach a backend of a website or an API or something, we can get there.

And you might think, well, that's the Internet's job to do that, right?

The Internet's job is to figure out the routing and get you to that location.

But at the network level, where we're connected all over the world now, I think it's 12,000 networks we're connected to.

You can have a problem getting to an origin server, which you might not have if you actually went by a different route across the Internet.

And what Orpheus does is it looks at Internet weather, i.e., weather problems on the Internet, and knows how to route around those problems.

And so it's an automatic system that has greatly reduced the number of those 522 errors you just showed on screen there, and also improved the experience for our customers and their customers.

Because it's incredibly annoying if your backend is working and your customer can't get to your backend because something's odd in the Internet between you.

And it's especially annoying if that thing could be routed around.

It's a bit like a traffic jam somewhere and you could go around it.

And so that's what Orpheus is about.

We have this incredible scale. And so we're able to figure out, well, we can't go this way.

We'll go this way to get to the origin server. And so that's what Orpheus is all about.
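A toy sketch of the idea, not Cloudflare's actual implementation: prefer the best path to the origin, but fail over to an alternate when a path is unhealthy. The path names here are invented.

```javascript
// Toy failover: try candidate paths to the origin in order of
// preference and take the first one that's currently healthy.
function pickPath(paths, isHealthy) {
  for (const path of paths) {
    if (isHealthy(path)) return path;
  }
  return null; // nothing reachable: this is when you'd see a 522
}

// "Bad weather" on the preferred transit route shifts traffic to a peer.
const down = new Set(['transit-a']);
const chosen = pickPath(['transit-a', 'peer-b', 'transit-c'], (p) => !down.has(p));
// chosen === 'peer-b'
```

The real system, of course, bases "healthy" on continuous measurements of actual Internet conditions rather than a static set.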

It's been in production for a long time, and now we're writing about how it operates.

There's a specific example here about users in Tampa receiving 522 errors when connecting to Chicago and what Orpheus does.

Yes, buying dog toys, I think, online.

Exactly. Yeah, a lot of examples here. And I really love there's a sentence here all about what we usually say, Internet being a network of networks, but a massive and unpredictable network of networks.

So it's unpredictable because it's complex because it's so many networks.

So it makes sense to talk about the weather of the Internet changing.

Yeah, exactly. Where should we... Let's just talk about machine learning because machine learning is such an important topic.

There's a couple of blog posts here about how we run machine learning inference in microseconds and how we run...

There's another one, which is every...

Let's do the one that's every request, every microsecond. And there's actually this dichotomy in terms of AI.

It's on the news, but machine learning is part of AI in a sense, right?

Well, yeah. I mean, I think perhaps what we're thinking about when we start talking about AI tends to be a little bit more towards things like the ChatGPT kind of use of machine learning.

And machine learning itself is perhaps a slightly larger topic.

But whether you call it AI or machine learning, I mean, machines making decisions and helping us at speed is very, very important.

If you think about Cloudflare, we do something like 46 million HTTP requests per second.

And so there's a large amount of scale. And we want the latency on those requests to be as low as possible because this is speed week, right?

We want speed as well as security. So Bocharov here is writing about bot management.

So trying to detect whether something is a bot. And we want to do it at high speed and low latency.

And so this blog post is all about how we spent a lot of time optimizing a system we built to do machine learning to recognize whether a visitor is a bot or a human.

And so if you're interested in the details of making something like this work at our scale and at our latency requirements, you see here he says the system he first built called Gagarin had a median latency of 200 microseconds.

So that's the kind of number we're really trying to get down.

But he says P99 was 10 milliseconds, which is a lot to add to the latency of a request.
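To see why the post quotes both numbers, here's a rough nearest-rank percentile calculation with invented sample values: a fleet can have a great median while the P99 is orders of magnitude worse.

```javascript
// Nearest-rank percentile: sort the samples and index into them.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based rank
  return sorted[Math.max(0, rank - 1)];
}

// 98 requests at ~200 microseconds, 2 stragglers at 10 milliseconds.
const latenciesUs = Array(98).fill(200).concat([10000, 10000]);
const p50 = percentile(latenciesUs, 50); // 200 µs: the median looks great
const p99 = percentile(latenciesUs, 99); // 10000 µs: the tail hurts
```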

So read all about how we did optimization. And the other blog post is also about optimization of machine learning work.

I think it's up from there, isn't it?

It is, yeah. Yeah, there is that one. And this, again, this one dives into some code.

And so if you take a look at how we do this stuff, we're really not just optimizing the algorithms, but also everything around them.

And so if you scroll down, you're going to see we move stuff away.

And you'll start to get into not doing allocations because memory management is expensive.

And then which algorithms you're going to use.

And so you dig into the real details. OK, how do you make this stuff really fast at our scale?

So, yeah, two blog posts there about how we make machine learning essentially almost instantaneous on our vast network.


And, again, it's in the news, but machine learning is used not just at Cloudflare but in so many companies, in so many details and tools.

I mean, we're all using machine learning systems all the time, actually.

We just don't realize that many things are making decisions for us.

I mean, a sort of simple example is, you know, it's probably the case that your phone's photo gallery allows you to search by typing in, say, "cat".

And it will find you all the pictures of cats you've taken.

That's a great example of machine learning just sort of becoming part of your daily life.

Exactly. It's on Netflix.

It's on Uber when you use Uber. All of these things. All of those things. Yeah.

And we also announced that our global network is a little bit more global.

It's reaching more than 12,000 networks. That's a lot of networks in 300 cities.

That's right. And so, you know, one of the key things about the Cloudflare network is that it is everywhere in the world.

So where our customers' customers are, we are close to those customers.

That's one way in which we get speed by being near the users.

And, you know, it makes a big difference to be close to people because the speed of light can't be made faster.

So your only real choice is to either you do less, which is what caching is all about, so you don't go such a long distance, or you get close to the end user.

So now we've added a bunch of new cities.

We're now over 300 cities, 122 unique countries. And obviously the connectivity to the Internet is very important as well, which is those networks.

And, you know, the Internet is a network of networks, as we were saying earlier on.

And you have different choices about how you reach any network in the world.

One way is to be directly connected. And if you're directly connected, then you get lower latency and you get reliability.

And it gives you also options in terms of different paths through the Internet to get somewhere.

So we've now gone to 12,000 networks.

And actually, there's a striking statistic in here about how many networks we could be connected to, right?

If you were actually directly connected to all of them.

And if you look at the statistics in here, there's something, you know, we're connected to, it says today, 12,372.

And so we're connected to about one third of all of the networks in the world.

And we're going to keep going.

So you imagine what that means in terms of the performance and the reliability of our systems.

So, you know, a great update to the scope of Cloudflare's network.

The other thing is, because our network, every machine runs every service.

When we open in a new location, everything gets faster for the end users.

No matter what. All the types of services. Everything. Yeah. Exactly.

And very different products, from Workers to Zero Trust to developer tools.

Yeah. Very different products. Security, performance, all of those things.

Everything. And also, because we're running a single stack, we have to optimize everything to be fast, right?

There shouldn't be a tradeoff where it's like, well, if I use the WAF, it's going to add half a second or some nonsense.

No. It's all optimized to be low latency.

Exactly. And there's a list of new cities where we have now data centers.

Yep. Including the US. And we all know that the US has some problems in terms of connectivity.

More data centers means better Internet in different...

And lots of big cities as well, right? I mean, you want to be where the population is.

There's a sort of simple calculation in some ways.

There's more complicated calculations to do with the architecture of the Internet where it's good to be connected.

But just being near people helps you get improved latency for more people.

And that's what we're all about. Where should we go next?

Goodness. There's so much this week that we could go over. Let's scroll up a bit.

Let's scroll up a bit and see because we're looking at some early things.

So, smart hints. That's a really interesting feature. So, we are going to use the power of our network to figure out how to hint to end users' browsers about what they should load next.

So, we have this thing called early hints. And early hints are a mechanism for telling a web browser, you should probably go download this image or this CSS now because you're going to need it in a few milliseconds.
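Concretely, an Early Hints response is an HTTP 103 carrying Link headers that name the resources to preload. A sketch of building one; the asset paths are invented:

```javascript
// Build the Link header a 103 Early Hints response would carry so
// the browser can start fetching assets before the full page arrives.
function earlyHintLink(assets) {
  return assets
    .map(({ path, as }) => `<${path}>; rel=preload; as=${as}`)
    .join(', ');
}

const link = earlyHintLink([
  { path: '/styles/main.css', as: 'style' },
  { path: '/img/hero.jpg', as: 'image' },
]);
// link === '</styles/main.css>; rel=preload; as=style, </img/hero.jpg>; rel=preload; as=image'
```

Smart Hints, as described below, is about generating those hints automatically from observed traffic rather than having the site owner configure them.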

And the thing is, we're in the middle of the communication in some ways between the browser and the server that ultimately provides the content.

Sometimes we're caching it.

But either way, we know about the content of web pages.

And so, smart hints is a tool we're introducing that allows us to look at the traffic to a website and determine, well, we know that if the user requests this particular page, we should hint automatically that they should download, let's say, this hero image or something else.

And that will actually really improve things like the Largest Contentful Paint on the website and make it a much smoother experience for the end user.

So, yeah, read all about smart hints. And if you're interested in being part of it, I think there's a sign up at the end of that blog post.

There is. And this is quite interesting in terms of what something like this can provide.

I remember this blog post from last year, I think, where we were, for example, explaining how Shopify and others leverage these technologies.

Yeah, absolutely. It seems to make a really big difference.

And obviously, Shopify did a really good job of optimization using them and got a big, yeah, look at this, 10% faster, 7% increase in conversion.

Yeah, exactly. It's a real world impact of something that seems like in the back end, but it has a real world impact for a company, in this case, Shopify.

And we can automate it. Where should we go next? Should we go to Cloudflare Observatory?

We can go to Observatory. You tell us about Observatory.

This is a new product we've introduced, right? So, I mean, this is all about... What is Cloudflare Observatory?

Well, so we have the various things within the dashboard about the performance of the website or web application you're running.

And with Observatory we're bringing it all together into a new interface where you can really understand the impact of a change, of what's happening with your website or application, around the world.

So it's a one location where you'll be able to look at all of this, all of those speed-related items in one place.

So you get the RUM data, you get the Core Web Vitals, you get metrics.

So it's really we're bringing it all into one place now called the Observatory.

And so hopefully this will let people really be able to dig into the website and understand what's happening and improve things.

It's quite important for people to understand what is happening in their websites in terms of analytics.

It joins all of those metrics together, right?

Absolutely. And I think that we've built this out over time. We've now introduced Lighthouse scores as well, and automatic recommendations.

So this should be a one-stop place to go and figure out how to improve a website's performance and also how it's performing in different regions of the world as well, right?

So you may want to understand the difference in score in Europe, U.S., South America, Asia, whatever.

So it's all part of the service in Observatory. There's also information here about plans and which regions are supported.

A lot of information for those who want to explore a little bit more about this.

And it's available now. Yeah, it sure is.

Sure is. There's also a blog post about how to use Cloudflare Observatory.

Yeah, a bit of a dive into how to use it. Should we go over there or move on?

People can read that if they want to go into the how. It's a bit of a how-to.

So let's scroll. Let's keep going. Exactly. There's also HTTP/3 prioritization.

Yes. It's a new part of HTTP/3 that we're supporting called Extensible Priorities.

We're announcing support for it today. So we always try to stay up-to-date with protocols.

We never lag. That's one of the things you can be assured of: if you're a Cloudflare customer, we know what's happening in terms of changes to the Internet.

We probably were involved in the standardization of it, and we've rolled it out.

And so Extensible Priorities is a new standard. It's rolled out on our network.

And so this allows us, and this fits in with some of the other things.

You can think about smart hints into the browser. This is now at the protocol level.

The priority on what gets delivered in what order can be changed at the protocol level.
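For reference, Extensible Priorities (RFC 9218) boils down to two signals in a Priority header: an urgency u from 0 (highest) to 7, defaulting to 3, and an incremental flag i. A minimal, simplified parser sketch:

```javascript
// Parse a Priority header like "u=2, i" per the RFC 9218 scheme:
// urgency defaults to 3; a bare "i" marks the response as usable
// incrementally as bytes arrive.
function parsePriority(header) {
  const result = { urgency: 3, incremental: false };
  for (const part of header.split(',')) {
    const item = part.trim();
    if (item.startsWith('u=')) result.urgency = Number(item.slice(2));
    else if (item === 'i') result.incremental = true;
  }
  return result;
}

const p = parsePriority('u=2, i');
// p.urgency === 2, p.incremental === true
```

A real implementation would use a full Structured Fields parser, but this captures the two knobs the protocol exposes.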

So it's a nice feature. And I think the thing that's important, again, here is that we want to make sure everything's up-to-date.

So as browsers support this, we're able to use those priorities to speed things up.

Yeah, look at this, 37% improvement in LCP. And we've done benchmarking on our own properties using it, and you can see a really big improvement.

Like I think there's an improvement here with the blog, right?

They talk about the blog, and they looked at how much difference it made to actually prioritize the things that were needed to display the web page and then delay the things that were able to come later.

So it's there. You can try it out today, and hopefully this is useful. And again, Cloudflare was involved in the standardization of this, and as always, in the actual rollout of it.

So if you want to know the details, it's a very detailed blog post from Lucas and the folks there, which explain how this stuff works.

It's all about a part of the Internet, like improving protocols.

Improving protocols, making things faster, yeah.

And it's quite amazing to see that percentage already with a new protocol feature.

A very big difference, yeah. 37%. Where should we go next?

I think the time to first byte article is kind of interesting on the left there.

So time to first byte is this measurement, which is the time between the moment your browser, or the thing you're using, requests something on the web, and the moment it gets back the first byte of the response.

So it measures, in some part, the network delay, and also some of the processing delay on the server.

It sort of gives you a sense.

And it's often used as a kind of responsiveness test, like how good is your time to first byte?

And I think the thing that's interesting here is, well, first of all, time to first byte can be gamed, because you can actually have your server send back a byte, and then a few bytes, and then hang around for the rest of it, which is silly.

But it's also a little bit decoupled from the measurements that you really care about, which are, when was the web page painted?

When was the web page interactive? When was the user actually able to use the website?

And so often, people will optimize TTFB and not necessarily get the response they want on the website.

And we looked at this. This explains the background.

We looked at some actual data about correlation between TTFB and things like LCP.

And what you see is that you can easily have a good time to first byte and a bad largest contentful paint.
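With made-up numbers, that decoupling looks like this; in a real browser the figures come from the Performance API, but the arithmetic is the same.

```javascript
// Hypothetical navigation timings, in milliseconds.
const timing = { requestStart: 100, responseStart: 180 };
const lcpMs = 4200; // when the largest element actually rendered

const ttfbMs = timing.responseStart - timing.requestStart; // 80 ms: looks "good"
const lcpIsGood = lcpMs <= 2500; // 2500 ms is the Core Web Vitals "good" LCP threshold
// A good TTFB (80 ms) coexisting with a poor LCP (4200 ms).
```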

And we did measurements. So we used actual real data from real customers sending stuff out there.

And so I think the real thing is we really feel like the RUM stuff, the real user monitoring, is way more important than TTFB.

We can probably optimize TTFB for you anyway, because we're a caching CDN, right?

If the HTML is cached. But we really care about these Core Web Vitals.

And so it's a discussion of why it's not necessarily the case that improving TTFB matters.

Because we do see cases where it's like, you have good TTFB and bad paint, right?

So it's not magic if you make TTFB better. Exactly.

It says something, but it's not the best metric in this case. We now have better metrics to make sure that you're getting the most out of things.

Yeah. It has a lot of good explanations, percentages here.

Talks a lot about it, yeah.

And there are cases where TTFB might make sense, where if you're not really dealing with a web browser.

So I think it's worth looking into what real data looks like.

And we should definitely be looking at core web vitals for what we optimize.

Makes sense. Yeah. I really love that name, Time to First Byte. As a name, it's a good name.

It's simple. It's simple to understand. It's direct. Yeah. And you have the network part and you have the server part.

But it's not really a good proxy for this web page loaded fast.

It's not a good reflection of real world performance, right?

Yeah, that's right. That's the main thing. That's right. A lot of new things also.

For example, Cloudflare Snippets is now available in alpha.

Snippets is where you can put a little tiny bit of JavaScript within the Cloudflare dashboard.

So it's not a full Cloudflare Worker, but it's for when you want to do something in the request flow that you can't do in the language we have.

We have this language for manipulating requests.

But you want to do something special. So now you can throw in a little bit of JavaScript to do something.

For example, suppose you wanted to do some sort of A-B test and you wanted to say 10% of the requests go to a particular backend.

Well, you can just write a little bit of JavaScript with Math.random() in it and just do that within our dashboard.

So that's now available.

We announced that a little while back. So go do that if you need to do things.
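The A/B idea described above might look roughly like this; the hostnames are invented, and in a real Snippet this decision would sit inside a Workers-style fetch handler:

```javascript
// Route roughly 10% of requests to an experimental backend.
// Both hostnames are placeholders for illustration only.
function chooseBackend(roll) {
  // roll is Math.random()'s output, a number in [0, 1)
  return roll < 0.1 ? 'experiment.example.com' : 'origin.example.com';
}

const backend = chooseBackend(Math.random());
// e.g. chooseBackend(0.05) === 'experiment.example.com'
//      chooseBackend(0.5)  === 'origin.example.com'
```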

And because it's the Speed Week, we also have a spotlight on Zero Trust.

And we show how we're fastest and try to prove it. So this blog post shows our metrics in terms of proving that claim that we're the fastest.

Yeah, there's a bunch of testing against Zscaler, Netskope, and Palo Alto Networks.

Looking at secure web gateway, looking at the access, the Zero Trust access.

And just looking at the response times and how much difference the Cloudflare network makes, that connectivity, the optimization of our software.

And in particular, I think our browser isolation is just ridiculously fast compared to the competition because of the way in which we architected it, because of the way in which we rolled it out across our network.

So yeah, if you're interested in Zero Trust, I don't think Zero Trust should come at a performance penalty.

And I think it does with a lot of competitors.

And we've really gone through and measured from all over the world, different scenarios to show you that the real thing we think is that poor performing Zero Trust is actually a security problem because employees will try to get around it.

Because they don't want to have to deal with the slowness. Because it's getting in the way of their job.

And so we want it to be fast and available.

And that's what it is. And so if you want to get into all the details of how we're faster and where we're faster, you can find out.

Yeah, and pretty much everywhere.

Yeah, a lot of data here in terms of the global scenario. To your point, if the experience is not good, employees will often just disconnect when they don't strictly need it to access an application.

And that's not a good perspective.

That's not good. Or there's a productivity hit, right? Exactly, exactly.

And VPNs are even worse, I think. Yep, all VPNs are part of that, yeah.

So that's Zero Trust for you. Should we go to Cache Purge? Yeah, I mean, we've completely re-architected our Cache Purge mechanism.

So the way in which Cache Purge used to work was you would make an API call saying, let's say, purge this URL.

And that would go to a core data center.

And that core data center would go out and talk to every data center in Cloudflare and say, get rid of this, get rid of this, get rid of this.

And that was slow. And that was the original Cloudflare design right from the very beginning.

Very simple when we had like five data centers. Now we have 300.

That just doesn't work. So what we did was we re-architected this in a very interesting manner, which is that using Cloudflare Workers and durable objects, there is no single point at which the purge starts.

If you start a purge in Australia, it will fan out from Australia.

If you start one in Europe, it will fan out from Europe.

And first of all, that's architecturally more interesting because it means any part of the system can start a purge, wherever the API call happens.

But even more importantly than that, the fact that these purges fan out like this, it means you can get incredibly fast purging in a region.

And we know from data that you're talking like under 50 milliseconds.

So, for example, let's suppose you're a European customer and all of your customers are in Europe.

You'll no doubt want your assets purged globally, but most of your clients will be in Europe.

So what happens if you do the purge in Europe, it's likely that under 50 milliseconds, every European data center has been cleared out.

And then globally, it's something like 200 milliseconds.

So that architectural change is something that we've worked on.

There's an early post about this by Alex, and now we've updated it.

This is in production. So we have this very, very fast purging. It also gives you a sense of how fast our control plane is because you're talking about sub 200 milliseconds globally or something like that.

Our control plane is able to send messages across this massive global network very, very quickly.

And one of the interesting tradeoffs is if you have a small network, like we did in the beginning, you only have a small number of data centers.

Doing something like this is relatively easy because you only want to talk to a small number of them.

As you get bigger, it becomes more challenging to do a global purge and do it fast.

Hence the re-architecting.

Exactly. Makes sense. Again, always learning. And this is the first blog post you mentioned, the part one from last year, actually.

More than a year ago, that was the first approach here.
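For reference, the purge-by-URL API call that kicks all of this off is a POST to the zone's purge_cache endpoint; the zone ID and file URL below are placeholders.

```javascript
// Build (but don't send) a purge-by-URL request per Cloudflare's API.
const zoneId = 'ZONE_ID'; // placeholder for your zone's identifier
const purgeRequest = {
  method: 'POST',
  url: `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ files: ['https://example.com/image.png'] }),
};
```

Whichever data center receives that call is now where the fan-out begins, rather than a core data center.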

And we still have a few more things to go over.

Why not go to Brotli? Brotli, yes. So Brotli is a compression system created by Google originally.

And the interesting thing about Brotli is that it provides better compression than GZIP in many scenarios.

GZIP is widely used on the Internet for compression. Compression is enormously important for transmitting stuff over the Internet because it reduces the number of bytes and improves response time.

But Brotli is quite expensive to do on the compression side of things.

And so we do Brotli at level four. There are actually 12 levels of Brotli compression, zero through 11.

And we looked at this a while back, a few years ago, and opted to do compression level four.

It felt like a good balance between how many bytes you save and the compression time.

Because if you think about when you do compression, you might say, well, I want to compress it the maximum I can because then the transmission time will be lower.

But if the compression time becomes higher, the sum of those two things might actually be larger, and you would have lost the benefit of the lower transmission time in the compression time.

So we looked at what the balance was to give you the overall best latency.

That was Brotli level four for us.

So that was something we've had for a while. But we've added a new feature, which is that if your origin does Brotli compression, and this can be particularly good for static assets because you can go to the maximum, level 11, you might compress everything with Brotli 11 on your server.

And we will support that end to end.

So in the cases where we don't need to decompress the response to pass it through our system, which is very often, we will allow the Brotli compression level 11 to go all the way through.

So this gives you an additional speed up.

And we talk about how the system operates, the situations in which it won't operate, how to set up your server.

But the compression ratios are really, really good.

I mean, Brotli level 11 is really very, very powerful. It's just a little bit slow to do.

Hence, we're supporting it end to end from the server. There's a lot about implementation, introducing compression rules.

It's a deep dive also into this area if you want to understand a little bit more about Brotli.

Yep, yep.

And as this post says, we're also looking at Zstandard as a possible next compression algorithm.

And the other thing is Brotli uses a static dictionary of how it compresses, which was designed to be good for web content.

We have a vast amount of data on our network about what really gets transmitted on the Internet.

And so we're looking at how we might provide different dictionaries. And again, we'll be working with the group on that to look at how we might bring Brotli further by changing the compression dictionary.

Exactly. And compression rules are available now.

Yep, you can specify any compression rules. You can say, I want this compressed like that.

And we will do it for you. And end-to-end Brotli is going to be rolled out over the coming weeks.

Yep, yep. More compression. We also have, before we go, a new tool in Cloudflare Radar, right?

We do. That's getting announced today.

I think that's not quite ready to go yet, but this will be a sneak preview.

You'll be able to find this when it comes out, which is Internet quality.

So we're looking at, we wrote a few weeks ago about measuring the quality of Internet connections, not just in terms of bandwidth, which is what everybody commonly thinks about, but also in terms of latency and jitter.

Jitter being how much the latency changes over time.
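As a toy illustration with invented numbers, one simple way to quantify jitter is the mean absolute change between consecutive latency samples:

```javascript
// Jitter here is the average absolute change between consecutive
// latency measurements; other definitions exist, but the idea is
// the same: how unsteady is the connection over time?
function jitter(latenciesMs) {
  let total = 0;
  for (let i = 1; i < latenciesMs.length; i++) {
    total += Math.abs(latenciesMs[i] - latenciesMs[i - 1]);
  }
  return total / (latenciesMs.length - 1);
}

const steady = jitter([20, 21, 20, 21, 20]); // 1 ms: great for video calls
const spiky = jitter([20, 80, 20, 80, 20]); // 60 ms: same ballpark average, bad jitter
```

Two connections can have similar average latency while feeling completely different in a real-time call, which is exactly why the index tracks jitter separately.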

And we can do this. You can test your own connection.

You can find out the Internet quality index for it, which will give you an idea of these multiple measures, right?

How fast can you download?

How fast can you upload? What's the latency like? And what's the jitter like?

And those things affect how your Internet connection works depending on the application you're doing, right?

So if you're browsing the web, you probably can cope with a lower amount of bandwidth and a bit of latency.

If you're doing real-time video, you probably want the jitter to be really low, so you just get a very, very clean connection.

Again, if you're playing online gaming, you probably want the latency to be really low so that you're in the game as much as possible.

So there's a new Internet quality page, which will allow you to look at country level and autonomous system, i.e.

network level, and will give you data about what bandwidth we're seeing, the latency we're seeing, how we're seeing those networks, right?

So you should be able to dive in there and do some comparisons around the world and understand what download speeds are like, what latency is like.

And you can do it at the individual network level.

You can look at your own ISP, for example, and see what they offer.

Can we go into the page itself and show it? I don't know if it's actually online yet.

I think we might be too late. It is. It is. It is. All right. All right. Here we go.

Let's do it. So worldwide perspective, you have the continents. You can see bandwidth by continent, latency, and DNS response.

And this part shows you also third-party targets, not only Cloudflare's.

And the part down here is the connection quality.

It's aggregated Cloudflare speed test results. So no third-party data, just the previous 90 days.

And you have here the perspective worldwide. You can scroll and see download speeds, upload speeds, latency.

When I was looking at this, I was seeing Mongolia as an island of good performance here in terms of latency.

Yeah. And I talked with Tom Paseka from our network team, and he was explaining to me that in Mongolia...

Almost everyone lives in one city, at least in percentage terms.

So the population is very much concentrated in one part of the country, which helps nearby data centers provide better latency.


Very, very interesting. So you can look at this. And if you scroll down, there's a bunch of other data in the rest of the page.

And when you go into an individual network, this is the little graph I was looking at. So if you look at this, there's an interesting little chart here.

And this is a way we're going to be representing these four axes, right? Download, upload, latency, and jitter.

You can compare different networks and see how those characteristics look.

So this is the global look, and you can see it on the front page.

You can then look at bandwidth we've observed, latency we've observed.

And it's really interesting if you look at an individual ISP, because sometimes the different plans they offer show up in the tests.

Because you'll see a whole bunch of tests at a particular bandwidth. Like here, for example, this is a particular ASN in France, and you can see they must be offering a plan that's around 300 megabits per second, right?

And there's another peak around 100 megabits per second.

And they must have a very fast offering that's getting up to about one gigabit per second.
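Those plan-tier peaks are easy to reproduce with synthetic data. This sketch buckets made-up speed-test samples for a hypothetical ISP with roughly 100, 300, and 940 Mbps plans; real Radar charts are built from actual aggregated tests, but the same peaks emerge:

```python
import random
from collections import Counter

# Synthetic speed-test results: three hypothetical plan tiers, each with
# some natural spread around the advertised speed.
random.seed(1)
samples = ([random.gauss(100, 8) for _ in range(500)] +
           [random.gauss(300, 20) for _ in range(300)] +
           [random.gauss(940, 50) for _ in range(100)])

# Bucket into 50 Mbps bins; each plan tier shows up as a peak.
buckets = Counter(int(round(s / 50)) * 50 for s in samples)
for mbps in sorted(buckets):
    print(f"{mbps:>4} Mbps | {'#' * (buckets[mbps] // 10)}")
```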

So you can actually see a lot of data about how networks are operating on these graphs, what the latency looks like.

And it's interesting to dig in.

So hopefully this will improve people's understanding of different networks around the world, different countries around the world, the sorts of access they get.

And you can compare your own Internet connection to the best in your country, for example.

True. For me, it's amazing. You get a sense of what to look for in your country and things to explore.

It's quite amazing.

So the blog post is out today. By the time this airs, it'll be out. And this page will also be available.

Yep. Actually, I think we still have time for one more thing, which is a step-by-step guide on how someone can transfer their domain.

Here it is. Step-by-step guide to transfer domains to Cloudflare.

Right. So Cloudflare offers a domain registrar at cost, i.e., we don't charge any more than we are charged by ICANN for the domains.

And you can transfer your domains into Cloudflare.

We put this up particularly because Google announced that it's getting out of the domain registration business.

If people want to go to a different registrar, you can.

And this is just an absolutely step-by-step, this-is-how-you-do-it guide, every step of what you need to know if you want to transfer a domain.

Because transferring a domain can be a little bit of a scary thing, right?

It's a little bit like changing your mobile phone number to a different provider.

A domain name is a little bit sacred, like a phone number. And so this tells you everything you need to know about how to do it.

It is not complicated. I know there's a lot of steps here, but this is laid out in real, real detail.

So you should be able to transfer over and hopefully people will find our at-cost domain registrar something very attractive.
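One practical check before any transfer is the registrar transfer lock, which shows up in WHOIS output as a clientTransferProhibited status; you unlock the domain at the current registrar and request an authorization (EPP) code before moving it. A minimal sketch against hypothetical WHOIS text:

```python
# Hypothetical WHOIS output; a real lookup would come from a whois client.
whois_output = """\
Domain Name: EXAMPLE.COM
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Registrar: Example Registrar, Inc.
"""

def transfer_locked(whois_text: str) -> bool:
    """True if the domain still has a registrar transfer lock set."""
    return any("clientTransferProhibited" in line
               for line in whois_text.splitlines()
               if line.lower().startswith("domain status"))

print(transfer_locked(whois_output))  # lock still on: unlock before transferring
```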

True. And to your point, we've got a lot of requests on social media.

Even our customer support got, hey, how do I transfer my domain?

And here is the explanation. It's quite easy. Most of these steps are to cover all of the possibilities.

And we also got a lot of requests about .dev.

And we announced here that .dev and .app will be supported starting mid-July.

We're working on that right now. And the Google Domains part will close in September, I think.

So there's still time for this. Yes. And I think that's it. Should we mention another one of these blog posts? We didn't mention so many things.

There are so many. I think we should let people wait until Monday when they can read the Speed Week wrap-up blog and they'll know all about it.

Exactly. A lot of things.

Globally distributed AI and a Constellation update. The typical network performance update for Speed Week.

There's a lot here also in terms of countries where we are the fastest, in a sense.

Amazing things to explore, I think. Lots to read.

Yes. And here is the orange. It's an orange world. Exactly. And you just have to go to this page that sums up everything: speed-week. There you go.

It's all there. It's all there. This was a busy segment in a busy week.

A few more blog posts next week, so we'll cover that in the next program. Exactly.

See you, João. That's a wrap.
