⚡️ Speed Week Spotlight: Observatory & Experiments
Presented by: Sam Marsh, Matthew Bullock
Originally aired on July 24, 2023 @ 8:00 AM - 8:30 AM EDT
Welcome to Cloudflare Speed Week 2023!
Speed Week 2023 is a week-long series of new product announcements and events, from June 19 to 23, dedicated to demonstrating the performance and speed-related impact of our products and how they enhance customer experience.
Tune in all week for more news, announcements, and thought-provoking discussions!
Read the blog post:
- Faster website, more customers: Cloudflare Observatory can help your business grow
- How to use Cloudflare Observatory for performance experiments
Visit the Speed Week Hub for every announcement and CFTV episode — check back all week for more!
Speed Week
Transcript (Beta)
Hello, everybody. Welcome back to the second Cloudflare TV session of the day. Earlier today, we were joined by John, William and Lucas discussing timing insights, HTTP/3 prioritization and something called Interaction to Next Paint, which is going to join the Core Web Vitals next year.
I highly recommend after this broadcast finishes, you go over to the Cloudflare blog and give those posts a read if you haven't already.
Today, however, and in this session specifically, we are talking all about Observatory, which is our new application performance home at Cloudflare, which launched today.
To discuss Observatory, and something called Experiments, which is going to be a fast follow, I'm joined by Matt Bullock, one of our PMs.
Matt, can you introduce yourself, talk a little bit about what your role is at Cloudflare?
And then, yeah, give us an overview of what we've announced today, what we've shipped today.
Yeah, so hi, everyone. I'm Matt Bullock and I'm Product Manager for Speed.
And today, yeah, we're going to be talking about Cloudflare Observatory.
So this is the new home for monitoring the performance of your websites, understanding real user measurements, and getting a whole load of feedback and understanding of how your website's performing and what you can do to improve it.
So, yeah. Nice. Yeah. And that went live about four hours ago now.
I know we've been super keen to kind of get this out for weeks now, if not more.
Can you kind of give me an overview of how it's different to what was there before, you know, what was known as the speed test, and how this differs from that?
What does this give people that they didn't have before?
Yeah, sure. So with the original speed test, or the speed tab, when you joined Cloudflare and had a new zone, it would run a speed test and give you a sense of how fast your site is and how fast it would be with Cloudflare, with a number of features enabled.
And pretty much that was it. So you could be brand new on your Cloudflare journey and it's like your site was this and you could be this.
And 24 hours later, you could make a change to your site. It didn't repeat that.
You didn't understand, like, did that have any impact? Am I still as fast?
Am I slower? Am I faster? So there were all these questions. And when we were talking to customers, the feedback was: rather than just having a point in time, it would be great to understand constantly how my site's performing.
If I'm making a change, what impact has that had? Has it been positive? Has it been negative?
So from all of this feedback, we created this whole new Observatory speed tab and built a load of tools around it for you to use.
And in terms of the tests, A, being able to do that repeatedly.
But secondly, we can now do that from multiple locations, right? Can you kind of talk a little bit around that and some of those locations and how you set those up and so forth?
So, yeah. So the first thing we did: originally we were using WebPageTest, sort of an architecture to run the tests.
That's what you saw in the speed test dashboard. That was just running from a single location in America, in one of our Cloudflare data centers.
So for people that had customers in America, that was great to get a snapshot, but we're both in or close to London, and we have customers all over the world who don't have many customers in the US.
It wasn't great. So a big piece of feedback was I would like to run speed tests or understand speed for customers or how they experience my website in the geos I care about.
So when we were looking at redesigning Observatory, we moved from WebPageTest to Google Lighthouse.
So Google Lighthouse has really become the industry standard of how you measure performance for a website.
They have web.dev, which is great for a load of insights and information on how they measure performance and what the tools do.
But in essence, a Lighthouse test gives you a score.
It's really easy to understand. It's between 0 and 100.
And the higher the score, the better. So similar to when you had an exam: the higher the score, the better you did.
So that was sort of, it's really easy to understand.
And it also gives you outputs of what's good and what's bad and what you can improve upon.
So real simple and easy to understand performance for the masses.
I could probably give my report to my dad, he'd be able to understand if it was good or bad.
It's probably a long shot, but you know, it's probably better than seeing a filmstrip or waterfall of where things are running and loading.
So with that, because Lighthouse is open source, we're able to almost just create an image of the Lighthouse test and deploy it around the globe, and customers are able to choose, say, Singapore or Sydney or Japan, wherever they have customers or are closest to, and run the synthetic Lighthouse tests from those locations.
And you can run tests ad hoc, which is just "give me a test now and a result", or you can schedule them.
So if you are Pro and above, you can run it daily. On our free plans, you're also able to schedule a test weekly, something you couldn't do on the original speed test.
So you're constantly able to get results and track them, and run multiple tests from different regions, on different paths, around the globe.
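(For readers who want to see what one of these synthetic checks looks like outside the dashboard: below is a minimal sketch using the open-source lighthouse npm package that the tests described above are based on. The target URL, flags and options shown are illustrative assumptions, not how Cloudflare runs its tests.)

```ts
// Minimal sketch: run the open-source Lighthouse engine once against a URL
// and read the 0-100 performance score. Requires the "lighthouse" and
// "chrome-launcher" npm packages; https://example.com is a placeholder.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,               // drive the Chrome instance launched above
  onlyCategories: ['performance'], // skip the SEO/accessibility/best-practices audits
  output: 'json',
});

if (result) {
  // Lighthouse reports category scores as 0-1; multiply for the familiar 0-100.
  const score = (result.lhr.categories.performance.score ?? 0) * 100;
  console.log(`Performance score: ${Math.round(score)}`);
}

await chrome.kill();
```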
So that's kind of what we classify as synthetic tests, right?
So that lets you enter a URL, run a test ad hoc, like you say, or kind of scheduled and get a score effectively, right?
So that would tell you, like you say, from Singapore, you might get 88, from some location in America, you might get 97.
And from South America, you may get like 30.
And that will highlight where you may need to kind of improve or where you may have poor user experience.
But the other side of it, which we haven't really touched on yet is RUM data, real user monitoring, measurement, metric, everyone's got a different M in RUM.
But can you talk to us about how RUM fits into Observatory as well?
Yeah. So there are two parts to measuring website speed.
One is obviously synthetic, like these Lighthouse tests that give you a score.
But the other big one is your real users: when a user goes to a site, what do they experience?
And again, Google defined a way of measuring this. We use real user measurements, which feed back from the browser how quickly a website loads, across different metrics.
So how quick was the biggest image to load on a page? Or how quick was it before it became interactive, so they could click around a site rather than just seeing a white screen?
And when you enable real user measurements or monitoring, wherever your traffic and users are, those are the users reporting back.
So you get some real-time information across all of like how your site's performing to those users.
And again, you get different metrics and different scores and areas of how you can improve or what you should improve to make your site better.
So you're measuring at a synthetic level, which is like, you know, you're in a lab and measuring, and you're also surveying the real users and getting feedback there.
So combining both gives you a really strong insight into how your website's performing, both in a lab and in the real world.
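(As an illustration of the kind of browser signal RUM relies on, here is a short sketch using the standard PerformanceObserver API to report Largest Contentful Paint. This is not Cloudflare's RUM script, and the /rum-collect endpoint is a made-up placeholder.)

```ts
// Browser-side sketch: observe Largest Contentful Paint and beacon it to a
// hypothetical collection endpoint, the same shape of signal a RUM script gathers.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the latest (largest) LCP candidate so far
  navigator.sendBeacon(
    '/rum-collect', // hypothetical endpoint, not a real Cloudflare path
    JSON.stringify({ metric: 'LCP', value: lcp.startTime }) // ms from navigation start
  );
}).observe({ type: 'largest-contentful-paint', buffered: true });
```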
Yeah, and I think that's really important, isn't it? This is not just "we ran a test from a data center which has an incredible connection to the Internet and is probably situated on the same block as your hosting".
This is going to be taking RUM data, which we already collect in Web Analytics, augmenting that with the Lighthouse data, and putting it all into a single screen that just says: hey, look, this is how the world perceives your website, and this is how slow or fast it may be.
And we try to do that as easily as possible. But I think the main thing for me when we worked on Observatory was the "so what" factor, right?
Because there are so many tools out there and products out there that kind of give you a result from a test, whether it's a security test or speed test that we're talking about here.
And they just kind of say things are bad, but there's no so what there. So can you talk a little bit more around how we're trying to basically close that loop or how we have closed that loop now from observing the performance to actually improving and putting it all into the same screen effectively in Observatory?
Yeah, and it's simple.
So the big part of Observatory was, yes, give the monitoring, but we have a whole, I don't know, layer of different tools and features that we're constantly releasing.
Like in this innovation week, we keep talking and releasing new features.
And it's: how do we marry a bad result, or a bad aspect of a test, such as, I don't know, "enable next-generation image formats"?
Okay, I need to do that.
Like that's a result you would get from running that test anywhere.
But Cloudflare offers Polish and WebP, and it also offers Image Resizing.
Those two go hand in hand. So we've built a new recommendations engine, and we're constantly going to add to it and update it.
I've already had feature requests going, oh, it'd be great if it could recommend this product for this scenario.
But at that level, it will break down all of your individual results, show you red is bad, amber is a warning, green is good, which is, you know, simple again from the Lighthouse test or run, and then give you a recommendation or features that you can enable to improve that performance.
So you're no longer going away going, oh, I've got this bad thing.
Like I have to go and Google, how do I fix it in my origin or what do I do?
Cloudflare's got the tools and the capabilities.
And we can just say, use this tool, click this button, enable it. And then you run your test again and you'll see the result improve.
And then in the coming quarters, we are going to make it so you can run a test with the change applied in the background, just ad hoc.
So you don't have to turn it on in production, and it will show you the exact saving that you're going to get from running those tests.
So it's really just marrying all of our products and giving you an understanding. I struggle to read all of our blog posts myself.
So coming into the dashboard and seeing, oh, Argo helps with this or use a cache rule to improve caching.
I don't have to know about the blog posts. I don't have to read the dev docs to understand we have cache rules.
Cloudflare's recommending a cache rule to me.
I can click that, go through to the cache rules settings, and then learn more about it, deploy a cache rule and improve it again.
And if you're scheduling tests, it will just run the test tomorrow. You'll get the result as it improved, or you can just run another ad hoc or you get your real user data again.
So it's constantly feeding, giving you information, providing recommendations, you enable, you test again, and it's just a constant cycle.
So we've built that all in one place in one part of the dashboard.
Yeah. And it's a really great way for us to, like you say, to surface these new products.
Just today there's Argo for UDP and there's HTTP/3 prioritization, and there were Smart Hints yesterday.
There are so many of these features we're shipping, which are effectively one button speed improvements, right?
But you still need to know where they are and you still need to kind of go and find them, be interested in finding them.
So us being able to say, hey, you've got an issue with LCP, you've got an issue with whatever, click here and we'll make it better, is the best possible go to market, I think, for any product manager out there.
But I think one of the ideas we've got for what's next for Observatory, the next pieces to work on, is around simulations, right?
In terms of: what if you turned this on? So not so much saying "your LCP was poor, click this product, which is built for LCP improvement or generally gives LCP improvement", but instead giving customers the ability, or even doing it for them, to say: did you know, if you turned on these features, you would see a minimum X percent improvement, or whatever the score may be.
So can you talk through maybe some of your thoughts on what we might work on next for Observatory and what users might expect to see in the coming quarters?
So definitely that part, where we want people to be able to test all of the products, and more products, and understand how they can impact them.
Run it on a subset of traffic, so an A/B test, but doing it in Cloudflare without you needing to configure cookies or interpret the results yourself, and being able to pull that in.
Tagging products into RUM, so being able to say that this set of users, say 10% of traffic, had Polish enabled, and you can then segment your results to see Polish versus non-Polish.
Is it worth turning on?
Is it worth not? So I think that's a definite part of what we're looking at and what we want to do: giving you timings and insights and taking out the legwork, so you understand whether a change is a risk to deploy or is going to work, and can do it safely.
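(To make the A/B idea concrete, here is a rough sketch of how a Cloudflare Worker can put a slice of traffic into an experiment bucket today, using a cookie so results can later be segmented by variant. The cookie name, response header and 10% split are illustrative assumptions, not the planned Observatory feature.)

```ts
// Sketch of cookie-based traffic splitting on a Cloudflare Worker.
export default {
  async fetch(request: Request): Promise<Response> {
    const cookies = request.headers.get('Cookie') ?? '';
    // Reuse an existing assignment if the visitor already has one.
    let variant = cookies.includes('exp=on') ? 'on'
                : cookies.includes('exp=off') ? 'off'
                : '';
    const isNewAssignment = variant === '';
    if (isNewAssignment) {
      variant = Math.random() < 0.1 ? 'on' : 'off'; // ~10% of new visitors join the experiment
    }

    const upstream = await fetch(request);
    const response = new Response(upstream.body, upstream);
    response.headers.set('X-Experiment-Variant', variant); // lets analytics/RUM segment by variant
    if (isNewAssignment) {
      response.headers.append('Set-Cookie', `exp=${variant}; Path=/; Max-Age=86400`);
    }
    return response;
  },
};
```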
And today, another blog post we talked about was Experiments, which is, I think, the next aspect of what we want to do in Observatory, using Cloudflare's power of combining products together.
So yeah, I think experiments as well.
So yeah, before we go into experiments then, so for Observatory, how do people get hold of this?
Presumably it's in the dashboard for everybody.
And one question that kind of comes up with everything we put in these blogs, which I like to cover, is who gets access to this?
And what are the differentiators, basically, between the Free plan, Pro, and above?
Yeah, so it's available to everyone in the dashboard now. It went live, I think, a few hours ago.
So if you go into your dashboard and click on speed on the left column, you'll see Observatory there, click in, and then it will be asking you to input a URL for you to run your first test.
And again, a big difference is we only used to test on the root domain.
Now you can test on blog.domain.com.
So we could have a test running for blog.cloudflare.com, a test for cloudflare.com, and even for specific paths, if you want to segment by path.
So yeah, that's where it is, go in and schedule your first test.
And we wanted it to be enabled for everyone.
So everybody has some flavor of Observatory. So the free plan, you're able to schedule one test, you're able to run a number of synthetic tests.
We only have one location there, but as soon as you jump to the Pro plan, all of the locations open up.
So you can test across the globe, and you get more tests.
And the only real difference is the higher the plan type, the more ad hoc tests and scheduled tests you can run on that.
The RUM part is available to everyone.
So everyone can enable RUM. So if you're on a free plan and you want to understand how your traffic performs globally, you're able to see that on the RUM map and understand the real user measurements for it.
So you don't just have to rely on the ad hoc tests.
And then obviously the product mappings are done according to your plan.
So you're able to see what products would help you.
If an upgrade is needed, for example Polish is only available on the Pro plan and above, it will alert you to say you can use this product, but you may need to upgrade.
So we've really just allowed everyone to test. We wanted to give it to everyone, give back to everyone, and make it easy.
Yeah, nice. So there are two calls to action there: A, go and turn on RUM if you haven't already, it's free.
And there's no reason really not to do that.
If you want to get insight into performance, you should just go and do that today.
And B, Observatory's there, it's in the dashboard, start using it, play around and let Matt know what you think.
Yeah, there is a feedback form at the top for any feedback or feature requests, I think it's the last question, so please fill it out.
I read every one, and we're looking for the next things to build or add to.
Yeah, good point, good point. And I know you mentioned Experiments, and obviously we've got a blog post talking about it.
So I'd be remiss if we didn't cover that here.
So can you just explain to us what experiments is and kind of what the idea was behind it?
So when you look and you search for website performance improvements, there's a whole lot of blogs from really smart people telling you what you can do to improve your website.
It's a mixture of SEO people.
It's a mixture of people that care about web performance.
And a lot of what we saw is that Cloudflare runs Workers, and people give you JavaScript examples of, if you do this, then your fonts load faster, and we have examples like that on our Cloudflare developer docs.
If you run this bit of code, then the Google Fonts will come from your Cloudflare domain.
There's proof that if you just have one domain serving all requests, that is faster.
So you get better performance of your site because it loads quicker.
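(The Google Fonts example described above is roughly this pattern. Below is a minimal sketch in Workers module syntax, which Snippets closely resemble; the /_fonts/ path prefix is a made-up assumption for illustration.)

```ts
// Sketch: serve Google Fonts CSS and font files from your own domain so the
// browser only ever connects to one origin.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // 1. Proxy the stylesheet request so the browser never talks to fonts.googleapis.com.
    if (url.pathname === '/_fonts/css') {
      const upstream = new URL('https://fonts.googleapis.com/css2');
      upstream.search = url.search; // e.g. ?family=Roboto:wght@400;700&display=swap
      const res = await fetch(upstream.toString(), {
        // Google Fonts varies its CSS by User-Agent, so forward it.
        headers: { 'User-Agent': request.headers.get('User-Agent') ?? '' },
      });
      // Point the font file URLs inside the CSS back at this zone.
      const css = (await res.text()).replaceAll('https://fonts.gstatic.com', '/_fonts/files');
      return new Response(css, {
        headers: { 'Content-Type': 'text/css', 'Cache-Control': 'public, max-age=86400' },
      });
    }

    // 2. Proxy the font binaries themselves.
    if (url.pathname.startsWith('/_fonts/files/')) {
      return fetch('https://fonts.gstatic.com' + url.pathname.slice('/_fonts/files'.length));
    }

    // Everything else passes straight through to the origin.
    return fetch(request);
  },
};
```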
That's great if you understand JavaScript and can configure Workers, but we wanted to make everything as simple as possible.
And the other beauty is that I'm also product manager for another part, FL, which I'll be talking about more later this week, but we are currently working on the implementation of Cloudflare Snippets, and Snippets are a small bit of JavaScript.
So if you are familiar with Workers, which is running code in all of our data centers, closer to eyeballs, Snippets is built on top of Workers in the first-party Worker ecosystem.
And it will allow you to run small bits of JavaScript, on a specific path or hostname or matching certain details, using our rules engine functionality.
So we've got that product. We know people are creating JavaScript scripts to improve performance.
Let's just marry them both and allow you to trigger a test to see like, I found this on randomdomain.com.
I want to put it on my site, but I don't want to break production.
Let me just run an Observatory test, a Lighthouse test, with this piece of JavaScript.
Does my site still load? Does it 403 or 401 or break?
And is it faster? And if it's faster, great. Let me deploy that as a Snippet. And Snippets are going to have a fair usage policy behind them, but really everyone's going to be able to use them at no additional cost.
Deploy that and you get performance benefits.
So it's really marrying two products together, allowing people to find useful bits of JavaScript. They don't have to be a developer (i.e. me); they can go to ChatGPT and put in, "please write me a piece of JavaScript to do this".
Take that, copy paste in, run your test.
This is without it, this is with the snippet running.
It's faster. You see the scores improved, hit deploy, job done, go and find your next bit of code.
So again, sort of that constant evolution, giving informed decisions of like trying to make your site faster and just trying to make it easy for everyone and accessible to everyone.
Yeah. And like you say, it's a safer way to do it, right?
Because traditionally you would find some code online, or something just going around on Twitter, that purports to say: hey, this is going to make your website faster.
Hey, this is going to do whatever it may be.
But until you just take that code, put it on your website and hope for the best, generally you don't know what's going to happen there.
It could be slower, could break it, like you say. So I think the selling point here is, A, it's safer, but B, it kind of gamifies performance a little bit, right?
You said at the top that Lighthouse scores are like an exam score.
We want to be better. Let's go from 82 to 84 kind of thing. Being able to kind of find these optimizations, put them in there and see, does it move the needle?
Does it go from 83 to 95 magically?
It suddenly gives you that way to play safely with performance without thinking like, if we get this wrong, we might take our entire business down at peak hours or whenever it may be.
So I think that's the real, really interesting part of it.
In terms of obviously next steps here, what's the takeaway for experiments?
Obviously this is not available to use right now. How can kind of people get involved and help build this?
Yeah, so totally. So on the blog post, there is a sign-up: register there with your email address, etc.
Once we are ready to release this, we'll be in touch and let you into the closed beta to test it.
Yeah, it's going to require Snippets, but that's moving very quickly, and hopefully we'll have an announcement around that soon.
But yeah, this is something we're going to be working on in the coming quarters and for everyone to use.
Nice, perfect. I think that wraps up our session today on Observatory, Experiments, and a bunch of stuff in between.
Thanks for joining, Matt. Thanks for talking about those things, those areas.
And that's all for Cloudflare TV today. I'm going to go and have a drink and sit down for tomorrow.
And I will see you all back for another session on what we're going to launch and what we are blogging about on Wednesday.
So until then, I will see you all later.
Thanks again, Matt. Thanks. Thanks, Sam. Bye.