💻 Developer Week: Product Chat on Workers Unbound GA, Google AMP, and WebSockets
Presented by: Jen Vaccaro, Nancy Gao, Ashcon Partovi, Kristian Freeman
Originally aired on January 4, 2022 @ 1:00 PM - 2:00 PM EST
Join the Cloudflare Product Team to discuss everything new that we announced today for Developer Week!
Transcript (Beta)
Hello, everyone. Thank you for joining us here on Cloudflare TV. This is our third day of Developer Week.
And we're really excited to talk through some of our product announcements today.
We're really focused on expanding the Workers platform today and everything that we're now enabling our users to build on Workers, from extended CPU limits, to being able to build on WebSockets, to improving the Google AMP experience by using Cloudflare Workers.
So we're really excited to talk through those today and explore everything that we've announced.
I'm Jen Vaccaro.
I'm the Product Marketing Manager for Workers and Cloudflare Pages. And I'm joined here today with Nancy Gao, Kristian Freeman and Ashcon Partovi, all part of our Cloudflare Workers and Pages team.
So I'll give them a second to introduce themselves.
And Nancy, why don't you start by introducing yourself? Sounds good.
Thank you so much for the intro, Jen. Hi, everyone. I'm Nancy. I am one of the Product Managers on Cloudflare Workers.
And I primarily work on Workers Unbound.
I'm really excited to be part of this week today. We've been working really hard over the last couple of months to try to get all of our features ready and to tell the story to developers that I think everyone wants to hear.
So yeah, I'm excited to show you a little bit about what we've built so far and also hear a little bit about what Kristian and Ashcon have to share.
Hey, everyone.
I'm Kristian, Developer Advocate for Workers and Pages. And I'll be here talking about our WebSockets announcement, which I am super, super excited about.
And hopefully I'll get to show off some code today as well.
Hey, everyone. I'm Ashcon. I'm our Product Manager for Workers Developer Experience.
And I'll also be talking a little bit about WebSockets today.
Great. Thank you for introducing yourselves.
So we'll get started today talking on Unbound. One thing that I will tell the audience here, if you do have questions, feel free to drop them in live.
There should be a spot to do so right where you're looking under our faces.
So please drop those in.
We'll either answer some of them as we go or we can save them to the end.
But definitely feel free to make this interactive based on your questions.
So first, I want to get started talking with Nancy about Unbound. This has been a long anticipated announcement.
We put this product, Workers Unbound, in beta back in Serverless Week last summer.
And you and I were both new to the team. And now, after a lot of users got onto the beta, we've brought Workers Unbound to GA.
So why don't you give us a little bit of a background first to start? Like, what is Workers Unbound and how does it differ from what we've traditionally offered our customers on Workers?
Yeah, happy to talk about that a little bit. So when I think about Workers Unbound and how it came to be, sort of how it was conceived, I think it's nice to kind of explain the history of the Cloudflare Workers platform overall.
So this predates when I joined the company. But back in 2017, we developed an arm of the company, which is now called Emerging Technologies and Incubation.
I think at the time it was called Product Strategy. There was an effort to build a serverless compute platform that was, at the time, intended to tackle use cases complementary to what I think are the more flagship Cloudflare products, like CDN, WAF, and those types of things.
So, yeah, a couple of engineers spun out and built this new initiative to allow users to run serverless compute, essentially.
And as we've seen it grow over time, I think it's been really cool to watch use cases get more and more complex.
I mean, people really started out doing, as I kind of described, those complementary use cases, like modifying headers or making custom rules for the CDN, things like that.
And now, today in 2021, it's very cool to see that there's a lot of application developers coming onto the platform.
And they have different resource requirements, of course, than our flagship customers do.
So as we've been like digging into their use cases more and trying to understand what it is that they need, increasingly, we were seeing from both external customers and internal users of workers that people just need more resources.
They need more compute time. We also have heard that they need more memory, larger scripts, more scripts, like pretty much resources across the board.
And that's kind of why we decided to release Unbound.
I mean, we had increased limits for different customers on an ad hoc basis.
When they contacted us, we tended to raise those limits. But today we're making that available to the general audience and to the wider public.
And hopefully they will find it as useful as our beta testers and co-workers have.
Great, great.
So Workers Unbound is going to be up to 30 milliseconds, right, for the GA?
More than that, Jen. It's going up to 30 seconds for our GA. To give you a bit of background, our free tier limits are at 10 milliseconds.
The Workers Bundled plan is at 50 milliseconds. We're GAing up to 30 seconds today, and we're also offering a private beta for up to 15 minutes, which is orders of magnitude longer than we were before.
And I think it's going to be exciting to see what kind of long compute people are going to run on it.
I think it's important to explain one quick nuance about the product, is that we're still going to be supporting and maintaining an ongoing bundled offering, which is that 50 millisecond time that is intended, I think, more for enterprise customers who have workers perhaps as an add-on to other products.
And for application type developers who need longer running workers, the Unbound option is available.
And if you are a blended customer, we have several of those. You can actually toggle between the two products on a per-script level.
So you don't need two accounts.
It can all be configured through one account, through your workers paid plan, and you can access Unbound and Bundled all in the same place.
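In Wrangler terms, that per-script toggle can be sketched like this; this is a hypothetical config fragment assuming the `usage_model` field in `wrangler.toml`, and field names may vary by Wrangler version:

```toml
# wrangler.toml — per-script usage model toggle (illustrative sketch)
name = "my-worker"
type = "javascript"
account_id = "<your-account-id>"

# "bundled" (50 ms CPU) or "unbound" (longer CPU limits)
usage_model = "unbound"
```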
Sounds great.
So between the 30 second part that we put into GA today and then the 15 minute that we have now in beta, what do you see as sort of the main use cases that one would use for, let's say, the 30 second versus the 15 minute?
Yeah, I think the 15 minute workers are very interesting use cases.
It tends to be the longer tail.
I'll actually tease a little bit about what Kristian and Ashcon are going to talk about later, which is WebSockets, and that pairs really well with long-running workers.
But from the customers that we've seen, the long running workers tend to be really heavy data operations, heavy computation that could be either mathematical or computational or bioinformatics in nature.
We've seen a couple of those.
And the 30 seconds, I think it really captures more the fat part of the bell curve.
And we see a lot of just general application developers, web developers, people who build everything under the sun from SEO to e-commerce to just general extensions of first customers.
So I would say the 15 minute ones tend to be more specialized and a little bit more niche, and the fat part of the curve falls under the 30 seconds range.
But hopefully, we can serve everyone. That sounds great. And can you explain a little bit on what's happening behind the scenes on the 15 minute portion, like how they're triggered by HTTP response requests, and just share a little bit there on kind of what that part actually means for the 15 minute section?
Yes. The process of designing and writing the engineering spec for long-running workers was very interesting.
And we came up with a kind of a dual approach, like one approach for 30 second workers and a different one for 15 minute ones.
And the 30-second workers, which are going to be powered by HTTP and the fetch API, tend to be a little bit more latency sensitive.
Whereas if you are running a 15 minute job, chances are you're not sitting at the receiving end waiting for the response.
So those are actually going to be powered by cron triggers.
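A minimal sketch of what a Cron-Triggered worker might look like. The handler body here is a placeholder, and the registration follows the Workers `scheduled` event shape; it is guarded so the sketch also parses outside the Workers runtime:

```javascript
// Placeholder long-running job; in practice this is where the heavy
// computation goes, since no client is waiting on an HTTP response.
async function handleScheduled(scheduledTime) {
  const startedAt = new Date(scheduledTime).toISOString();
  return `batch job started at ${startedAt}`;
}

// Workers-runtime registration; guarded so the file can load in
// environments without a global addEventListener.
if (typeof addEventListener === 'function') {
  addEventListener('scheduled', (event) => {
    event.waitUntil(handleScheduled(event.scheduledTime));
  });
}
```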
And for both ends of the spectrum, one of the main engineering hurdles that we had to think about was thread saturation.
If your worker is running close to the cap and is going to be CPU bound, we don't want to risk overloading our cores with too much CPU.
So we've come up with a lot of interesting engineering techniques to try to mitigate the risk of thread saturation.
We're doing dynamic deprioritization to try to mitigate those risks.
So that's part of the reason why it's taken us this many months to get us to GA is trying to make sure that our core is really resilient and can handle the load that we expect to be coming soon.
That sounds great. And I know we have a lot of interesting customer use cases that we launched during, well, they used the beta, but we were able to include them in some of the blog posts.
I'm wondering if you can share a little bit about some of those use cases.
I know we have Invitae and Thomson Reuters.
Could you share a little bit about what each of those customers are doing?
Yeah, for sure. I guess to start with the bioinformatics case, which is a genomics specialist that we have been working with over the last couple of months.
He actually wrote a blog about the personal project that he's building, which tries to pair serverless technology with traditional genomics tools.
Many of them are open source. So the use case that he has is simulating and streaming genomic data that pulls from the 1,000 Genomes Project, which is, I think, the initiative out of the University of Cambridge to map 1,000 people's genomes and make those publicly available for folks to do computation and research on.
And yeah, he compiles wgsim, which is a specialized tool, into WebAssembly using biowasm, and streams it back to the user.
So these computations tend to take a long time.
These are massive data sets. We've come into a couple of different challenges.
One piece of interesting feedback from him is that across the board, genomics analysis tools tend to need higher memory limits.
That's something that's interesting for us to know if we ever want to serve that type of research computation in the future.
But we were able to work together to chunk it out to stay within the memory limits.
And we have a cool demo that you can see, actually, on the blog.
So if you check out the Workers Unbound GA blog, there's a link to his work there.
And let's see. Another interesting use case that we can talk about is Thomson Reuters.
They actually use Workers Unbound to build their Evercache.
And the use case there is super interesting as well. Thomson Reuters, of course, being the large size company that it is, it manages individual websites for thousands and thousands of small businesses, in this case, lawyers across the U.S.
And for those individual sites, they might not have that much individual traffic.
But still, it's a high priority for them to make sure that their SEO is really optimized.
And having cache status is one of the factors that goes into SEO rankings.
So Thomson Reuters built an Evercache. It's a permanent cache for all of their legal sites using the Unbound service, which is quite cool and helps promote those small businesses.
Yeah, I think across the board, it's cool to see these individual use cases.
They are so different from each other. And yeah, I think what it shows us is like a general purpose compute platform.
It could be really anything.
I am surprised every single day when I read another email from a different person in a different corner of the industry.
So yeah, it's been a lot of fun.
That's great. And one of the things that was very striking to me with the Invitae Genomics DNA sequencing post is that he talked about our pricing as being you don't need to have a degree in quantum physics to understand our pricing, which is definitely something we get a lot from customers coming from like AWS Lambda, these other providers that have more complex pricing models and have like these different hidden costs.
So I'm wondering if you can talk a little bit about the pricing perspective.
When we launched in beta for Unbound, we were already highly competitive.
I think it was 75 percent more cost competitive than AWS Lambda and similarly for some of the other serverless providers.
But since then, I know that you talked about in your blog a lot of pricing updates that even make that story all the better for our users.
So can you give us a little overview on what's changed and what's new from a pricing perspective?
For sure. So I know that pricing is a really important factor that developers have to consider and kind of our principle when designing the pricing model for Unbound is like if you have to squint to understand the model or if you rack your brain to compute the price at the end of the month, then that's a no-go.
So we tried to make sure that our costs are A, low, but B, at a glance, you can understand them and understand that they are competitive.
So I can actually share the screen to explain some of the changes. This is just pulled directly from our blog post.
So some of the new things that are noteworthy about the pricing model for Unbound is that we decided to increase the allotted usage that a customer gets.
For their $5 a month, we've decided to add more resources.
So for $5 a month, you get an allotment of Bundled requests as well as Unbound requests, which are those long-running workers.
We've also increased the usage in terms of duration and egress.
So that's included in your $5 charge per month.
And the other great thing that I think we did in terms of making this product more affordable is we decided to cut our egress costs in half, from $0.09 to $0.045 per gigabyte.
And this is really based on all the feedback we heard over many channels from people who had built their business models on top of Workers and were very concerned about egress as a variable cost, so we decided to incorporate it here.
So I'm excited to unveil these changes.
I know that it'll give people peace of mind, especially if they're running those types of intense computation like being able to know that your costs are going to be able to stay in check, I think is really important.
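As a back-of-envelope example of what the halved $0.045-per-gigabyte egress rate means in practice, here is a quick calculation; the 100 GB figure is just an assumed monthly volume for illustration:

```javascript
// Hypothetical monthly egress bill at the new rate.
const ratePerGB = 0.045;       // dollars per gigabyte
const gbTransferred = 100;     // assumed monthly data transfer
const egressCost = ratePerGB * gbTransferred; // ≈ $4.50 for the month
```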
Yes, and for the audience tuning in, I think most know what egress is, but maybe for those who need a little refresher, can you just remind them on what egress is and how that will directly affect their bill?
Yes. So we followed the industry-standard definition of egress, which is outbound data transfer.
So if you look into the docs, we have a diagram that kind of explains how it behaves, but it's data transferred out of the Cloudflare network.
Sounds great. And also, can you speak a little bit to some of the hidden costs that a lot of our competitors may charge for, but that we are not?
Can you tell us a little bit about that? Yeah, two of the main ones that we actually called out in the blog are API Gateway and CloudFront, which are charged per operation and can get kind of expensive.
Another one, I think, that is important to note is that the quoted prices you see on some of our competitor websites, whether it's Lambda or others, are actually quotes based on their cheapest region, which is US East.
And if you need to harness the powers of edge and you need your compute to go somewhere else, then those rates can actually be significantly higher.
And these are, I think, factors that you might not see upon first glance.
But once you actually start using the service and you are putting effort into migrating your services onto one of the platforms, it starts to pop up over time.
And at that point, it can be difficult to switch.
So we try to be as transparent as possible, like really what you see is what you get.
And it's all in that chart right there. So yeah. And that is something that we've heard a lot.
And we talked during Serverless Week, when Unbound went into beta, about how the idea in the market for a long time was that if you were scaling and using serverless, it could become very, very costly.
And so this is really going after that notion and debunking it.
It's like you can build these complex workloads that take a lot of CPU time, but still not break the bank while you're doing it.
And so while we're on this topic, one thing I know we've gotten asked before in the past, it's like, how is it that we're able to charge such cost competitive pricing and still offer so many of these important industry-leading features?
So if you can explain a little bit about that, like how is it that we enable such competitive pricing?
Yeah. Well, I actually think it goes back to company-level principles.
And we actually hear this all the time inside the product team, that the developer hierarchy of needs starts with ease of use, but also includes cost and performance.
So if you neglect those two factors, you will lose your developers to other services and other providers.
So I think from the outset, just by our design and our motivation and trying to serve what people need, we knew that we needed to price things competitively to make them accessible to all, even if that means changing perhaps the margins of the product.
I think that was a trade-off that we were willing and happy to make to be able to reach more people.
Yeah. And also, we're running on V8 isolates, right? So the spin-up time, you don't have to pull in the whole container runtime, or you're not paying for having to keep that container warm and things like that.
So part of our architecture is also a bit nuanced here as well.
Right, right. That's true.
And sort of in that vein, another question we've heard a lot is, how will Unbound now compare with one of our big competitors like Lambda?
So we talked a little bit about pricing, but I'm wondering if you can share any other sort of ideas or specifics here on how we might compare to someone like Lambda.
Well, I think that there are so many advantages that we have when compared to other services.
Actually, I might pass the buck over to Ashcon. Two things that we can touch on are: one, we ran some performance benchmarks and saw that there was a substantial difference there.
So speed is definitely top of mind. And the other one is just developer experience.
Every time I try to onboard onto Lambda just to check out a new feature or see what's happening there, the onboarding experience and just the headache to choose between four or five load balancers and which database do I need, it can be quite challenging to figure out there.
So just the amount of friction that it takes to onboard onto workers versus onto competitors, I think, can't be discounted.
Ashcon, do you have anything you want to chime in there? Yeah, I think another important thing that might be good for people to keep in the back of their minds is that we have a lot of other new future projects that really depended on the framework of Unbound.
We have plenty for this developer week.
Developers should be really excited about the new potential features, triggers, different things that we can bring to the workers platform now that we have these higher execution limits.
That's great. And so, Nancy, I'm also wondering if, as we sort of wrap up on the Unbound piece, if you could share how folks can get started by using Unbound, both with what's available today and then the 15-minute portion.
Yes, I'm happy to do that. Let me go back and share my screen again.
Let's see.
Well, I guess I'll start by talking about the private beta, which you can actually access through the blog post.
So if you see the form at the bottom, that's something that you can get there.
And in terms of turning on Workers Unbound, it really is as easy as going into the dashboard, which I have correctly displayed this time.
And let's see.
We're on my second personal account because both my work account and my primary personal have been so overloaded with testing that we'll use this here.
So it's easy as just making sure that you get onto the workers paid plan, which is, as we talked about before, $5 a month.
I have it enabled already, but let me change the default usage model, which allows me to switch between Bundled and Unbound. So here, let's say I'll change the usage model to Unbound.
And yeah, so it's very accessible in the dashboard.
Once you create a worker, it'll be defaulted to that long-running execution time.
We'll look at one of my test workers that I have here.
And yeah, we have new analytics that are going to be included here that you can see.
And so you can monitor the status of your metrics. And if you want to change the setting of that particular worker back, you can do that here as well.
So bundled to unbound and unbound back to bundled.
Nice. I think I remember when you did that example you were showing.
It was a site for your dog or something. Oh, yeah.
I did make a static site for my dog. That's one good memory there. Very, very fun.
Wonderful. Anything else, Nancy, you want to highlight about Unbound? Or Kristian, Ashcon?
I know you've also been part of the team on this that you want to highlight before we get into the next topic.
I'm excited about it.
There's a lot of people that have been asking when. So I'm excited that we're keeping on keeping on.
We kept on keeping on. That's for sure. I guess one thing that I would add is that we love hearing from unbound beta testers or people that use the product.
So if folks want to join the Discord, we have an unbound channel and it's pretty active.
And honestly, it's really fun to be able to hear about what people are working on.
It's a great place to get your questions answered and get some quick feedback.
So hopefully people use that more. Great, great.
That is a good point. Folks can hop on to our Discord server and our Twitter account at CloudflareDev.
We're always posting a lot of what's new with unbound or with workers in general.
So that's a really good source of information. All right.
Well, thanks, Nancy, for highlighting everything here on unbound. We'll see if some questions come in as well on unbound.
And then I'd love to switch over to WebSockets.
So that's a post that Kristian wrote this morning that we just published.
Ashcon's also been helpful with the docs. So I'll hand it over to them.
Maybe if you both could get started maybe giving us an overview on exactly what WebSockets are, what they unlock for our customers, and just giving us a little walkthrough of today's announcement.
Yes, absolutely. Yeah, so basically, I guess I'll share my screen a little bit later and show the blog post.
So there was an announcement that went out today, as well as documentation and a new template showcasing WebSocket support in workers.
So it turns out it's kind of been in the runtime for a little bit.
Ashcon will speak a little bit more towards the durable object side of it, which is where this work kind of came out of initially.
But today, we're announcing pretty robust support for setting up a WebSocket server in workers.
And then some reference documentation, like I said, a template and things like that to show how you can connect a WebSocket client, whether that's like a browser or, jeez, I don't know.
I'm sure there's all kinds of things I can't even think of, like your games, your video games, I don't know.
Stuff like that, right?
Like for real-time communication from your clients to workers. So yeah, the blog post just links to the documentation.
So there's like reference docs on the WebSocket pair class, which is like how you set up a WebSocket in workers.
There's a template.
And then there's just some sort of higher level, like here's what you need to know about using WebSockets in workers and kind of what that whole flow looks like.
I'll also say that if you are interested, I'm sure we'll cover it a little bit here, but there was a video that aired earlier today on Cloudflare TV that I basically just walked through the template and showed how it worked in detail.
It's called announcing WebSockets for Cloudflare Workers. It's about half an hour or so.
And that is a great kind of companion piece to the blog post. I don't know, Ashcon, did I miss anything that you could think of from kind of the initial thing there?
Yeah, and I'll tie it back to what I had mentioned earlier. WebSockets are a great example of a project that we really needed to wait until we had unbound and also durable objects.
And that's something we'll talk a little bit about soon, kind of the synergy between using both WebSockets and durable objects.
Yeah, maybe we should just get into that. I feel like maybe it's the right time.
Jen, how's that sound? Yeah, that sounds good. And maybe before we get too far into it, can you just give our viewers a little glimpse of exactly what durable objects are?
Yeah, I can give a quick recap. So durable objects are essentially allowing you the ability to address a specific worker.
And so that worker has an ID, it has a name.
And anywhere in the world, there exists only one of that worker.
And so that allows you to coordinate. And so I think a great example of this with WebSockets is a chat room.
So if you want to have a chat room, well, you have to have all the participants come into the same room.
And so prior to durable objects, this wasn't possible in workers.
But now that you have the ability to coordinate to a single worker, you can have all of those chat clients come to a single room, and you have a single worker script that is handling all of those requests in line.
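That coordination pattern can be roughly sketched as follows. The class and method names here are assumptions for illustration, not the actual edge-chat demo code: one object exists per room, every client socket for that room lands on the same instance, and broadcasting is a loop over the local session list.

```javascript
// Hypothetical chat-room object: one instance per room name, so all
// clients for a room share the same session list.
class ChatRoom {
  constructor() {
    this.sessions = []; // one entry per connected client socket
  }

  join(socket) {
    this.sessions.push(socket);
  }

  broadcast(message) {
    // Send to every connected client, dropping sockets that fail.
    this.sessions = this.sessions.filter((socket) => {
      try {
        socket.send(message);
        return true;
      } catch (err) {
        return false;
      }
    });
  }
}
```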
That is so much better of a description than I would have given.
So that's excellent.
I feel like I literally just understood more about durable objects from hearing that explanation.
So thank you. That is awesome. Yeah, and practically, I think the reason that we have been kind of in the blog post and in the documentation stuff, like we really stress the relationship between WebSockets and durable objects is that WebSockets on their own are, it's cool.
Like it's interesting, right?
But practically, like in a real application, it's kind of useless.
Like the template that we have is kind of an example of like, so we open up a WebSocket connection from browser to the worker, and then you basically just like click on this button.
And then on the worker, like we increment this count. This is like, hey, you clicked three times, you clicked four times.
So like that's something, right?
It's okay. When you refresh the page, you lose all of that. And the reason for that is that WebSockets on their own and workers on their own don't have some sort of state mechanism built in.
And that's where durable objects comes in.
That is the stateful piece of this. And like Ashcon said, when you can address a single worker, you can then use, again, Ashcon, if I go off the rails and explain something wrong, please tell me.
But the idea is like you can basically add storage and add consistency and state on top of your WebSockets.
So yeah, so Ashcon mentioned, I don't know, maybe I should share my screen and just kind of show a link to this.
Let's see, which one do I want here? Can you see that okay? Maybe I'll make it a little bit wider too.
Yeah. So there's this example project here, the edge chat demo, which is just edge-chat-demo.cloudflareworkers.com.
So this was kind of our initial durable objects example.
I don't know. Live demos scare me.
So I need to just find a room that I know no one will be in. Cloudflare TV Dev Week 2021.
Gonna be really scary if someone else has been in here. But I don't think they have.
So basically the idea here, kind of what Ashcon was saying, is like we have this idea of a chat room.
This room corresponds to a single durable object that we can kind of interact with back and forth using WebSockets.
And so anyone who joins this room and writes messages will all be kind of in the same WebSocket, so to speak, and kind of exchanging messages and join and leave.
There's Ashcon.
Yeah, I should have prepped you all to join this chat room with me.
Sorry, that was a little impromptu. And so again, this is something that would just not be possible without durable objects.
And with WebSockets, you then have this sort of real-time aspect as well where I'm not constantly making requests back and forth.
I just have this WebSocket connection that stays open and things come up and down the pipe. I guess it's above my pay grade to understand how that all works.
But it works very well. So yeah, let me, while I'm already sharing my screen, I should show off a couple more things.
So this is the blog post that we've been talking about.
It has links to all of the documentation and then the template.
There's some example code in here as well that kind of summarizes how to set up a WebSocket in a worker and the things you need to know. Hey, do you think you could walk through just a little bit of that code?
Oh, sure. Yeah. Let me do it from the template side and maybe that'll be a little bit clearer because I think these are like a little sort of truncated, I guess, is how I would describe them.
It's a very, very short thing. So yeah, I'll hop over to the GitHub template here in a sec if that works.
But I do want to share real quick. I'll just scroll down.
We'll cover all of this in a sec. But yeah, two things, like durable objects, we've already talked about.
Ashcon covered it really well. So if you want to understand the relationship here, you should read this section, Durable Objects and WebSockets.
There's a template here. And then the demo thing, I probably really need to blow this up.
So the idea here is like when you join this page, you connect to a WebSocket, you can send events up the WebSocket to the worker.
And then you'll get data coming back. You can simulate messages that are sort of like unknown error messages.
But then as we kind of mentioned, when I refresh this, everything is blown away, right?
So there's no concept of state. This is kind of why we need durable objects.
So yeah, let's take a look at the code.
I'm going to be constantly kind of blowing up text size because I'm sure everything is going to be too small.
So please feel free to let me know if I can make it larger.
Yeah, so this is, is that good? That's good. Okay. So basically, yeah, our WebSocket template, it's open source.
And I would say the code contributions or suggestions always appreciated.
I'm kind of new to WebSockets myself.
So if someone has feedback on like things that could be handled better, I'm always open to that.
I will not be upset about those contributions at all. The idea here basically is that we have sort of two things that we care about or need to do here.
So the first is setting up a WebSocket in the worker. So the WebSocket or the worker needs to kind of register as a WebSocket server and say like, hey, I'm going to be kind of facilitating this two -way communication.
And then it needs to send the WebSocket, the client WebSocket down that says like, hey, you know, if you're ready to start doing this WebSocket thing as a client, like here's, you know, the information you need to know.
For this template specifically, there's also a template which is just like a really straightforward way of returning HTML which will show how to do the client side of this.
So we'll hop into that in a little bit.
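The client side of that template boils down to something like this sketch; the URL is a placeholder for wherever the worker is deployed, and the handlers are minimal examples:

```javascript
// Open a WebSocket from the browser to the worker and wire up basic
// handlers; returns the socket so the caller can send more messages.
function connect(url) {
  const ws = new WebSocket(url);
  ws.addEventListener('open', () => ws.send('hello from the client'));
  ws.addEventListener('message', (event) => console.log(event.data));
  return ws;
}

// Usage (in the browser):
// connect('wss://<your-worker>.workers.dev/websocket');
```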
I don't know, how detailed should I get? I mean, I don't know how much time, like should I go line by line or should I do like, oh, you tell me, Jen.
I'll let you kind of take the pace.
I don't think you need to go line by line, but we still have like, you know, 20, 25 minutes.
And we are going to talk about the Google AMP story too at the end and we can bring it back to questions and like general chat amongst ourselves.
Let's say like a medium pace. How does that sound? Like a medium pace?
How does that sound? Medium pace. Okay, okay. Cool. Yeah, so like I said, basically two things that need to happen here.
You need to create the WebSocket and then the server needs to kind of accept it.
And then you need to send something down to the client.
So starting down here, we have this, well, I'll just say real quick, we have like this router stuff that is not super interesting.
It's the usual routing stuff. If you work with workers, I'm sure you've written this yourself or you've used a router library of some kind.
So I'm going to kind of skip over that.
You just need to know about this WebSocket handler function here.
So in order for a client to kind of indicate to the worker's function that it's going to start doing WebSocket stuff, basically, we're going to send this request up that has an upgrade header.
And we're going to check and make sure that it's set to the value WebSocket.
If it's not, we just return this kind of error here.
I think this is slightly out of date. Ashcon can correct me if I'm wrong, but there's like a specific status code that we need to return here, right?
So I think I need to go and update the template.
What is that? The general idea is good though. I think...
It's like a 426 or something? Yep. Yeah. Okay, so we'll come and fix that real quick.
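What's being described here, with the 426 status they settle on, comes out to roughly the following sketch. Plain objects stand in for the Workers request and response types, and `checkUpgrade` is an illustrative name, not the template's:

```javascript
// Minimal sketch of the Upgrade-header check, returning the 426 status
// discussed above when the client isn't asking for a WebSocket.
function checkUpgrade(request) {
  const upgrade = request.headers.get("Upgrade");
  if (upgrade !== "websocket") {
    // 426 Upgrade Required: this endpoint only speaks WebSockets.
    return { status: 426, body: "Expected Upgrade: websocket" };
  }
  return null; // fine to proceed with the WebSocket handshake
}
```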
Literally right after this. But the kind of big sort of new idea that you need to wrap your head around is this WebSocket pair.
This is a class that's now in the runtime.
And when you create a new WebSocket pair, it's going to, well, make a pair of WebSockets, right?
So one of those is for the client. And that is what we're going to send down to the client.
You can see we do that here by returning a new response.
We give it the status of 101, which is a special HTTP status, the Switching Protocols status that tells the browser to turn this basically into a WebSocket connection.
And then we also give the WebSocket itself. So we say WebSocket is this client WebSocket.
So we take one of those two WebSockets in the pair and give it to the client.
And then the other one we handle as kind of the server.
I know we're in serverless land, but let's just call it a server. It's okay.
It's, you know, the server WebSocket. And then what we do with that is we have this other function where we basically start doing, you know, our own custom stuff with the WebSockets.
You can think of all of this stuff as sort of initialization.
I don't know if you can see my cursor, but I'm like wildly swinging the cursor around.
So that's kind of initialization. And then in handle session, we actually say like, okay, let's start doing stuff with this.
The first thing we do is we say WebSocket.accept.
So again, this is like our server WebSocket. And we're saying to the worker runtime, we're gonna start doing stuff with WebSockets.
So anytime that sort of WebSocket-looking information comes up the pipe via this WebSocket connection, it should be what we call terminated inside of this worker. The simple version of that is that the worker should handle the WebSocket data itself.
That's what we kind of mean by terminate.
So it doesn't go past the worker to somewhere else. It just kind of sits here and does its thing.
So definitely important to use WebSocket.accept.
And then everything else after this is just kind of traditional WebSocket work.
So we have these message events that come in. This will be when a client sends data up, right?
So, you know, maybe in my button example, I want to click, I'm gonna send up this click message.
And I can actually show you what that looks like. I'm gonna probably need to make this font size big as well.
Can you see that okay? I know it's probably like huge, but let me kind of size everything out here a little bit.
Where's the thing I want to drag? No, okay, I'll make it a little bit smaller. Can you see that okay?
Yeah, anyone? Yeah, okay. So if I click here, you can see that I have this sort of WebSocket.
This is the WebSocket connection. I have this little inspector for looking at these WebSocket requests.
I really want to make this even bigger, but I guess I can't.
That's okay. So basically we just send up this message, click.
And then you can see it has this up arrow. This is like, hey, we're sending this up the WebSocket.
And then what we're getting back down is this JSON payload that has this count and then this timestamp, right?
So what happens here in the code is that when I send a message up, right?
That was that up arrow. You can see that I kind of get it here in this callback function.
So this is on any message event and I can do things like inspect the data that's coming in there.
So if the data is just a string value of click, I can do something like increment a count here and then I can send data back.
And so that's what that down arrow in the inspector, that's this one here.
So I'm sending data back down the WebSocket with either this new count.
So this would be like, if I'm incrementing, though obviously if you're gonna build a real application, it should probably be more interesting than just a number that goes up by one every time.
Or I can do things like I can send different types. So maybe it's an error or whatever here, right?
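That message callback can be sketched as a pure function: take the incoming message and the current count, and return the JSON string to send back down the socket. The field names here mirror what the inspector showed; the template's exact payload shape may differ:

```javascript
// Sketch of the per-message logic: "click" increments the count and gets a
// count payload back, anything else gets an error payload instead.
function onMessage(data, count) {
  if (data === "click") {
    const next = count + 1;
    return { count: next, reply: JSON.stringify({ count: next, time: Date.now() }) };
  }
  // Unrecognized messages come back as an error payload.
  return { count, reply: JSON.stringify({ error: "Unknown message", time: Date.now() }) };
}
```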
This is your custom application. You can do whatever there. There's other events here, the close event.
So if a client closes the WebSocket connection, you can console log that stuff out.
You can do things like clean up when things, maybe in the chat example, for instance, you need to say like, hey, this person left the channel.
Okay, well then maybe I'm gonna go in, grab that data in the durable object and say like, Ashcon is no longer in the chat.
Everyone else needs to be aware of that.
Let's send down a message that says like, Ashcon has left the chat room or something like that.
So yeah, that's kind of the basics of the WebSocket server component.
The idea is, the high-level stuff really that you need to remember, say you come back in three weeks ready to write a WebSocket server and you don't wanna have to come back and find this video.
If I can leave you with like the kind of thing from the worker side you need to remember is first you wanna use this WebSocket pair class, which we do have documentation for.
And I don't know how much time I have, but I'll try and link there as well.
You have two WebSockets that come out of that.
One is a client, which needs to be sent back down to the browser, to the client, whatever your client is.
And then the other is your server.
And that's where you're going to set up these event listeners and just kind of do stuff, act on events that come from your WebSocket clients.
So I don't think that I, I mean, we can look at the template if you want to, which is kind of the HTML client side of it, though it's not super complicated and it's just a lot of HTML, frankly.
I don't know how much time do I have? I should get to the docs as well.
I think maybe we can go and share the docs and like some of the resources that we have.
Cool, yeah, that sounds good. Let's see, I must have linked this here.
Yes, I did. Okay, that's good. Yeah, so there's new documentation as well, right?
So using WebSockets is our sort of high-level learning page and that's just here in the learning section.
It kind of covers what I just talked about, basically, both from the server and from the client perspective.
So how do you set up a WebSocket server in Workers?
What do you need to know about all of that request header stuff that we talked about?
What do you need to know about setting up WebSocket pairs, et cetera, et cetera?
So all of that stuff. And then like, how do you return a response that tells the client you're going to start doing stuff with WebSockets?
That's all covered here. And then there's also more details on writing a WebSocket client, which is the part that we didn't cover in the template.
The spoiler alert is it's very, very similar.
The API is the same, basically, which is great.
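A rough sketch of that client side, under the assumption that the payloads look like the demo's. The parsing step is split out as a pure function; `connect` and the handler names are illustrative:

```javascript
// Pure helper: classify a server payload as a count update or an error.
function parseServerMessage(raw) {
  const data = JSON.parse(raw);
  return "error" in data
    ? { kind: "error", error: data.error }
    : { kind: "count", count: data.count };
}

// Thin connection wiring, using the same standard WebSocket API as the browser.
function connect(url, handlers) {
  const ws = new WebSocket(url);
  ws.addEventListener("message", (event) => {
    const msg = parseServerMessage(event.data);
    if (msg.kind === "count") handlers.onCount(msg.count);
    else handlers.onError(msg.error);
  });
  ws.addEventListener("open", () => ws.send("click")); // e.g. the demo's click message
  return ws;
}
```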
So you can take a look at that. There's a link to the template as well.
And then there's also kind of some breadcrumbs here where it comes to like the durable objects aspect of it.
So how do we think of durable objects and WebSockets together as a sort of full solution for doing stateful coordination stuff?
You know, the like previously impossible, made possible inside of Workers, I guess you could say.
So that's our learning page. The other one is the WebSockets reference, or I guess runtime APIs page.
I kind of think of it as a reference section.
And that is like the specifics about, you know, the WebSocket pair class here.
How do we instantiate it? What are the kind of parameters for things like add event listener or accept?
There's all kinds of stuff here. And that'll help you kind of understand once you get past the basics of like, how do I even set this thing up?
A more practical or not practical, I guess like technical, how do I understand all the different things I can do with these WebSockets inside of this pair?
Yeah, I'm trying to think if there's anything else. You know, there's a couple updates to our response class.
Specifically, you can pass in a WebSocket here.
That's worth noting. Fun fact, I spent an hour and a half on the template not knowing what I was doing wrong and then realized that I had put lowercase s WebSocket instead of capital S WebSocket, and I had to walk away from the computer for like another hour and a half because I was just like, oh, programming.
So that was unfortunate. But it is capital S WebSocket just so you don't have the same issue that I did.
Yeah, and I will say like the blog post has links to all of these as well.
So yeah, is there anything else I should show while I'm screen sharing?
I think that's great what you shared. Ashcon, do you want to add in anything there?
I know you also worked a lot on the docs and whatnot. Kristian, you did a great job going over the WebSocket stuff.
I'll just say that you might notice that not all of the standard WebSocket methods are in there.
There's one or two that are missing.
So we'll work on eventually making sure that it's very similar to the WebSocket spec that the browsers implement.
Sounds great. And one thing I'll just add, so we've been talking a lot about durable objects.
Those are now in open beta.
So just keep that in mind. We're hoping to go into GA soon enough, but those are still in open beta, but you're able to sign up and get started with those today.
And one thing I wanted to talk about quickly here while we are still on the topic of WebSockets is Kristian, in your blog, you had a section on pricing and how customers should think about that.
Maybe you can tell us a little bit about that in particular.
So I will do my best, but I feel as though I may make it more complicated if I go into a really deep dive into it.
I'll say that so basically the short version is like WebSockets are a long running connection, right?
So you are opening a connection up with a worker that you're going to keep open for a while and send data up and down, right?
That is kind of a different model than like short-lived polling sort of based requests.
And so basically, I think that either Nancy or Ashcon can probably talk about the pricing parts of it better than I could, but I'll give you the developer-brain version of this, which is: if you're really interested in WebSockets and you want the pricing to remain in your favor, you really should be using WebSockets with Durable Objects, because of the way that we handle Durable Objects. I don't even know that there's a simplified way to describe it.
I mean, literally there's like 10 paragraphs about pricing stuff in here in this blog post.
I'll just say, you're gonna get a lot better pricing if you're using Durable Objects and WebSockets versus just opening up a WebSocket that you know is going to be alive forever, because frankly that costs us money too.
So I don't know. I'm not doing a good job of explaining it. Nancy or Ashcon, anything you want to add there?
No, I think you're doing great. Just one thing I would add is that when you pair WebSockets with Workers Unbound, we thought a lot about the interaction of those products there and wanted to make sure the costs were under control.
So one nuance of our billing model is that we will only bill you for duration up to our execution limit.
So for HTTP workers, that's 30 seconds worth of duration costs.
And for cron triggers, when we GA that, it'll be up to 15 minutes.
So that's one way that we tried to make sure that we're mitigating people's potential bills if they open a WebSocket and then leave that connection open for a long time.
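Illustrative arithmetic only, a sketch of the cap Nancy describes rather than Cloudflare's actual billing code:

```javascript
// Billed duration is capped at the execution limit: 30 seconds for HTTP
// workers, 15 minutes for cron triggers (once that GAs). Names are made up.
function billedDurationMs(actualMs, trigger) {
  const capMs = { http: 30 * 1000, cron: 15 * 60 * 1000 }[trigger];
  return Math.min(actualMs, capMs);
}
```

So a WebSocket connection left open for an hour would still only accrue 30 seconds of billed duration on an HTTP worker, under this model.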
One thing I think also that I believe is in progress, but I do think is really interesting that's worth mentioning is there's some stuff on the platform right now where we're thinking about proxying WebSocket connections.
So this would be like, say you have some other tool you're using that communicates via WebSockets, but you want to pass that through workers because there's a lot of cool things you could do, I think, around basically having a worker function sit in the middle of an active WebSocket connection and do any sort of like, I mean, an example I can think of is like authentication or like rate limiting or stuff like that.
I don't know, but honestly, I'm pretty new to WebSockets.
So I'm kind of wrapping my head around what that looks like. But like we mentioned in the blog posts, so basically in the future, like you shouldn't get charged for the duration time of running that or for proxying that WebSocket through.
That'll be really cool. I don't know what the ETA is for that; that's kind of, again, above my pay grade, I would say, when it comes to network things. But that should be really cool from a developer perspective.
I think there's a lot of really interesting use cases that can be unlocked there.
Yeah, so I'm excited about that as well.
I'll just add one more thing on pricing as we're generally kind of wrapping up what we talked about.
As Nancy said at the beginning, we want to keep the pricing model really simple for developers.
And one of the things with WebSockets we found was there's a good handful of edge cases that we hadn't thought of fully. Like, as Kristian mentioned, if you have a WebSocket that's just idling and not doing any computational work, what do we charge for that?
So we have a little bit of work to do there, but we are certainly listening to feedback, particularly that we're getting in Discord.
And if you have any suggestions or ideas about what you would want to see in WebSocket pricing, definitely reach out to any of the product managers or the Workers team, and we'd be happy to have a chat about it.
Great, great.
Well, I'm excited to see what our users start building with WebSockets and with durable objects and with Unbound.
So those are gonna be some game changers that we're enabling for our customers, but really for developers in general, in terms of what they can build, and what they can build at a reasonable cost without breaking the bank.
So I'm really excited to see how this develops. I do wanna spend the last few minutes, Nancy, you and I were gonna maybe share a little bit about the third announcement that we made today, which was highlighting using Workers plus Google AMP Optimizer for publishing webpages.
So maybe we can get started on that to wrap things up.
Sounds good, Jen. Shall we flip the format? I'll be the interviewer.
Can you tell me a little bit about what AMP is and some background on the product?
Yeah, yeah. So Google AMP is something that has come out of the Google team and it's an open source web components framework mainly used for publishers.
So this is targeting publishers to enable a good experience for their users.
And so basically like if an AMP page is surfaced on Google search or Bing, it will be served from an AMP cache, which will go ahead and optimize performance and have some other optimizations that we'll talk about here for the user.
So it's a collaboration at this point between the Google AMP team and the Workers team, where we were able to help solve some challenges that Google was seeing, to improve the user experience and have even more performant and customized pages.
Yeah, for sure. I think it's really important to add that adopting AMP can actually help publishers achieve higher search rank status.
So it's like there's a careful and important ecosystem that is at play here.
I'm wondering if you can tell me about how AMP pages can still fail at Core Web Vitals in some cases.
Yeah, so this was one of the reasons that the AMP team reached out to us: even when you're using AMP pages to enhance your performance, enhance these different elements of your site, there are still areas where these Core Web Vitals can underperform.
And the main reason for that is that a lot of these Core Web Vitals optimizations can't be implemented client side.
So things like image optimization, fast server response times, effective font loading, those are all essential for user experience, but those need to be done on the server.
So you have to be able to optimize your AMP pages on the origin as well.
And so that's where the AMP Optimizer comes in, and that's where Workers also comes in: together we're able to make it super easy to serve optimized AMP pages on your own origin and eliminate some of those challenges that the Google team was still facing as far as Core Web Vitals.
Cool. Can you tell me a bit more about how workers and AMP works together like under the hood?
Yeah. So basically whenever a request comes in for an HTML file, there's several steps that take place.
So first the worker will check in the global cache if there's an optimized version that has already been generated.
And then if that's not the case, the worker will request the version of the file from your origin and then optimize the document if it is using AMP.
And then it'll return that to the user.
And then only after the response has been fully streamed to the user will we start saving the generated version in the cache.
And so together that's using workers and Google AMP there.
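The flow Nancy just walked through can be sketched like this, with the cache, origin fetch, and optimizer passed in as stand-ins, since the real worker uses the Cache API and the AMP Optimizer library. All names here are illustrative:

```javascript
// Sketch of the AMP-optimizing worker's request flow.
async function handleRequest(request, { cache, fetchOrigin, optimizeAmp }) {
  // 1. Check the global cache for an already-optimized version.
  const cached = await cache.get(request.url);
  if (cached) return cached;

  // 2. Otherwise fetch the document from the origin.
  const original = await fetchOrigin(request);

  // 3. Optimize it only if it is actually an AMP page.
  const response = original.isAmp ? optimizeAmp(original) : original;

  // 4. Save the generated version for next time. (In the real worker this
  //    happens after the response has fully streamed to the user.)
  await cache.put(request.url, response);
  return response;
}
```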
It's just kicked off today.
So we'll see how our users start using it. But one of the things, if folks have the chance to read the blog, is there's a section toward the bottom where you can see a live demo example of pages that are using Google AMP versus pages that are using the Google AMP Optimizer after we've implemented Workers.
And you can see the page renders a couple of seconds faster, with all of the images and all of the page files.
Awesome.
And can you walk us through some of the example use cases that this partnership might enable?
Yeah. So the biggest ones, like I said — this is usually used by publishers.
And so there's a lot of ways that this will help publishers out.
And so basically together with AMP and workers, they'll automatically preload a lot of the tags or external resources like fonts or hero images.
And this is going to help on several different layers. So one of them is increasing the responsiveness of the images out of the box.
And that's also by using the Cloudflare image optimization functionality.
And then, looping us back to Unbound, there will be a lot of requests that can be handled and optimized very quickly.
But the optimization of larger files that take more than the 50 milliseconds of CPU time is where Workers Unbound is going to come in and play a really helpful role in delivering those heavier files.
And then the last thing is going to be these performance gains.
And like I said, you can go into the blog and you can see the speed that is now enabled with your AMP pages.
And then the other thing that I will say is that the Google AMP team was having some challenges with delivering customized URLs.
So if you were using like an AMP page, you would have this like long URL and it wouldn't necessarily have like the name of the company.
Let's say you wanted like Cloudflare.com or whatnot to be in the URL.
Instead, it would have this long, Google AMP-generated URL. And by using Workers, they were able to create a system where you basically cryptographically sign the request, so you keep the security of that link but it isn't this random URL.
And so by using workers, we're also able to improve the customer and user experience in that way and keep your customized URL and not have like this random one.
Cool. And is there a way that if any publishers adopt the workers AMP solution, they can reach out to us?
Like do we have a Discord or maybe we should set one up actually?
You know, I'm not sure if we have a Discord particularly on this, but Kristian, you would be able to help guide us if we need to set one up there.
Discord.gg/cloudflaredev is the invite URL. You're talking about making a channel for this stuff?
Yeah, yeah. We're kind of in channel overwhelm right now.
If I want to take you behind the scenes in Discord moderator land, I feel like we are maybe it's just me.
I'm overwhelmed by the amount of channels that we have, but I think that we would gladly make one for people if they needed a place to kind of get help here.
But yeah, discord.gg/cloudflaredev, plugging the invite URL one more time.
Please join us. Nice. Great. So we have just a few more minutes here before we will wrap up.
Anything here that Nancy, Kristian, Ashcon, you want to share or anything you're really excited about?
Just kind of looking into the future of what's going to be possible to build with workers.
One thing that I did just think of as I was annoyingly plugging that invite URL a couple times that I think is probably worth mentioning here because I do feel like some of the announcements today have been like kind of complex and maybe people need time to sort of formulate questions and stuff like that.
Right after this session, it's every day this week, and I guess I should invite all of you to join us here.
That is, like, Jen, Nancy, and Ashcon, if you would like to join.
We do a kind of happy hour thing.
We have been every day this week. It's kind of clubhouse-y style in our Discord.
So it's like an audio stage where we can invite people on stage to ask questions and stuff like that.
So if people have more questions, I think that's a really great place to do that.
That's in half an hour on our Discord. So if you all are free, I would love to have you stop by.
But yeah, if people have questions, Albert and Luke and I from the workers team at least we'll be there to answer questions and stuff like that.
So sorry to take up valuable time. So that's it. That's my other plug.
The cool thing there as well is hopefully we'll be having some of our speakers from earlier this week from external guests.
We had a whole developer speaker series and a lot of them have been invited too today.
So if people have questions for them that they've been harboring or anything like that, this will be a good forum to ask.
Cool. All right. So we have just a few more seconds. So I'll wrap us up.
Thank you, Nancy, Kristian, Ashcon for all of your work during this week.
I'm really excited to see what our developers do with workers with all these announcements and stay tuned for the rest of the week.
We have some really exciting things coming up.
All right. Bye everyone.