💻 What Launched Today 2022
Presented by: Dawn Parzych, James Snell, Rob Sutter
Subscribe to Developer Week
Originally aired on May 22, 2023 @ 12:30 PM - 1:00 PM EDT
Join Cloudflare Director of Product Marketing Dawn Parzych, Principal Systems Engineer James Snell, and Engineering Manager Rob Sutter to learn more about what launched today for Developer Week.
Read the blog post:
- Welcome to the Supercloud (and Developer Week 2022)
- The road to a more standards-compliant Workers API
- Build applications of any size on Cloudflare with the Queues open beta
- Cloudflare Workers scale too well and broke our infrastructure, so we are rebuilding it on Workers
Visit the Developer Week Hub for every announcement and CFTV episode — check back all week for more!
Developer Week
Transcript (Beta)
Hello and welcome to what launched today in Developer Week. I am Dawn Parzych.
I am the director of Product Marketing for the Developer Platform and I'm here with two of our blog authors today to share with you what launched.
So, Rob, can you take a moment and introduce yourself, please?
Sure.
Thanks, Dawn. Hi, everybody.
I'm Rob Sutter. I'm the engineering manager for Cloudflare Queues, which launched its open beta today.
Great.
And James, how about you?
Yeah. I'm James Snell.
I'm on the Workers runtime team, and I'll talk about all of the web platform standard APIs that we've been working on.
It's very exciting.
So, Rob, I'm going to start with you.
We announced Queues as a private beta a whopping six weeks ago, and it seems like it's been pretty busy, because today you announced that we've gone from that private beta into an open beta.
Tell us a little bit about that journey.
Sure.
Well, it sort of speaks to what Queues are. Queues are a fundamental primitive for building in the cloud, and that cuts both ways.
It means that you really have to get it right when you launch them, but it also means that they're pretty well understood.
And the core feature set that you need to really build applications on them is relatively small.
So when we launched the closed beta, we were looking for user feedback, mainly around ways to use Queues that we couldn't think of ourselves internally.
You know, you think you know how something is going to be used, but you never do until you get it in the hands of builders around the world and they really start applying it to their own problems.
So we had some issues come up that our private beta participants were really fundamental in both discovering and helping us solve.
But really, it was a smooth overall process that gave us a lot of confidence to go ahead and open queues up to the world today.
Great.
I'm going to take a moment here and say, I know it may seem weird that we're bundling Queues and then like the WinterCG work, but I see a common thread between these two, especially when we're talking about feedback.
So, James, you've been very involved with WinterCG.
Can you share what that is and what was announced today?
Right, right.
So WinterCG is a community group that was launched last spring under the W3C community group process.
And the whole purpose of it is to bring together implementers from all kinds of non-web-browser JavaScript runtimes.
So environments like Workers or Node or Deno or Bun: bring these implementers together to focus on web platform APIs and ensuring that we're all implementing them in a consistent way.
So what we've been working on since its launch is really identifying what that common set of web platform APIs is and making sure that the various platforms are actually conforming to those standards.
And what we announced today is that Workers as a platform has implemented all of those standard APIs now, and we've really advanced the ball forward on being compliant with those specs.
So the thread that I see in common here is that we very much appreciate and encourage feedback.
We develop publicly in ways I don't see other companies doing, whether that's with our private and open betas or the participation that we do in open standards groups.
We're really transparent about that. So when we sit there and say in our blogs, we want to hear from you, we truly mean that.
We do want to hear what people are saying, what they're building, what they need so that we can make the products better, either from open sourcing things, adopting standards, or what we build into our own products.
Absolutely, that's the case as far as Workers is concerned.
For instance, just recently we did launch workerd, which is the open source version of the runtime.
And all of our implementation of these Web platform APIs is part of that.
It's all open source.
So our streams implementation, AbortSignal, AbortController, all these APIs are out in the open, and that's where we're going to continue to develop our support, as part of that open source.
Rob, going back to Queues, so we've taken all this feedback, we found some additional use cases, saw some blockers that people might have.
How do you see Queues helping developers and what people can build on Cloudflare?
So there's usually three main reasons why people adopt a queue, and one of the most relevant here is as a connector between services.
So we introduced service bindings, where you can make worker-to-worker calls, but now with a queue in place you've got the ability to buffer those calls, you've got the ability to drop them and pick them back up later, and the timings can be decoupled.
It also simplifies your deployment where you don't have to deploy what we call the producer worker, which is the one that puts messages onto the queue along with the consumer worker.
Those can happen in different pipelines.
They can be tested differently.
They can be developed and written by different teams.
But the queue is there as the loose coupling mechanism to hold them together.
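As a rough sketch of that producer/consumer decoupling, the code below separates the two sides; the `Queue` and `MessageBatch` interfaces are simplified stand-ins for the Workers runtime types, and the message shape is invented for illustration.

```typescript
// Simplified stand-ins for the Workers runtime types so the sketch is
// self-contained; in a real project these come from the Workers type
// definitions and your queue bindings.
interface Queue<T> { send(body: T): Promise<void>; }
interface Message<T> { body: T; }
interface MessageBatch<T> { messages: Message<T>[]; }

// Illustrative message shape.
interface LogEvent { url: string; }

// Producer side: enqueue and return immediately. The consumer's timing,
// deployment pipeline, and owning team are fully decoupled from this path.
async function produce(queue: Queue<LogEvent>, url: string): Promise<string> {
  await queue.send({ url });
  return "enqueued";
}

// Consumer side: in a real Worker this logic would live in the exported
// queue() handler, which receives messages in batches.
async function consume(batch: MessageBatch<LogEvent>): Promise<string[]> {
  return batch.messages.map((m) => `processed ${m.body.url}`);
}
```

Because the two sides only share the message shape, each can be tested with an in-memory queue and deployed on its own schedule.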
When Cloudflare Workers was first launched, the use cases were intentionally smaller.
We launched to provide a better experience on a small set of use cases.
Then we grow from there. This is really one of those step functions or sea changes in the amount of things you can build with Workers: because on both sides of your queue is a Worker, it really allows you to build entire line-of-business, back-room applications on Workers in a way that might have been a little more challenging before.
And I'm going to put you on the spot here and ask you a question that I saw on Twitter this morning.
I love being put on the spot.
Hit me. So we talk about building Workers with Queues, but somebody responded to that and asked: Queues sounds great, but can you use Queues with other cloud providers, or is Queues something that can only be used if you're building within the Cloudflare ecosystem?
Right.
So there are two answers to this. One: today, to put a message on a queue or to get a message off of a queue, you have to use a Cloudflare Worker.
In both cases, that is part of the open beta.
It allows us to discover from customers, developers, builders: what other ways would you like to get messages into and out of your queues?
So that's something that we're looking at in building already.
But even with the worker-on-either-side use case, it's a really good starting point for either migrating from a region-based cloud provider or migrating from an on-premises solution in your own data center, because Queues are very low risk.
If you have an application that's built with an alternative queue today, you can put messages into both queues at the same time, measure what's happening, take messages out of both queues and compare it.
It's durable data.
And when you put a message into a queue, because it's backed by a Durable Object, it is persisted to disk in multiple locations.
We're aiming for 100% durability guarantees, but it's ephemeral data.
This is not data that you're going to keep for the lifetime of your application.
You're keeping it until you have confirmed or acknowledged that you've processed it and that you're done with it.
And so if you come onto Queues today with the open beta, keep your existing solution, do both at the same time, and say, okay, it almost meets my needs, but not quite yet, please, please give us that feedback on what the "not quite yet" is, and you can just turn off Queues until it's ready.
So it's a very low-risk way to dip one toe into the cloud waters and begin moving your applications into the cloud, just to use a service like this.
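The dual-write comparison Rob describes could be sketched like this; `AnyQueue` is a made-up abstraction over any queue client with a send method, not an API from either system.

```typescript
// Any queue with a send() method: your existing broker or Cloudflare Queues.
interface AnyQueue<T> { send(body: T): Promise<void>; }

// During the evaluation period, publish every message to both queues, then
// compare what each consumer saw before deciding whether to cut over.
async function dualWrite<T>(
  existing: AnyQueue<T>,
  candidate: AnyQueue<T>,
  body: T,
): Promise<void> {
  await Promise.all([existing.send(body), candidate.send(body)]);
}
```

If the candidate falls short, you simply stop calling it; the existing path never changed, which is what makes the experiment low risk.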
The final point I'd make on that, queues are typically used for long running tasks or for batching up tasks.
So latency isn't really a key consideration, meaning that it's okay if you're making a round trip from your data center to your nearest Cloudflare PoP and back, because ultimately you're batching these things up into two-minute, ten-minute, hour-long batches; what's a couple of hundred milliseconds on top of that?
So again, it's just a very low-risk way to get started. If you've been considering it, it's a great place to jump in.
And for people wondering how to get in touch with us to share this information:
@CloudflareDev is our Twitter handle strictly for developer tools.
We also have @Cloudflare, which is our corporate handle that talks about all of our products, not just the developer-focused ones.
And we also have the developer Discord.
So you can join that and share information with us that way.
So, open beta. We throw a lot of terms around: alphas, betas, private, open.
What does an open beta mean?
Can people use this in production?
Is this more for that testing piece that you were referring to?
How do you see people in an open beta using this?
So let me talk about the closed beta first and what that means.
The closed beta is, hey, we made this thing, we think it's ready, and then we put it in developers' hands and they say, did you consider calling a non-Cloudflare service from your consumer?
And we say, Whoops, and we fix it.
So closed beta is really the aspect of, okay, we think this thing is ready, test it out for us.
The open beta is this thing is ready for a set of use cases. You test it out and see if it's ready for your use case or not.
And if not, let us know how to build it.
Right.
So a lot of the difference between the two, a lot of what this engineering team has been working on for the past few weeks, is around monitoring, getting more information out, operationalizing, and ensuring stability.
Again, if you read the blog post: because we're built on Durable Objects, we got a lot of stuff for free, like durability of data, strongly consistent transactions, and other things like that that we didn't have to build ourselves.
That's why you see such a quick path to the open beta. In terms of appetite for using something in production, there are two answers to this: one, everyone is going to have their own assessment process for determining if something is ready for them to use.
So for some users this is ready for their production data; for some it isn't.
We have a lot of Cloudflare internal service teams that are talking to us about putting their applications on Queues for production before it goes to general availability.
So it's really a question of does it address your specific needs?
Great.
So I'm going to switch over to James for a little bit. Rob, a lot of what you were talking about, in my mind, sits under this umbrella of developer experience.
We're doing things to make things easier for developers, to eliminate friction, eliminate challenges, headaches, all of the things that people don't want to worry about, like we're going to take care of that.
So I love a lot of these things under developer experience.
James, to me the work being done by the WinterCG group is also under that umbrella of developer experience, because we're looking at making things standardized. Not that we want people moving the software they've built off of Cloudflare, but if they want to move off of Cloudflare, or onto it, we want to make that easier. How do you see that playing out in the realm of the WinterCG work?
No, it's critical.
Building around those open standard APIs is something that the browsers have gotten right for quite a long time.
And on the non-browser runtimes, when Node was pretty much the only option you had, relying on those Node-specific APIs, nonstandard things, very proprietary and specific to that platform, that was okay.
But now that we have Workers and Deno and all these other environments, it just doesn't make any sense to rely on individual platform-specific APIs.
So for us, when rolling out new APIs, if there is a standard, we need to be using the standard first rather than trying to introduce something platform-specific, for exactly the reason you're saying: if somebody wants to move, they should be able to without having to go through a lot of trouble.
Now, at the same time, there is a huge part of the ecosystem out on npm that is written to those platform-specific APIs, and we're going to be looking at how to support those better.
But as much as possible, if we can encourage developers to say, hey, use a ReadableStream instead of a Node stream, that's going to benefit everybody.
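For instance, a source written against the web-standard ReadableStream (available in Workers, Deno, Bun, and recent Node versions) rather than Node's stream.Readable might look like this minimal sketch; the helper names are invented for illustration.

```typescript
// Produce a stream of numbers using the web-standard ReadableStream,
// which works the same way across Workers, Deno, Bun, and Node 18+.
function numberStream(limit: number): ReadableStream<number> {
  let n = 0;
  return new ReadableStream<number>({
    pull(controller) {
      if (n < limit) controller.enqueue(n++);
      else controller.close();
    },
  });
}

// Drain a stream into an array via the standard reader interface.
async function collect<T>(stream: ReadableStream<T>): Promise<T[]> {
  const out: T[] = [];
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    out.push(value!);
  }
}
```

Code written this way runs unchanged on any runtime that implements the Streams standard, which is exactly the portability argument being made here.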
Yeah.
I think with WinterCG, something that might have caught people a little bit off guard when we announced it is we're basically working with competitive products and competitive companies on doing this.
Like we're working together to make the Internet better, to make sure that things are easier for developers.
I'm going to put you on the spot just like I did Rob, with something I found on Twitter this morning. I found it interesting when I was searching for, hey, what are people saying about WinterCG right now?
Somebody posted a very interesting comment thread that the largest disagreement that's existed so far amongst the community members is on the use of the word edge.
Yeah.
This is the fun conversation that's happening right now.
So in a number of developer tools, we use these simple identifiers to specify which runtime a particular module was designed to run on, whether it's Node or whatever.
That's just to make it easier when folks are building these developer tools: they can look at a module and get an idea of where it should be expected to run, that kind of thing.
Well, these simple identifiers are in some cases very specific.
Like we have one that says node, which represents Node.js. There's another for Deno.
It's very specific what each one represents.
Well, one of the identifiers being proposed is edge runtime.
It's a very generic term, but it's the term that Vercel uses for their runtime environment: it's called the Vercel Edge Runtime.
So is it too generic?
Is it going to cause confusion?
If you look at it just in terms of what Vercel is asking for and what they're building, it identifies what they've done.
Exactly.
But it's not a trademarked term, so there's a disagreement about it being too generic.
That's really the biggest disagreement we've had thus far, because so far everyone agrees:
yes, building these standards is important, conforming to these standards is important.
Nobody disagrees on that.
It's always going to come down to the little details, and naming is always a difficult problem.
So naming is hard, without a doubt.
One of the hardest things.
Yep.
So there were two pieces to the announcement you mentioned. One was this is what the Standards Committee is doing.
But the second piece is looking more internally at Cloudflare and how we've made everything either compliant or nearly compliant.
Am I getting that correct?
Getting there.
Yeah. So how much work was involved and how long until we get fully compliant on these?
It's been a lot of work.
So, I've been with the team just over a year now.
And when I came on, we started taking a look at it. And yes, we had implementations of all of these standards, or quite a few of them, but none of them were even close to being compliant with the specs.
They kind of acted like the standards a little bit, but there were a bunch of variations.
So I've spent pretty much the better part of the past year updating these, introducing a few new APIs like AbortController, AbortSignal, and URLPattern to kind of fill in the gaps, and then going through and basically auditing the existing implementations of things like streams and getting them up to spec.
And it's just a lot of work.
The streams spec in particular is one of the most complicated standard specs out there that we could implement, and we started going through and making sure that it actually is compliant.
At the same time, we've actually made some decisions to intentionally not be compliant in a few different areas.
So we can't say that we're 100% there, because we've intentionally deviated in a few places for things like performance optimization or controlling memory, things that are specific to the Workers environment.
So I can't say that we'll ever be 100% there. But we're taking it almost on a case-by-case basis: looking at the individual APIs, making it a very intentional decision when we're not going to be compliant, and being able to communicate why, instead of it just being, oh, that's a bug, we didn't know we weren't compliant.
So when we say the Workers runtime is compliant, does that apply to both our production Workers as well as workerd, the open source runtime?
Yep.
All of the work is in workerd; our production Workers runtime builds on workerd. So anytime I'm doing any work in streams or any of these APIs, workerd is where that work happens first, and then it gets pulled into the production runtime.
Okay.
The next question for both of you, I'll start with you, James, but Rob, you can get ready for this because it's coming to you as well.
What's next?
So what's next for WinterCG and these open standards?
So one of the most important web platform APIs out there is Fetch.
It's really simple: it's how you do an HTTP request to a back end, and all the runtimes have implementations of it right now.
But the way that the spec is written currently, it really assumes that you're running in a browser environment.
There's a lot of stuff in the standard that is specific to web browsers and will never be relevant to environments like Workers or Node or Deno that run on the server side, these non-browser environments.
So what WinterCG is working on now is starting to create a subset of the Fetch specification that will be specific to these other environments, and it will basically lay out the things that aren't relevant, that these platforms do not have to worry about implementing.
And once we have that identified, then with Workers, our next task is to go in and make sure that our implementation conforms to that more limited Fetch profile and that we're compliant with it.
So that's one of the key things that we have coming up.
There are likely to be a few other bits and pieces internally that we're going to work on for compliance. There's a set of tests called the Web Platform Tests, the standardized set of tests to verify that you're compliant; we're going to be integrating those more, that kind of thing.
Okay, great.
Rob, same question. What's next for Queues?
I'm going to give a non answer and then I'm going to give an answer.
The non answer is you tell us.
There's a lot of directions that we can take Queues in particular, and a lot of them, to use a developer's favorite word, are orthogonal.
We have to make different trade offs for different ones and so we really want to help the most developers ship faster.
That's how we're prioritizing these decisions.
Now there are some things in flight already.
Partial batch acknowledgment is one thing that's already been asked about on the Discord.
Make sure you're in the Discord. That's when you have five messages: three of them succeed, two of them fail.
The current model and this is common with queuing systems, is to retry the entire batch of five.
That's not particularly helpful because three of them were good.
You could pass those three along and only retry the two that have failed.
That's partial batch acknowledgment.
So that's coming.
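The batch-settling logic Rob describes could be sketched as below. Since partial batch acknowledgment was still in flight at the time of this episode, the per-message ack()/retry() interface here is purely an assumed, illustrative shape, not the shipped API.

```typescript
// Hypothetical per-message interface: the feature was still being built,
// so ack() and retry() here are illustrative, not a real Queues API.
interface AckableMessage<T> { body: T; ack(): void; retry(): void; }

// Acknowledge the messages that succeeded and retry only the failures,
// instead of retrying the whole batch.
function settleBatch<T>(
  messages: AckableMessage<T>[],
  process: (body: T) => boolean,
): { acked: number; retried: number } {
  let acked = 0;
  let retried = 0;
  for (const m of messages) {
    if (process(m.body)) { m.ack(); acked += 1; }
    else { m.retry(); retried += 1; }
  }
  return { acked, retried };
}
```

With five messages where three succeed and two fail, only the two failures go back for redelivery, which is exactly the improvement over retrying the entire batch.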
Some other stuff has been asked for where we have to decide trade-offs: which is more important, a FIFO guarantee, first in, first out, strict ordering, or improving the scale?
Do you just need to get more messages in, and you'll handle the rest of it?
That's not an us call, that's a developer call.
So it really depends on your workloads.
Can you architect your workloads around, I always pronounce this word wrong, feel free to laugh at me, idempotency, or do you prefer for us to handle it?
Is it more developers or is it building faster? So that's going to be one that we look at. And then always, always, always: exposing more information.
So monitoring metrics, what's the status of your queue? How many messages are in flight?
Can I see them?
Can I add messages into the queue from the dashboard? These types of developer experience improvements are really close at hand for us and some of the things that you should expect to see first out the door during the beta.
Great.
I don't always like talking about this, but I know people want to know: can you share anything about pricing on Queues?
Is there free usage in the open beta?
What does the pricing model look like?
Sure.
Yeah, there is. So the pricing model is extremely simple.
The first, and I'm going to get this wrong, I believe it's the first million messages that are free, and that's per month for the lifetime of your use of the service.
Beyond that, it's $0.40 per million.
I'm sorry, I said messages; I mean operations.
Beyond that, it's $0.40 per million operations, and an operation is one of a read, write, or delete.
So a successful message passing through on the happy path is three operations.
You put it onto the queue, you read it off to process it, and you delete it from the queue because it was successful.
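Using the figures quoted here (one successful message is three operations, the first million operations free each month, $0.40 per million after that), a hypothetical monthly cost works out like this; treat these as beta-era numbers that may well change.

```typescript
// Beta-era pricing figures as quoted in the episode; may change over time.
const FREE_OPS = 1_000_000;
const PRICE_PER_MILLION_USD = 0.4;
const OPS_PER_MESSAGE = 3; // write + read + delete on the happy path

// Estimated monthly cost for a given number of successfully
// processed messages.
function monthlyCostUSD(messages: number): number {
  const ops = messages * OPS_PER_MESSAGE;
  const billable = Math.max(0, ops - FREE_OPS);
  return (billable / 1_000_000) * PRICE_PER_MILLION_USD;
}

// e.g. 10 million successful messages -> 30M ops -> 29M billable
// -> about $11.60 for the month
```

A small workload of 100,000 messages a month stays entirely within the free tier under these assumptions.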
There's currently no charge for data retention.
We limit that to one day.
There is never going to be any charge for data egress.
In this case, it's only egress into a worker.
So that makes a lot of sense.
But we're also figuring out how to make that true as we add on the ability to poll for messages from outside systems as well.
Excellent.
So we've got about 5 minutes left. I always like to ask people, is there anything I didn't ask that you really want people to know?
Shameless plugs are allowed here.
What are the parting words you would like to leave people with?
James, I'll start with you.
Yeah.
So with the standards stuff, the biggest thing that we have and the biggest thing we need are use cases.
We need to understand how people are using them.
And what we found is it's always the edge cases.
The people that are using things in unusual ways are where we find where we may not be compliant or where we need to make some improvements.
I love talking to people about those edge cases and what they're doing with the APIs.
So that's always something: folks, reach out.
For me, that's getting onto our GitHub tracker, the workerd issues there; it's in Cloudflare's GitHub organization, and you can find it there easily.
Having conversations there about how you're using these APIs.
And if you've found some weird edge case that doesn't quite work the right way, let's talk about it and let's get it fixed.
Great, Rob.
Parting words, shameless plug.
Yeah, absolutely.
That goes for whichever side you're on.
We live and die by your feedback.
So let us know which thing we should be building.
Let us know on Twitter, by email, or in the Discord.
All of those links are in the launch blog post, so I would definitely echo that.
I also want to echo one of our senior directors of product, Rita, who this morning tweeted about how you now have compute, messaging, and durable storage primitives, and there really is a better-together story here.
It's no accident that the launch blog post shows you how to aggregate writes to R2 from a Worker.
Right.
You're starting to see this unified developer experience that I don't think the competition does as well as we do.
And that's all about getting your software out there as fast as possible.
So just really consider how you can re architect or architect from the ground up to build with all of these together and look out for more of that building together story as we continue to grow.
That's great.
So, I just did a segment with Rita just prior to this one, and we kind of threw a challenge out there.
We were talking about how this week, in addition to all of our product announcements, we're highlighting a lot of how we've built things using the platform, how our customers are building things on us.
And we went through and said, we've got use cases and stories from lots of people on every single product except Queues, because it was only six weeks between the announcement at Birthday Week and where we are now.
So we threw a challenge out there.
Build stuff using Queues and share with us what you've built so that we can highlight it in our future Innovation Weeks.
One of the channels in our Discord that I spend a lot of time in is the what-I-built channel.
So people are constantly sharing like their GitHub repositories or their sites.
We have a built with Workers section on our website as well. We really do love highlighting all of these stories on Twitter, on our website, on our blogs.
So go build something and show us what you've built by the end of the week.
Maybe not the end of the week, I'll give you a little bit more time. But James, Rob, I really appreciate you taking the time to kick off Developer Week with us and share your announcements and insights into what's happening.
Thank you all very much.
Thank you.
Thank you.