💻 Developer Week: Product Chat on Supporting Node.js, Enhancing Workers Dev Experience, and Database Partnerships
Join the Cloudflare Product Team to discuss everything new that we announced today for Developer Week!
Hi everyone, thanks so much for tuning in. We've had a lot of exciting announcements this week for Developer Week and I'm really excited to be here today with some folks from our product team to talk about everything we announced.
I'm Jen Vaccaro, I'm the Cloudflare Workers and Cloudflare Pages PMM and I'm joined here today with Abhi, Greg, Albert and Ashcon.
I'll give each of them a second just to introduce themselves and then we'll get started talking today about our new database partnerships, our journey to Node.js support and some updates we have on developer experience for workers.
So Abhi, do you want to kick us off and just introduce yourself?
Hi everyone, this is Abhi Das. I work on the Strategic Partnerships team, and specifically I work on a couple of our database partnerships at Cloudflare today.
Greg? Hey, I'm Greg McKeon. I'm a PM on the Workers team working on distributed data, which includes Durable Objects and KV.
I can go next.
I'm Ashcon. I'm the product manager for workers developer experience that includes things like Wrangler and the dashboard.
Hi, I'm Albert. I'm the Workers community manager, and I just talk to users all day.
So today we're going to get started by talking about what we launched with our database partnerships.
So just to kick us off here, Greg, can you share a little bit about some of the historic challenges that we've seen between running compute at the edge and also needing to store state?
And if you can walk us through some of those historic challenges and then talk about what we're doing here that's different and how some of these partnerships will enable our users to store state at the edge.
So I think it's actually helpful to go through an overview of what Workers are and how they differ from a traditional serverful environment, I guess, and then a serverless environment.
So in traditional application architecture, you have your server that's running your application and then generally you have a database that's really close next to it.
And sometimes there's a one-to-one mapping between those, right?
Like you have one application server and one database, or you have a few application servers, some small number of them, and they all use the database for coordination whenever they need to access state.
With Workers, by contrast, your code runs in every one of Cloudflare's locations. And so what that means is you have many, many more application instances than you used to.
We have customers who are running billions and billions of requests through Workers on top of us.
And so when you're in that world, the initial workers world that we were living in, everything was stateless because there was no way to coordinate across these millions of requests that might be active at once.
And it also sort of fundamentally shifts the actual architecture because you can't have millions of connections going into a single database, right?
Most databases can't handle that kind of load. So there's been this sort of tension and the way we've historically resolved it is by building our own products, right?
So what we've done is we've built out Workers KV, which is a globally distributed, eventually consistent data store.
And the idea there is basically to distribute all of the requests from all these different Workers across different instances of KV.
And that brings with it eventual consistency, right?
If you happen to talk to a cached view of KV that is outdated or hasn't seen one of the latest changes, you'll see stale data.
And for some applications, that works fine.
For others, it's not good enough: they need strong consistency guarantees.
And that's what motivated us to build out Durable Objects.
So think about the Cloudflare Workers environment: whenever a request comes into our edge, we'll process it wherever it lands, in that actual colo, on that metal.
So there may be millions of instances of your application running.
In the Durable Objects world, we actually will run a given durable object in just one place.
And then any worker that needs to talk to that specific durable object can forward a request and it'll be handled by one single durable object across the whole world.
And that gives you back some of the ordering guarantees and strong consistency guarantees that people typically use a database for, right?
So if I want to coordinate between multiple application servers, I'll send a request to the database and say, hey, store this value.
And the database will decide which request gets stored and in what order.
And in what order people view the data.
And so that's where you can think about durable objects fitting in.
And so with that, we've actually provided a strongly consistent storage API as well that you can access.
It doesn't include querying or anything like that.
It's a key-value API. But it gives you this intermediary step: you could forward a request on to your own API servers that you have running today, or on to some downstream database, or even to a database partner like the ones we're announcing.
So those are the two products that we've built ourselves.
And we both have really exciting roadmaps. We've built out some great stuff there.
We have a bunch of customers who are happily using them. But then today, you know, we've announced a few partnerships to help broaden that ecosystem and really add to the number of use cases that Workers can address.
That's great. And maybe before we jump into some of those partnerships, can you explain a little bit on like what are some of the example use cases that KV and durable objects are really well suited for?
And then we can kind of talk about some other use cases that maybe would be well suited for some of these other partnerships.
Yeah, so KV is really well suited when your data changes infrequently.
So it gives you really low-latency reads, and it does really well when you don't hit the eventual consistency issue, which is when you don't change a key-value pair very frequently.
So great examples of that are like configuration data.
So if you have to store some large table of like routes, for example, or authorization tokens or things like this that generally are written to once per key and then read frequently, that's where KV really excels.
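That write-once, read-often pattern can be sketched in a few lines. In a real Worker the routing table would live in a KV namespace binding and the lookup would be asynchronous (something like `await env.ROUTES.get(host)`); here a plain `Map` stands in for KV, and the binding name, hostnames, and origins are all hypothetical, so only the shape of the lookup is shown.

```javascript
// Hypothetical route table: written rarely, read on every request,
// which is exactly the access pattern where KV's cached,
// eventually consistent reads work well.
const ROUTES = new Map([
  ["blog.example.com", "https://origin-blog.internal"],
  ["shop.example.com", "https://origin-shop.internal"],
]);

function resolveOrigin(host) {
  // In a Worker: const origin = await env.ROUTES.get(host);
  return ROUTES.get(host) ?? "https://origin-default.internal";
}
```

Because each key is written once and then read many times, the eventual consistency window rarely matters for data like this.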
On the other hand, for durable objects, we're seeing people build entire applications on top of them.
We're seeing people build games. As we announced earlier this week, you can terminate WebSockets in a durable object.
So we're seeing people build game lobbies and chat applications.
One of the really cool use cases for durable objects is sort of layering them into your existing stack.
So like I mentioned, you have your own API server.
You can put a durable object in between your Workers and that API server, and it can cache requests and return cached responses from your API.
It also lets you layer on collaboration, real-time collaboration features.
So say you wanted a real-time chat room: you had an application that already implemented a chat room, but it wasn't real-time.
You had to refresh the page every time.
You could layer a durable object in there and sort of cache those messages in the durable object and then create a WebSocket connection from the client to the durable object.
And you've built up real-time communication without really adding much infrastructure at all, in tens of lines of code.
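The chat pattern Greg describes can be sketched as a small coordination class. In a real Durable Object, the clients would be WebSockets accepted by the object and the history would live in its storage API; here the clients are plain callbacks and the history is an array, which is an assumption made so the fan-out logic itself is visible and runnable.

```javascript
// Minimal sketch of a chat-room object: one instance buffers recent
// messages and fans them out to every connected client. Because a
// Durable Object is a single instance, all clients see the same
// ordering of messages.
class ChatRoom {
  constructor() {
    this.history = []; // cached recent messages
    this.clients = []; // connected clients (stand-ins for WebSockets)
  }
  connect(client) {
    this.clients.push(client);
    // Replay the cached history so a late joiner catches up.
    this.history.forEach((msg) => client(msg));
  }
  broadcast(msg) {
    this.history.push(msg);
    this.clients.forEach((client) => client(msg));
  }
}
```

The single-instance design is what makes this so short: there is no cross-region message bus to build, because every client's request is forwarded to the one object.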
So that's sort of the different use cases for those two, I think.
That's great. So those are the ones that KV and durable objects are really well-suited for.
Are there some use cases that maybe they're less well-suited for, where users would want to go to an external third-party database?
Yeah, sure. I mean, with KV in particular, there's one use case that's not well-suited for it, architecturally, which is when you can't work around the eventual consistency issues.
The second is both of these APIs are key value APIs.
So if you're trying to do analytical queries or any sort of long-running queries, you're not going to be able to do that.
You're going to have to replicate your data out to another system. And then finally, I think for durable objects, there's a few design considerations.
They're really powerful, but at the same time, you need to make sure that you're breaking up your application in the right way.
So durable objects scale out really well across object IDs.
So for example, if I was storing the users of my application in a durable object, my object ID would be perhaps their email or some random hash that uniquely identifies a user.
That's a pretty good use case because durable objects scale well across given IDs, but they don't scale well vertically.
So if I put 10 users into a single object and tried to drive a really high request rate to that single durable object, the object would slow down.
And the reason for that is because the object is single threaded to enforce ordering guarantees.
So there is a bit of an architectural decision to make there.
And I think making that decision right takes a bit of thought about how your application is going to grow.
And so I think that's just a caveat to be aware of.
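That scaling guidance can be made concrete with a tiny sketch. In a real Worker, you would derive an object ID with something like `NAMESPACE.idFromName(email)`; the `idForUser` helper and the `instances` map below are hypothetical stand-ins that just show the one-name-to-one-object mapping and why per-user IDs spread load while a shared ID concentrates it.

```javascript
// One Durable Object per user: each distinct name maps to its own
// single-threaded object, so traffic for different users never
// contends. Piling many users behind one ID would funnel all their
// requests through one object.
function idForUser(email) {
  return `user:${email.toLowerCase()}`; // stable, unique per user
}

const instances = new Map(); // which object instance handles which ID

function route(email) {
  const id = idForUser(email);
  if (!instances.has(id)) instances.set(id, { id, requests: 0 });
  const obj = instances.get(id);
  obj.requests += 1; // each object only sees its own user's traffic
  return obj;
}
```

The design question Greg raises is exactly this choice of naming: pick an ID scheme whose cardinality grows with your users, not one that concentrates hot traffic on a single object.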
Great. Sounds good. So now that we kind of have a little bit of background on what Cloudflare offers with Workers KV, with durable objects, Abhi, maybe you can tell us a little bit about what we've launched today as far as some of our new partnerships.
Yeah, thanks, Jen. Thanks, Greg, for the high level overview.
So today we launched two partnerships: one is with Macrometa and one is with Fauna.
Both of them are globally distributed edge databases that complement our offering today, from a database standpoint and a use case standpoint.
Macrometa specifically is peered across our points of presence in 20 different locations across the globe.
So if there's a request coming in from a specific location, it gets to our POP first and then to the nearby Macrometa location to process that request.
So it's faster, it's easier, it's developer friendly.
But zooming back a bit, the whole purpose of this partnership is really opening up the use cases that a developer can build on Cloudflare and Workers today.
It's really making sure that use cases that aren't supported today will be supported with these database partnerships: things like e-commerce, where you have requirements for different kinds of databases, like a search database, a graph database, a document DB, and things like that.
So it's really getting partners' help to augment that and create a complementary solution to serve a broader array of customers and use cases.
I know we've already had some great reception this morning from folks excited to have these simple integrations.
So you mentioned Macrometa and Fauna. I'm wondering if we can do a little bit of a deep dive on each one.
So Abhi, maybe if you want to talk a little bit more deeply on Macrometa, kind of exactly an overview of what the company is, how is it really well suited to complement Cloudflare workers?
And also, if you can touch a little bit more on the performance piece. I know you spoke to that in the blog, and I think people would be curious to hear more about it.
Yeah. So starting with Macrometa: they are a serverless, globally distributed, API-based, SDK-based data platform, where a Cloudflare Workers customer can literally call Macrometa through a single SDK that they have built, tightly integrated with Workers.
It's super easy to call. And it's a single multimodal API, with capabilities like a search database, stream processing, a document DB, and things of that sort, where a full-stack application would need those different capabilities behind one single API.
So it really puts the developer experience front and center, which is our motto as well: keeping developers first.
So that's one. And coming to the next part, latency: what we have seen so far with Macrometa, the 99th percentile of requests comes back in about 75 milliseconds.
And I'm going to do a demo a bit later that shows that combination between Cloudflare Workers and Macrometa, where most of the requests are served pretty fast, even for complex e-commerce search use cases or graph use cases.
And the benefit primarily comes from being fully distributed at the edge.
Some of the limitations Greg highlighted, which we've had as an industry from being in the cloud and in a single region, are what this addresses: Macrometa and Fauna being fully at the edge enables Cloudflare to serve those requests and keep the state near the POP, as opposed to one single origin.
But Abhi, you also mentioned that Macrometa is SDK-based.
So can you just share with us, like what are some of the SDKs that developers can use today?
Yeah. So they have actually built two SDKs.
One is the client SDK tightly integrated with Workers that I mentioned, and the second is a Dynamo-based client SDK they've built to work with AWS Dynamo-based applications.
And again, the code is fully on GitHub, and the links are in the blog.
People are free to go ahead and try it. It makes it super easy to make different kinds of database calls from a single point of reference; anybody involved in databases would understand that there are so many different kinds of databases to serve different kinds of requests.
And thinking a step back, the serverless aspect that Macrometa and Cloudflare both bring is taking the complexity of those different kinds of database requests and making it very simple with one API call.
And that's the SDK that they have built.
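To make the "one API call" idea concrete, here is a hedged sketch of what issuing a query from a Worker could look like at the HTTP level. The host, endpoint path, payload shape, auth header scheme, and the query text are all illustrative assumptions, not Macrometa's official SDK surface; in practice you would use the SDK described above. Building the request as data keeps the shape visible without a network call.

```javascript
// Hypothetical request builder for a query endpoint. Everything here
// (host, path, auth scheme) is an assumption for illustration only.
function buildQueryRequest(apiKey, query, bindVars = {}) {
  return {
    url: "https://api.example-macrometa-host.com/_api/cursor", // assumed host/path
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `apikey ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify({ query, bindVars }),
  };
}
```

From a Worker, the built request would then be passed to `fetch()`; the point is that one POST carries the whole query, whatever kind of database serves it on the other end.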
That's great. And so you touched on in the beginning when you were introducing our partnership, some of the new use cases.
You mentioned e-commerce. There are several more that you wrote about in the blog, like the Jamstack backend, IoT, ad tech.
What are some of those other ones? I'm sure I didn't name all of them just there.
Yeah. So yeah, there's a bunch of use cases that Cloudflare customers can cover now.
And before diving deep, taking a step back: all these use cases vary in complexity, even within a single use case. With e-commerce, you can have a very simple e-commerce application, right?
But you can also have a Best Buy kind of application, very deep in terms of the tech stack that's being used.
So there are many sub-use cases inside those buckets that can be fully covered with our own products today.
But if users are taking a step forward from there and want, let's say, a search-based model or an RDBMS-based model, or to use an existing database with their e-commerce application, then they can think of our partner solutions.
So to answer your question: there's e-commerce, data localization, gaming, ad tech, cybersecurity, IoT. But all of them center around the question: is having database capabilities and compute capabilities at the edge beneficial for that use case?
If yes, then it makes a lot of sense for that specific segment and use case to be fully at the edge, not only just the compute, but also the data operations as well.
Great. And you might've touched on this, but can you just remind our audience here, how would they actually go about integrating with Macrometa?
What resources have we provided and how would that integration look?
Yes. So there's good documentation that both Macrometa and Fauna have provided on what the integration looks like.
It's really creating an account with Cloudflare, creating an account with Macrometa, and then there are various CLI operations that are described on the use case page.
But it's super simple.
I think the reason it's a good match: in partnerships, and in M&A in general, there are a lot of companies out there, but in very few cases do you have one plus one equals three, right?
Because a lot of things have to match. In this case, all three companies have a similar vision about being developer-first, fully serverless, fully distributed.
A lot of things have sort of aligned.
So it makes sense in this case to partner with these players. And there's very good documentation on the blog on exactly how to integrate with them, plus there are a few drivers as well, which Greg can walk the audience through, for connecting to some larger databases like Dynamo and Aurora.
So before we change subjects to Fauna and some of the other partnerships that you mentioned, I have two more questions. Maybe Greg, you can answer, and then Abhi, I'd love for you to bring up the demo that you mentioned regarding e-commerce.
So two of the questions, Greg, I'm wondering if you can touch on is, can you explain a little bit of how Macrometa works behind the scenes?
I know that in the blog, we talked about how they combine typically disparate data services and APIs.
Maybe you can walk us through just that part a little. Yeah, sure.
So Macrometa under the hood basically replicates your data in a common format and then presents different query interfaces over that data.
What that means is that the actual replication of the data is happening at a different layer from the presentation, and that's how they're able to replicate across regions with relatively low latencies: they're not replicating your query or anything across regions.
They're just replicating the underlying data itself, if that makes sense.
As far as which interfaces they have, yeah, there's a KV API, there's a document database API similar to Mongo.
They have a DynamoDB API, a graph API, and then a search API as well for the database.
Great. So Abhi, I think we're coming up to the moment where you can show us a little bit about what this looks like in practice.
I'm just praying the demo god today is kind to me. So let me just share my screen.
Can you see the screen? Yep. Looks good. All right. So let me make it bigger.
All right. So what you're seeing here is really a bookstore application that's fully built on workers.
If you look at the URL, it's pretty evident.
It's built on Workers, KV, and the Macrometa solution. And it's full-fledged in terms of all the complex database operations that are required in an e-commerce application.
So a search database is needed when you're searching through millions of records of SKUs for different kinds of books and items.
The recommendation engine that runs in the backend is based on clickstream analysis; that's a clickstream database, and the analysis happens through a specific kind of database.
Then you have a graph database and a document database to store the catalog information.
Sometimes it's smaller images, but sometimes the images can be very high-resolution, larger images.
So under the hood, there's a lot of different kinds of capabilities required to support a full fledged e-commerce application.
So what we've done here is build the exact same bookstore application with Workers and the partner solution.
And we've also built the exact same application on some of the clouds, like AWS.
And we've compared what the performance looks like for different kinds of operations in both places.
So as you can see, and I don't know if it's too small, on the right is some of the latency data. Aziz, do you think you can maybe blow it up a little bit?
Yeah. Give me one minute.
Ah, what it does is it increases the left side along with the right side.
Yeah. So it's probably too small, but the latencies here are like 30, 60, 70 milliseconds.
In most cases, it's always double digits, which is impressive in terms of the operations it's doing, right?
It's not just pulling up images from cache on an edge CDN node; it's actually going to the database, querying, mutating data, and searching through different records and things like that.
So if I click on cookbooks, it actually loads all the product catalogs here within 60 milliseconds, 63 milliseconds, right?
If I go into a specific catalog, it will pull up specific information for that book.
And same thing happens across all these categories as well.
And if I go ahead and add to cart, and go ahead and basically do a checkout (it's just a demo), all of these kinds of operations are in double-digit milliseconds, which is very powerful.
Taking a step back, the point is that the entire application is built at the edge: with Cloudflare Workers and Macrometa, most of the requests are being served at the edge.
There's been a perception that these kinds of applications cannot be built on the Cloudflare CDN, and we want to take that perception away: you can build more and more complex use cases with Cloudflare products and partner solutions.
And I'll stop here. That's great. It's really impressive to see latency that low.
So thanks for sharing that and getting a visual.
Yep. So we've talked a bit about Macrometa, and I'm really excited to see what our users are able to build there.
Now I want to transition to Fauna. Before we get into some of the questions I had, we've gotten a question from the audience live and they asked, with this partnership, can we expect some kind of easy migration path between workers KV to Fauna and vice versa?
So I think we'll get to that question as we go through the Fauna piece, but maybe if Greg or Abhi, you want to say a note on this question in particular from the audience.
I'll let Greg take that.
Greg, there's an API for Fauna, right? For the existing database.
Yeah. So, I mean, I'm interested in what you mean by a migration path there.
As far as being able to read from KV and write to Fauna, that will certainly be supported. But an official migration tool or anything like that?
I don't think so.
I still see them as two separate products with different strengths. Yeah. And so I think there are different use cases for both.
Great. So why don't we talk a little bit more about Fauna and Abhi, could you give us just a little overview on exactly kind of what the company is in general?
I think the overview for Fauna is a bit different in terms of the company's position versus Macrometa.
Both are very much globally distributed, fully edge databases, but Fauna focuses more on the serverless aspect and an API-driven offering.
So they have a couple of APIs that are, again, very tightly integrated with Cloudflare Workers, but the focus here is really on being fully developer-friendly: users can make API calls to do complex operations. It takes out the complexity from 10, 12, 15 years back, when you had a database layer, a service layer, and a front-end layer, and the teams wouldn't talk to each other because everything was done in a silo.
So that complexity is taken out of the database layer, and through one API, their own API, all the database operations that might be required in the backend, indexing, partitioning and whatnot, are provided in a fully serverless manner.
That's sort of my input. Greg, feel free to add anything I missed.
Yeah, I mean, I would add into that. Basically the goal of Fauna, I think, is to have a data API that just works for your use case.
And so the way I'd characterize them against Macrometa is that they support transactions and strongly consistent access to your data.
And they've created this data API, which is sort of a novel idea: you push authorization and authentication out to the client.
In this case, a worker could be the client, but still supporting transactional access.
They have their own query language, FQL.
They also support GraphQL, giving you a way to do the same powerful things you're used to in a relational or on-prem database without worrying about any of the scaling concerns, as Abhi mentioned, but also pushing some of those features out into the client versus doing them in the database.
Great. And Greg, maybe if you can speak a little bit to how this will complement workers.
Yeah, no, I think it fits with the worker model really well, right?
It's a globally distributed backend that you can interface into from workers.
And as far as the authentication model, being able to do authenticated access either from a worker or from a client is great.
You can also utilize the cache API and Cloudflare to cache accesses to specific Fauna keys and rows.
So they fit together pretty well.
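The cache-plus-Fauna pairing Greg mentions is a classic read-through pattern, sketched below. A `Map` stands in for the Workers Cache API, and `fetchFromFauna` is a hypothetical stand-in for a real Fauna query (which would be asynchronous and use the Fauna driver); both are assumptions made so the read-through logic itself is runnable end to end.

```javascript
// Read-through cache sketch: check the cache before hitting the
// database, and populate the cache on a miss. In a Worker, the Map
// would be the Cache API and fetchFromFauna an awaited Fauna query.
const cache = new Map();

function cachedGet(key, fetchFromFauna) {
  if (cache.has(key)) {
    return { value: cache.get(key), hit: true }; // served from the edge cache
  }
  const value = fetchFromFauna(key); // real code would await a Fauna query here
  cache.set(key, value);
  return { value, hit: false };
}
```

The win is that repeated reads of the same key never leave the edge, while writes still go through Fauna's strongly consistent path; you would also want an expiry or invalidation rule, which is omitted here for brevity.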
Yeah. Great. And so what are some of the resources that we have with the integration?
I know I saw Fauna even put up, I think it was a tutorial or something like that, but maybe you can tell us a little bit about those resources.
Yeah, we have a tutorial up between the two, and so you can access that.
And the full Fauna API is available from within a Worker today.
So any of their getting-started tips and things like that will also work.
Just use the Fauna package, the Fauna driver, and you're good. Great. And speaking of use cases, Greg or Abhi, maybe you can tell us a little bit about the AI-powered voice assistant case study that you wrote about in the blog.
Yeah, I can give a two-liner.
Greg, feel free to add more of the technicalities behind it, but they've built a really cool application doing context-based search: based on what you're searching and what you're seeing, the search happens based on your past input, not just the last search that you did.
So that's the context in the AI-based output. They're using Cloudflare Workers at the edge to train those algorithms.
And at the same time, Fauna locations near our edge are gathering this data, making those algorithms smarter and smarter.
But it really shows the point that even AI-based applications, where there are a lot of data operations involved, can be built with Cloudflare Workers along with our own products or partner solutions.
And what are some of the other example use cases we might expect to see with Fauna?
Are they going to be the same as you described with Macrometa, with e-commerce and the Jamstack backend, or is there anything that will be unique to an integration with Fauna?
Yeah, I think the Fauna model is actually similar to something like Firebase.
So I think we'll see a lot of people using it in a similar way to that.
And there's the strong consistency piece as well. So when you need to run transactions, or you need a fully featured query language, I think we'll see people use Fauna for those kinds of use cases.
Great. And just to wrap up here on the partnership side, can you share a little bit about what other existing database integrations we have beyond just Fauna and Macrometa?
So these are two partners. I think the reason we announced strong partnerships here is because these are sort of technology platforms that we think are building in the same direction as workers.
That said, there also are many other databases out there that people want to connect to.
And so what we've done is we've talked about two in particular, both on AWS right now, DynamoDB and Aurora, which you can connect to over HTTP today.
So the new DynamoDB driver that's out, V3 of the Dynamo driver, works.
And the Aurora HTTP API, for Postgres or for MySQL, works today.
The main limitation for connecting to other databases is the fact that workers don't support TCP connections.
So if there are other databases that support HTTP connections, like Firebase, for example, you can connect to them.
It's just in Firebase, you have to use the HTTP API. There isn't full driver support, I don't believe.
So yeah, if a database exposes an HTTP endpoint, you most likely can connect to it.
But these are two more that we're adding with DynamoDB and Aurora, where we have examples in our docs.
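To illustrate what "connecting over HTTP" means at the wire level, here is a sketch of a DynamoDB `GetItem` call: DynamoDB speaks a JSON protocol over POST, with the operation selected by the `X-Amz-Target` header. The request below would still need AWS SigV4 signing before DynamoDB would accept it, and the table and key names are illustrative; in practice you would use the Dynamo driver mentioned above rather than hand-rolling this.

```javascript
// Build an (unsigned) DynamoDB GetItem request. The target header and
// content type are part of DynamoDB's low-level JSON protocol; the
// table name and key are hypothetical examples.
function buildGetItem(region, table, id) {
  return {
    url: `https://dynamodb.${region}.amazonaws.com/`,
    method: "POST",
    headers: {
      "content-type": "application/x-amz-json-1.0",
      "x-amz-target": "DynamoDB_20120810.GetItem",
      // An Authorization header with an AWS SigV4 signature goes here.
    },
    body: JSON.stringify({ TableName: table, Key: { id: { S: id } } }),
  };
}
```

Because this is plain HTTPS, a Worker can issue it with `fetch()` even without TCP socket support, which is exactly why HTTP-speaking databases work from Workers today.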
Great. Well, I'm excited to see what our users are able to do now with these integrations: our existing offerings with KV and Durable Objects, and now even more partnerships, just to give our users the best experience and meet them where they're at if they want to expand their use cases with third-party integrations.
So I'm excited to see what that enables. Folks can continue to feel free to add questions on Abhi and Greg's piece here on database partnerships, and we can try to get to them at the end.
But at this point, I want to pivot into two other exciting announcements that we had today.
One of them was announcing our journey to Node.js compatibility.
So Albert, maybe you can get us started there and just maybe start by giving us a quick overview, just depending on who the audience is that are tuning in.
But tell us a little bit about what Node.js is and what it will offer developers.
Yeah, absolutely. And that kind of ties into databases and what Greg and Abhi are doing to unlock more people.
The blog that got released today on Node.js support is just to publicize our roadmap.
It's not announcing new support for Node.js, but it covers what we do support today.
Node.js is a JavaScript runtime, and it's used by probably every enterprise or company you've heard of.
And the real value of Node.js is that it was able to take JavaScript code from the browser and move it onto the server almost magically.
What resulted is a large ecosystem of packages for Node.js in the NPM registry.
There are over a million packages for Node.js.
Now, where Workers comes in is that we support bundling with Webpack.
And through Webpack, there are about 20,000-plus packages you can run.
So you can see there's a lot of room for us to capture more of the packages out there.
So maybe you can tell us a little bit. Workers, we do offer some support today.
Maybe you can explain a little bit about what it looks like today. Totally.
For Workers, you can run any Node.js-dependent package if it works with Webpack or a bundler.
And what you get is that React, Gatsby, your favorite frameworks, are able to run with Workers.
And I know that you also have a very handy-dandy piece on our website where you share a bunch of different packages and whatnot.
You might be able to bring that up and show folks so they can check it out as well.
Yeah, totally. You can see my screen here.
So this is just a React site with Airtable inside where you can submit to Airtable and then your submission will pop up actually in this graph.
These are all community supported packages that run with workers. You got routing, you got authenticating, and a whole bunch of other stuff.
You can query to your API with GraphQL.
These are sort of what we get from the community and they should just work.
That's great. And I think while you have your screen up: in the blog, you pointed to an example with Gatsby.
Maybe you can quickly flash that so folks can see what it is and maybe check it out later.
Like I said earlier with deploying through Webpack, this allows you to pick your favorite Gatsby template.
You just follow the instructions here: make sure you download Wrangler, and make sure you generate the project with a Wrangler config file in it.
And then in that file, also make sure you add your account details.
This part is a little outdated; you should do a wrangler login instead.
And here, simply make sure you have Webpack set up and the project is linked to your Workers account.
There's also an easier way to do this with Cloudflare pages, which you should definitely check out.
But the site over here is also React compiled through Webpack. So it's just an example of the degree of Node module support that has taken us pretty far for three years.
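The setup Albert walks through boils down to a small Wrangler config along these lines. This is a sketch matching the Webpack-based flow described above; the project name is a placeholder and the account ID would come from your own dashboard (or, as noted, from `wrangler login`).

```toml
name = "my-gatsby-site"          # placeholder project name
type = "webpack"                 # have Wrangler bundle Node-style packages with Webpack
account_id = "<your account id>" # filled in from your Cloudflare dashboard
workers_dev = true               # deploy to a workers.dev subdomain
```

With that in place, `wrangler publish` bundles the site through Webpack and deploys it to your Workers account.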
Great. And so, maybe, Albert, you can touch on what are some of our next steps here.
I know in the blog, you talked about increasing worker sizes, supporting native APIs.
Maybe you can walk us through that a little bit. Yeah, absolutely.
So, a great way for people to tell us which Node.js-dependent packages or APIs they want us to support next is to vote on this site.
It's workers.cloudflare.com.
And here, you can also add to that as well.
The immediate ones that we know we'll build support at some point are the Stripe SDK, the Twilio SDK, as well as some popular database libraries, which, of course, whenever Greg and his team have bandwidth, that will be exciting.
Great. And so, do we have an idea on what our timeline is for this?
I know we're in the initial phases of gathering this data from our users, but do we have any sense of when we might be able to integrate with some of these packages?
Yeah. We have a lot of engineering work to do. Engineering work that I don't even understand, but I did ask our engineers how tough this would be.
Workers is also built on V8, just like Node.js, but we use different libraries for input and output, and we also wrap around V8 differently.
So, it's going to take some time, but hopefully by the end of this year, we'll have a really exciting update, and we'll prioritize the packages the community thinks are most important.
That's great. And is there anything else future looking?
I know right now we're going to be integrating just maybe these particular packages that are popular with our users.
Do you see the future will have full support beyond just these popular packages?
Yeah. I think with the current environment with workers, it gives you enough to build entirely stable applications, but with Node.js support, it will help people migrate their applications from other providers to us, so you can take advantage of our network and all the other great benefits that come with it.
I'll also jump in and say that particularly for developers who have used workers before and really want to use a certain library, definitely make sure to visit the website that we just linked to vote on libraries that we can look at.
One of the challenging parts of supporting Node.js is obviously the runtime is very big, and so we have to prioritize.
And so, if we have a good sense of the libraries and packages that matter most to you all, that will give us a better sense of where to start and what to do.
So, very much, please make sure to check out that website.
And it's also a great resource as well if you need tips on different libraries that work.
And that one is workers.cloudflare.com/node.
Is that right? Great. Yeah. So, folks can feel free to check that out, and hopefully we'll have more news in the next several quarters as we embark on this endeavor.
Albert, anything else here, or Ashcon, that you want to share on Node and kind of our journey?
No, I think we should move on to Ashcon's stuff.
Great, great. So, Ashcon, you've been very patient here, but we're really excited to talk about all of the new developer experience improvements that we launched today, and you wrote a great blog about that.
So, maybe you can just kick us off and tell us a little bit about what we launched, maybe by starting with the workers.new and tell us what it is and how it will help our users.
So, how we see Workers, and what customers tell us they really like about it, is that it solves problems quickly.
And so, on one side, we have applications that might use Wrangler, that have a lot of dependencies, and that you could maybe consider full stack, but there is also a good amount of Worker use cases that are very quick, easy fixes.
Things like you just deployed your website, but there's a broken link and you need a quick fix.
And so, Workers is also really great for those quick fixes.
And that's why we decided to launch workers.new, which is a very easy and quick way to create a new worker and launch it directly from the dashboard editor that we have.
That's great. Do you want to get that up and kind of show folks what that looks like?
Yeah. Let's take a look at this. So, all right. Can you see my screen?
Yep. Looks good. All right. So, I have my web browser right here.
And so, I have a webpack worker here. It's a very simple worker.
It's just going to return the current time. And so, what I can do is I'm going to go to workers.new.
And I automatically get redirected to pick my account.
Most people, they'll get sent directly through, but I have multiple accounts.
And so, I have to click which one. And I get taken immediately to the worker editor.
And I can go in here and change things. So, I'll say, you know, hello from Cloudflare TV.
And of course, you can add as many things as you need in here.
We also have a preview and HTTP editor. So, I'll go ahead and save and deploy.
And I can go ahead and check out my new worker. So, hello from Cloudflare TV.
And then we also have the preview and HTTP test here. So, if you don't want to try it on your web browser, there's an embedded way that you can quickly and immediately test changes.
And so, this is different than if some of you have used cloudflareworkers.com, what we often refer to as the preview service.
This code is actually running directly on our edge and will be the exact same experience for when you deploy an actual worker.
And in fact, this worker is running in more than 200 locations around the world.
And so, I think I'm not aware of a world record for the fastest time to deploy a serverless function.
But I'd have to wager we have a pretty good chance of breaking it.
Wow. That is very exciting. So, it does look quite a bit like the cloudflareworkers.com playground.
But like you said, the playground's more for testing and sandboxing, and with workers.new, you can distribute it and have that code run live.
So, those are the two main differences then, right?
Exactly. And because we have a really generous free plan, we highly recommend that people use workers.new when they want to test and play around with workers.
Even if you don't want to deploy something, you actually don't have to.
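A worker like the one in the demo can be sketched in a few lines. This is a hedged approximation, not the exact demo code: the greeting text and the split into a handleRequest helper are my own choices.

```javascript
// Minimal service-worker-style Worker like the one in the demo:
// it answers every request with a greeting and the current time.
// handleRequest is split out so the logic is easy to follow; the
// Workers runtime invokes it through the fetch event listener.
function handleRequest() {
  const now = new Date().toISOString();
  return new Response(`Hello from Cloudflare TV! It is ${now}`, {
    headers: { 'content-type': 'text/plain' },
  });
}

if (typeof addEventListener === 'function') {
  // Registered only when the runtime provides the global listener API.
  addEventListener('fetch', (event) => event.respondWith(handleRequest()));
}
```

Pasting something of this shape into the workers.new editor and hitting save and deploy is all it takes to get it live.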
That's great. And can you explain a little bit how did we come up with this?
Was this user feedback? Were we just, you know, constantly trying to find ways to improve the developer experience?
Or what was the process to come and make this come to life?
So, this was actually an idea from inside the team. And so, some of us were inspired by Google Docs, which has docs.new, a very similar concept that creates a new Google document.
And we really wanted to push and help people make workers really fast.
And so, that's why we got the domain and decided to make workers.new, which, fun fact, is built using a worker.
So, the workers.new worker is two, three lines of code.
And that's it. So, it's really exciting what we get to show off: using workers, built using workers.
Yeah. That's great. Very meta.
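Purely as a hypothetical illustration of how small such a redirect worker can be; the destination URL below is a made-up placeholder, not the real one behind workers.new:

```javascript
// Hypothetical sketch of a tiny redirect worker in the spirit of
// workers.new: every request gets a 302 pointing at the dashboard's
// "new worker" editor. The URL here is a placeholder.
const DESTINATION = 'https://dash.cloudflare.com/workers/new';

function handleRequest() {
  return Response.redirect(DESTINATION, 302);
}

if (typeof addEventListener === 'function') {
  // The Workers runtime invokes this listener for incoming requests.
  addEventListener('fetch', (event) => event.respondWith(handleRequest()));
}
```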
That's awesome. Anything more you want to tell us about the workers.new before we get into the next announcement, which was the customized build scripts?
I think we can get into the build scripts.
And for anyone that's used Wrangler: workers.new and the dashboard editor, as I mentioned at the outset, are really good for those smaller, quicker use cases, like modifying headers and redirects.
If you want to have a more robust application, you want to import an NPM library using the library list that Albert was talking about, that's where we recommend you use Wrangler, which is our command line interface that allows you to build and deploy your workers in a really seamless way.
And some of the feedback that we had gotten since we had released it was that people really want to customize their builds.
For some context, Wrangler previously had three kind of fixed build processes.
And that worked really well, and that served us well, and we will continue to support those project types for customers that really like it.
And so that's why we decided to release custom builds, which allow you to specify essentially any build command you want, plus the directory and files you want to upload, and Wrangler will automatically handle all of that and publish it like any worker today.
And can you tell us some tips for users to get started? I know you talked about needing to have a certain Wrangler release.
You had a couple points of documentation.
What are some of those tips to get started? Yeah, absolutely.
So to get started, first make sure that you have downloaded the latest Wrangler release, 1.16; 1.16 or newer will do.
And for the documentation, so we updated documentation this morning.
If you go to the Cloudflare Workers documentation, there's a section for Wrangler, CLI, and then there's a subsection for configuration.
And that will basically give you all of the possible options for the Wrangler configuration, which is the wrangler.toml file in your project.
And that will specify the different ways to use custom builds.
Right now, there are two somewhat distinct ways to use the custom build feature.
So the first one, and this is the one applicable to most people, will be what we're calling a service worker format.
And the service worker format: if you haven't heard of service workers, it's the web API that Workers was inspired by.
You can tell if something is a service worker if it has that little addEventListener, usually at the top.
And so for those projects, which are so far the majority of workers projects, we have a separate configuration section, which will let you build that worker, because it needs to be bundled into one file.
So the service workers need to be in one file. And then the second version is ES modules.
And ES modules, this is a new thing that we're starting to experiment and roll out with.
If you use durable objects, you need to use ES modules.
And that's part of what we have with custom builds. Now you'll be able to include multiple modules into your worker deployment.
So no longer do you have to bundle all into a single file.
You can upload a text module or a WASM module; we support a range of module types, and they're all in the documentation.
And so if you're opted into the durable objects beta, you will be able to upload workers using this format.
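A hedged sketch of what a module-format worker looks like, based on the format description above: handlers live on an exported object instead of being registered with addEventListener. In a real project file this object would be the `export default`; it is written as a plain const here so the snippet stays self-contained.

```javascript
// Sketch of the ES-modules ("module") worker format: the fetch
// handler is a method on an object rather than an event listener.
// In an actual module worker, this object is the file's default export.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(`module worker saw ${url.pathname}`, {
      headers: { 'content-type': 'text/plain' },
    });
  },
};
```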
And so those are the two distinct ways you can use the custom builds feature.
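Sketching the two configurations side by side, under the assumption that the wrangler.toml keys match the Wrangler 1.16 documentation mentioned above; the command and paths are placeholders:

```toml
# Hedged sketch of the two custom-build configurations.

# 1) Service-worker format: run any build command, upload one bundle.
[build]
command = "npm run build"

[build.upload]
format = "service-worker"

# 2) ES-modules format (required for Durable Objects): upload a
#    directory of modules instead of a single bundled file.
# [build.upload]
# format = "modules"
# dir = "dist"
# main = "./index.mjs"
```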
We're really excited about it, and we're really looking forward to getting some of your feedback.
We already started to get some of your feedback during an early release candidate period that ran for a couple of weeks prior to today.
And we look forward to building out support for both types of formats. Great. And was there any sort of feedback in particular that you took, or were able to implement from that early release period?
Yeah. For the early release period, one of the bigger changes we made, and to those of you who may have used the release candidate, make sure to check out the documentation for the latest differences.
One of the big pieces of feedback was being able to customize the types of modules.
As I mentioned, we also support importing things like text and array buffers.
And so, people want to customize the extensions and paths of the files they want to upload:
these are the text files, and they're this type.
And so, you'll be able to have full customization over exactly what gets uploaded and where.
And so, all of that, again, is in the documentation.
And if you want, you can add your own rules and customize it from there.
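A hedged sketch of what such a rule might look like in wrangler.toml. The `[[build.upload.rules]]` table and the type names reflect my reading of the Wrangler documentation, and the globs are placeholders:

```toml
# Placeholder module rules: tell the uploader which file extensions
# map to which module types.
[[build.upload.rules]]
type = "Text"
globs = ["**/*.txt", "**/*.md"]

[[build.upload.rules]]
type = "CompiledWasm"
globs = ["**/*.wasm"]
```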
So, we think for developers who are really excited to start off with ES modules, this is going to be a big, big improvement and a new chapter for workers.
Because ES modules are really the direction we're moving towards.
That's great. I had only known a little bit about this part.
So, I'm personally happy to hear some more information, and hopefully those listening on the line are as well.
Just, we only have a few more minutes here, but I do want to make sure we talk about the last piece of your blog, Ashcon.
And that was on viewing logs and exceptions. So, maybe you can tell us a little bit about what's new there.
Yeah. So, there are two parts there.
One is what we released today, and the other is a little teaser towards the end.
And so, one of the features of Wrangler is wrangler tail, which very simply allows you to see a live stream of logs from a worker in your terminal.
So, if you have a deployed worker and it does console.log or console.warn, all of those events go to your terminal, and you can see them live.
Previously, we had only supported a JSON format. So, when you ran wrangler tail, it would just output lines of JSON.
It's great if you're piping it to another destination or like you're saving it in a file.
It's not so great for reading directly.
And so, that's why we introduced the ability to have a pretty format.
So, you can now change the format of wrangler tail, which allows you to very cleanly and nicely see the logs from your worker.
So, that's part one, and that is in the release Wrangler 1.16.
You can try it out today. You just specify --format, and then either json or pretty.
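For example, assuming Wrangler 1.16 or newer is installed and you're inside a configured project:

```shell
# Stream live logs as raw JSON lines, e.g. for piping into a file
wrangler tail --format json > worker-logs.ndjson

# Stream live logs in the human-readable pretty format
wrangler tail --format pretty
```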
And then for the teaser, I think it's self-explanatory.
You know, one of the things we're starting to look at now that Workers has durable object support and WebSocket support: internally, we were thinking, well, now that we have that, it's a really great platform to build a logging system that feeds into the dashboard.
One of the reasons we don't have wrangler tail in the dashboard right now is because wrangler tail doesn't use WebSockets.
But if you have a single coordination point, or several, and you can use WebSockets in the browser, that enables us to bring logs into the dashboard.
So, that is a really exciting thing that you should expect our team to be coming out with soon.
We're still working on it. But as you can tell, there is a little bit of work, and we wanted to give a little preview for our developers today that's coming down the pipe.
That's exciting, yes. I saw that preview piece at the bottom, and it will be interesting.
Any idea on when we might be able to share more information on that preview?
Soon™. You know, in the coming months. Great, great. Well, that was very informative, Ashcon, and it's exciting to see, from Serverless Week to Developer Week, all that we're doing with developer experience really at the center of trying to make our users' lives easier.
And, like, you named some in particular with Wrangler and the workers.new.
Abhi and Greg talked a lot about partnerships, which is also to try and, you know, make developers' lives easier.
And the Node support is in that vein as well, and trying to simplify that experience.
So, it's great to see all of these pieces tying together.
Just in the last, we only have 40 seconds left.
We have one quick question for the audience. Albert, this is for you.
The question asks, do we have an ETA on the official Stripe support dropping in workers?
That was mentioned in the blog post. I don't know the official ETA, but just hop into the Workers Discord.
You'll get the latest news there. That's great. And, yeah, that's a good plug.
If folks want to continue chatting about this, hop into our Discord.
We have a lot going on there. But let's just wrap up now. And thanks, everyone, for tuning in.
We still have some announcements coming out tomorrow for Developer Week.
So, don't tune out yet. And thanks, everyone, for joining Abhi, Albert, Greg, and Ashcon.
It was a great session, and I learned a lot. Thanks, everyone.