💻 What Launched Today - Friday, May 19
Welcome to Cloudflare Developer Week 2023!
Cloudflare Developer Week (May 15-19) is our week-long series of new product announcements and events dedicated to enhancing the developer experience and fueling productivity!
Tune in all week for more news, announcements, and thought-provoking discussions!
Read the blog post:
- D1: We turned it up to 11
- More Node.js APIs in Cloudflare Workers — Streams, Path, StringDecoder
- Cloudflare Queues: messages at your speed with consumer concurrency and explicit acknowledgement
- Workers Browser Rendering API enters open beta
- Developer Week Performance Update: Spotlight on R2
Visit the Developer Week Hub for every announcement and CFTV episode — check back all week for more!
Good day, good day, everyone, and welcome to What Launched Today, our final day of Developer Week 2023.
My name is Leroy Lasenburg. I'm the product marketing manager for the developer platform.
And today I'm joined by my esteemed guests from the product management team.
Gentlemen, introduce yourselves.
Matt. Yeah. Hey, folks, I am Matt. I lead our databases product team and a few other pieces.
And I'm going to talk a little bit today about D1.
Celso. Hi, everyone. My name is Celso. I'm an engineering director based in Lisbon.
I run a couple of projects, one of them being the rendering APIs we're going to talk about today.
And Brendan. Hey, I'm Brendan, product manager for the Cloudflare Workers Runtime.
And last but definitely not least, Charlie. Hey, everybody, I'm Charlie.
I'm the product manager on KV and Queues. And I'm here to talk to you about the great things that are going on in Queues today.
Right. Now, we've launched some amazing announcements today.
So, we'll start with you, Matt. Matt, tell us what we launched today.
Yeah. So, for those of you that don't know, D1 is our serverless database.
It's based on SQLite. It uses the familiar SQL query language. And we launched D1 into open alpha last November.
And so, since then, we've had a ton of people using it, like thousands of databases, tons of people sort of banging against it.
We've been iterating and resolving bugs. But one of the core parts, and I think one of the interesting parts, about an alpha is that it's an alpha, right?
Part of that is learning, and part of that is being willing to know what to throw away.
And so we've learned a lot about the client API, about how people want lots and lots of databases, and about general query performance and general API performance.
One of the parts that we haven't been happy with is the storage part.
And so, over the last two or three months, we've been quietly working on a big new storage backend for D1.
And so, a really cool part of today's announcement is the ability to opt into that new storage mode.
And that'll become the default in the near future.
And it is so much faster than the previous model. I've been having a lot of fun building and prototyping against it behind the scenes.
With the bunch of representative queries that we have, it's about 36 to 37 times faster.
We test against Northwind Traders, a really popular demo data set that models an e-commerce site, and it's fairly large.
A query there took about 37.8 milliseconds with the previous version of D1, which we weren't happy with; we knew it was not fast enough. The new model returns that same query in about 1.8 milliseconds.
And we think we can actually get that down below a millisecond.
We kind of know where we're losing a couple of milliseconds there in our infrastructure.
So, as we like to say, we're kind of just getting started.
We also, obviously, have been really focused on making sure that D1 is competitive with a lot of the other options, not just against what we've built before, but against the other serverless databases folks are looking at.
And so, one of the really cool things about the new storage backend is these incredible performance increases: we're about 3.2 times faster than a couple of the really popular serverless Postgres providers.
Obviously, recognizing that, you know, benchmarks are benchmarks and it's always hard to find a query that represents everyone's use case.
But we have queries here that we use to benchmark ourselves as well.
And so, keeping that query latency down, being as fast if not faster than what folks are seeing from really popular Postgres or MySQL options is important for us as well.
So, performance is huge.
The system is way more reliable. We are going to be building on this for the future as well.
And so, everybody can start using that by passing the new experimental backend flag when they create a database.
That's all live today.
You should definitely check out the blog. And without stealing too much time, there's a bunch of extra pieces that we've been obviously iterating as well on the overall developer experience.
And so, we have formal support for JSON functions now, fully documented.
You can do a lot of really interesting stuff there by operating over JSON in your database, right?
So, that can be great if you maybe haven't defined a schema yet.
You want to kind of maybe treat D1 as a NoSQL database, right?
But you don't want to pull everything back into your application, slowing down your query response and forcing you to deal with all of these bytes you now have to serialize.
You can query into JSON objects, into JSON arrays.
You can update fields on the fly directly in your query, which is really powerful.
And for those of you that use something like Postgres and its JSON or JSONB functions, this will be really familiar, and it's equally powerful in many ways.
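As a quick sketch of what those JSON functions look like in practice: json_extract and json_set are standard SQLite JSON1 functions, which D1 builds on, but the table and column names here are invented for illustration.

```sql
-- Hypothetical table storing raw JSON events in a TEXT column.
CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT);

-- Query into a JSON object: pull a nested field out of each row.
SELECT json_extract(payload, '$.user.email') AS email
FROM events
WHERE json_extract(payload, '$.type') = 'signup';

-- Update a field on the fly, directly in the query,
-- without pulling the document back into your application.
UPDATE events
SET payload = json_set(payload, '$.processed', 1)
WHERE id = 42;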
We have a new console interface that actually lets you edit and sort of play with your data on the fly as well.
So, maybe if you're really familiar with the command line, that's great.
But maybe you're just sort of prototyping, getting your schema right, exploring your data, trying to understand, you know, do you need to sort of make any changes?
Maybe you're just learning those JSON functions, right?
It's nice to be able to jump into the Cloudflare dashboard, open up D1 database and start just querying it directly through the UI there as well.
And the last really fun one is what we call location hints. By default, we do something really smart:
we place the leader, the initial database that you create, close to you.
But you're not always in the place where you want the database to be.
Maybe you're creating it from a CI system. Maybe you've got a distributed team.
Maybe you want to create multiple databases in specific locations, right?
And so, we let you provide us a hint. And you might say, hey, Western North America, or Western Europe, or Eastern Europe, or APAC.
And we'll make sure the database is created sort of close to that region as well, particularly for write performance.
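Concretely, both the opt-in storage backend and a location hint are passed at creation time. A hedged sketch of what that looks like with Wrangler: the database name is made up, and flag spellings may evolve while the backend is experimental.

```sh
# Create a D1 database on the new storage backend,
# hinting that the leader should live in Western Europe.
npx wrangler d1 create my-database --experimental-backend --location=weur
```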
So, that is the big interesting piece. We've also announced pricing for D1.
And so, one of the really important things for us is that, for the folks who've been using the alpha, it's going to remain free.
It's the same thing we've been giving away in the alpha: going forward, 1 GB of storage and 10 databases will just be free forever.
And we'll be including a really solid amount of what we call read units and write units for obviously querying and writing to those databases.
I won't dive into all the pricing details.
Obviously, the hardest thing to explain on a Cloudflare TV session.
So, I would say check out the blog, read through it. Obviously, this isn't final.
Part of that's because we want to get more feedback from a broader set of folks over the next few months, so we can get pricing into the right place.
But we want it to just be really, really accessible to start building on D1, right?
That just has to be obvious. And so, we wanted to make sure the pricing aligned with that as well.
The last major piece, and this was a fun one.
So, in the original D1 system, we just took pretty traditional database snapshots.
We did that hourly: fully formed snapshots of the whole database.
And that's great, but what does a snapshot actually save you from? What if you break your database?
What if you drop a table, or you run a bad update?
A very classic one: you don't put a WHERE clause on your UPDATE, and you end up overwriting every row in your table with, you know, Brendan's email address, instead of updating just the row with Brendan's user ID.
And maybe the last backup ran 59 minutes ago and the next one runs in a minute.
So, you've got 59 minutes of history gone, and nothing you can do about it. Do you have to take a backup manually every time?
Like, it's just so easy to forget. And so, there's this concept called point-in-time recovery, and a lot of major database providers have it, but it's often horrendously expensive.
It can be fairly limited.
You've got to turn it on, and you're paying for it all the time as well. It really sort of adds up.
It's often a big multiplier on your overall costs by a significant amount.
But the way we've built the new D1 storage system lets us do that in a much more accessible way.
And so, what we're doing, and we're calling this time travel, it fundamentally lets you restore a D1 database to any minute within the last 30 days.
Every database has it. It's included in the cost of D1.
So, whether it's within our free tier or free included limits, or whether you're paying on top of that, we don't charge you extra for time travel.
And we're still iterating on the API for this, but fundamentally it lets you go back and say, hey, take me back to the last transaction just before this timestamp.
And then also considering exposing transaction ID. So, if you make a query, or maybe you make a schema change, anything you operate on the database, we can give you back a transaction ID.
And then you can take that, and you can go and reverse that by just telling us to roll back to a state before that transaction ID, or to even copy the database.
Branching is a really familiar term these days, and with D1, it's really easy for us to just give you 25, 30, 40 databases or more.
You can time travel and fork the database off, or branch it off, into an entirely fresh database.
So, you can have multiple copies.
You can keep your production database or the database that you have now.
You can make a branch based on a previous timestamp or transaction ID, and you can keep kind of branching those out.
That can be really useful for snapshotting, for testing, maybe for trying out a set of schema changes, and then you can kind of roll that back in as well.
So, we're really diving deep into the time travel part.
Again, we're trying to make sure folks don't run the risk of losing data, maybe because they're learning SQL, or just because, as you're iterating or building things, people make mistakes, right?
And it can be really punishing if we're telling you to have to make backups on your own.
So, I'm really excited about that.
I think that's going to be really interesting for D1.
There's a lot more coming. Definitely go read the blog post. There is a lot more coming over the next few months for D1.
And, yeah, we'd love more feedback. Awesome.
Thanks, Matt. Appreciate that. And today we announced additional support for Node.js APIs.
So, Brendan, can you speak to that announcement today?
And to that end, we've been doing some work over the past few months, and more is on the way, to support many APIs from Node.js in the Workers runtime itself, so that packages you bring in that rely on these APIs work on Workers out of the box.
So, back in March, we announced support for our first set of Node APIs, async local storage, event emitter, buffer, assert, and util.
And then today we are announcing support for the Node.js streams API, as well as path and string decoder.
And we're also doing some work on the Node crypto API, as well, that's coming very soon.
And so, together as a set, what this means is that increasingly, as you build Workers, if you enable the Node.js compatibility flag in your wrangler.toml, you'll have access to these APIs and be able to use them without adding your own polyfill implementations, where you take an implementation of that API and bundle it into your own worker code that gets uploaded to Cloudflare.
And so, we've been working through this set of APIs, and I think what this is going to start to unlock over time is that you shouldn't have to think so hard about, I want to use this dependency in my app, I want to pull in something from NPM, and I want to be able to use this in my worker, will this work on Cloudflare?
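As a small sketch of what this unlocks, here's a snippet using two of the newly supported modules, node:path and node:string_decoder. This exact code also runs under plain Node.js; in a Worker, the assumption is that you've first enabled the Node.js compatibility flag in wrangler.toml.

```javascript
// These imports resolve inside Workers with the Node.js
// compatibility flag enabled (and under plain Node.js as-is).
import { posix as path } from "node:path";
import { StringDecoder } from "node:string_decoder";

// path: join and normalize URL-style segments.
const full = path.join("/assets", "img", "..", "logo.svg"); // "/assets/logo.svg"

// StringDecoder: decode UTF-8 that arrives split across chunks,
// without breaking multi-byte characters at chunk boundaries.
const decoder = new StringDecoder("utf8");
const euro = Buffer.from([0xe2, 0x82, 0xac]); // the euro sign as three bytes
let text = decoder.write(euro.subarray(0, 1)); // incomplete sequence: ""
text += decoder.write(euro.subarray(1)); // sequence completes here
```

Packages that use these modules internally should work the same way, with no polyfill bundled into your worker.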
We're also working a lot with library authors, we mentioned earlier in the week our announcement around the TCP sockets API and how we've been working with library authors who are making kind of database drivers to make sure that those work on workers, as well.
As part of this, we're also exploring a new mode for Workers that makes it possible to access an environment that looks a lot more similar to Node.js in other ways.
So, one interesting thing about Node is that some APIs are available directly as globals, things like process.env, right?
And many libraries already assume that these Node globals exist, and when they're not present, they don't work.
And so, we're working towards a way of having a module within your worker that is of a Node.js module type; when a module you publish uses that type, it has those globals available, and it's possible to require a Node API directly without the node: specifier prefix that's otherwise required.
And so, we're just trying to make it a lot easier for things to work out of the box.
And so, that's already in our open source runtime, and we're working on how it integrates with the rest of our platform, with Wrangler and beyond.
And so, this is really a continued stream of work for us.
You'll see more APIs from Node.js landing over the coming kind of weeks and months.
And so, we're certainly not done here, and we want to keep going.
Awesome. Thank you, Brendan. And staying on the topic of APIs, Celso, we recently announced the Browser Rendering API.
Can you tell us about that?
Sure. So, the browser rendering APIs were announced a few months ago, and they allow you to instantiate a Chrome instance and control and instrument the browser from there.
And since we announced the product a few months ago, we've been basically improving developer experience and improving the APIs and using the product ourselves internally.
And one of the big use cases for browser rendering is actually one product that we launched a few weeks ago called the URL scanner.
So, if you go to radar.cloudflare.com and open the URL Scanner, that's actually using the browser rendering APIs that we announced a few weeks ago.
Until today, that's been running internally for us and for a few customers in closed beta.
Today, we announced that we're opening the APIs to all the customers on the wait list.
So, it's going to be an open beta of browser rendering.
And there's a couple of things we did in the meantime.
One is we launched the developer documentation for the API. So, if you go to developers.cloudflare.com and look for browser rendering, there's documentation on how to use this feature now.
We also published a public version of our Puppeteer fork.
Puppeteer is a library that you can use on top of Chromium (actually, on top of Chromium's DevTools APIs, but those are details). It's a high-level API library for controlling the browser.
It makes it much easier to instantiate the browser, open a page, go to a URL, and perform a number of operations or send events to the browser.
You can basically do anything with the browser that you would do normally and manually as a user.
And so, if you install our Puppeteer version, you can use it with the Browser Rendering API.
And you'll have access to the normal Puppeteer API that you would use in a regular setup with a local Chromium on your laptop or your server.
So, that was launched as well.
And finally, so, everything's ready to put this in the hands of our customers.
So, open beta means that everyone on the waiting list can start using this.
It's going to be free for our customers during the open beta.
And we're looking forward to seeing what customers start building with this.
So, there's a number of applications that come to mind.
Taking screenshots of pages is one example, but you can do web application testing.
You can convert pages to PDFs if you want to. You can use the browser to do performance metrics or complete scans of sites.
You can generate HAR files from pages, which is what we're doing with the URL Scanner.
So, there's really a huge number of use cases that we are enabling with this.
So, we're looking forward to seeing the feedback from customers from now on.
Great. And developers can share feedback in our developer Discord community as well as on our community forum.
There's a blog post up today with links to the wait list, if people want to join the open beta, or they can go to our developers Discord and just ask for access or talk to the team.
We'll be there listening. Yeah.
Great. Thanks, Celso. And one of the announcements that I'm really excited about is the update to our Queues messaging service.
Charlie, can you speak to that? Absolutely.
So, over on the Queues side of the house, we've been focusing on two things, right?
We want to get faster and we want to be more configurable for developers.
So, as a part of this effort, we've got two major new features that we're announcing here, right?
So, the first is consumer concurrency, right? And the idea is, prior to this feature, if you were processing a bunch of messages in your queue, it would happen one batch at a time.
It would be synchronous, right? It's like: I have this batch, I've processed it, it's done.
Let's go grab the next one. With consumer concurrency, many batches can be processed at the same time, all at once.
We'll spin up multiple consumers to have multiple batches processed at the same time.
Now, by default, this is automatic, right?
So, we have an algorithm that will decide how many consumers should be launched in order to most efficiently process the messages in your queue, right?
But if this doesn't work for you, you can also configure it so that you have whatever amount of consumers that you want in order to process this, right?
And that could be anything from like one to two to all the way up to 10 at this current time, right?
So, this gives you two different ways to handle things, right? If you want, you can go as fast as the system will let you, or you can configure it to be whatever you need for whatever your consumer is doing with your information, right?
So, if there's a certain speed limit there for where your consumer is putting those messages, you can adapt to that as well.
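In Wrangler terms, those consumer settings live in wrangler.toml. A hedged sketch, with a made-up queue name: leaving max_concurrency unset lets the platform scale consumers automatically, while setting it caps how many run at once.

```toml
[[queues.consumers]]
queue = "my-queue"        # hypothetical queue name
max_batch_size = 100      # up to 100 messages per batch
max_concurrency = 10      # cap concurrent consumer invocations at 10
```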
That's thing number one. Thing number two is we have something called explicit acknowledgement.
Now, when you're consuming messages from a queue, you have to either acknowledge or retry each message, right?
Either we got it, it worked, and we did the things we needed to do with this message, or it didn't.
So, let's send it back to the queue and try it again, right?
Now, previously, if I had a batch of, I don't know, 100 messages, and one of them failed, we would have to go reprocess all 100 of those messages, and that's a terrible experience, right?
That's not something developers want to deal with.
So, with explicit acknowledgement, we give developers the ability to have their consumer explicitly acknowledge each message in the batch, right?
So, you can say: for every message in this batch, do some kind of processing, send it someplace, and once I'm certain that I've done the right thing with this message, I then call message.ack(), right?
What this does is it tells the queue, hey, we're good, this message has been processed correctly, so you don't have to retry it, right?
And you can do this individually for each message.
Alternatively, if something goes wrong, if the processing doesn't work, if it doesn't get sent to the place it needs to go, whatever it is, right?
You can call message.retry, which will make this message go back to the queue individually.
So, in this way, instead of reprocessing the entire batch, whatever your batch size is, we just reprocess the messages that didn't work, and the ones that did work go to the right place so you don't have to process them again.
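To make the pattern concrete, here's a hedged sketch. The ack() and retry() method names follow today's Queues announcement, but the batch object here is a tiny in-memory stand-in invented so the behavior is visible outside a Worker, and processOne is a placeholder for your own handling logic.

```javascript
// Consumer logic: acknowledge or retry each message individually,
// so one failure no longer forces the whole batch to be redelivered.
async function handleBatch(batch, processOne) {
  for (const msg of batch.messages) {
    try {
      await processOne(msg.body);
      msg.ack();   // processed successfully: the queue won't redeliver it
    } catch {
      msg.retry(); // failed: only this message goes back to the queue
    }
  }
}

// Minimal in-memory stand-in for a message batch (illustration only).
function fakeBatch(bodies) {
  return {
    messages: bodies.map((body) => ({
      body,
      acked: false,
      retried: false,
      ack() { this.acked = true; },
      retry() { this.retried = true; },
    })),
  };
}

const batch = fakeBatch(["ok-1", "bad", "ok-2"]);
await handleBatch(batch, async (body) => {
  if (body === "bad") throw new Error("processing failed");
});
// Only the failing message ends up marked for retry; the others are acked.
```

In a real Worker, the same loop would live in your queue() handler, with the batch supplied by the platform instead of fakeBatch.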
So, these are the two features that we're really excited about.
It gives developers a lot more ability to configure and customize their Queues experience, and you can try them today.
Queues is in open beta right now, so if you go into your Workers dashboard, you can try it out. Anybody can use it, and we'd love to hear your feedback.
We'd love to hear what you think of this, what you're building on it, and how this helps you become a better developer through the power of Cloudflare.
Thank you, Charlie. Now, again, that was one of the most exciting announcements for Developer Week, for me personally.
And again, gentlemen, the idea of Developer Week was to, you know, enhance the developer experience to get developers on the platform building and starting to create many of these amazing web applications.
Can you guys just speak briefly about how all of these announcements, including announcements today, kind of enhanced that opportunity to be able to just get started in building?
Yeah, I can take that one. So, a huge focus for us is that we've been seeing people build really complex, increasingly complex applications, right?
We've seen folks at Major League Baseball use things like Durable Objects for what is sort of a second-screen experience, right?
Being able to sort of track other stats about the game and 3D views and other camera views.
We've seen people build, you know, multiplayer applications in the sense of, like, collaborative drawing applications.
It has been a lot of really interesting stuff, right?
And so, the complexity of what you can build on Cloudflare Workers has continued to increase.
But one of the really important parts is, like, well, we can't just kind of stop there, right?
And so, what other things do developers need to build? You know, it's certainly an overused term, but fundamentally, full-stack applications, right?
And that kind of ties into the stuff that Celso has just been talking about, around browser rendering.
Like, you might want to use the rendering API to go and generate thumbnails or screenshots, or to parse a rich website and go pull some data back out, right?
It's a super common use case.
Giving folks APIs that are actually really easy to use, not only means you can solve that problem, but actually makes it easier for others to go and solve that problem.
I don't have to be an expert in spinning up containers and having to manage those and manage their life cycles to go and do that.
And we think about what Charlie just talked about with queues, right?
You know, having fundamentally an async job queue, a service for passing messages, for removing that tight coupling between services.
Super important, right? And so, again, theme this week is, like, well, how do you sort of make those things easy to use, right?
So, you know, can you dial the concurrency up? Do you have better controls over that?
Do you have better controls over how you acknowledge messages?
Again, let's just sort of build these more complex applications. Obviously, D1, giving you a queryable database, has been a huge ask of Cloudflare for years.
You know, you've got KV, you have queues, you have R2, you have durable objects, things like the cache API, which is really powerful, but a lot of folks want a database that they can use SQL against.
And so, D1 fills that gap and, again, lets you kind of build richer applications.
And then, you know, this last part is, like, well, when you're writing code, right, you want to be able to use libraries and packages that you're familiar with.
And so, what Brendan has been speaking about, a massive focus there is: how do we keep all the amazing things about the Workers runtime that we have, while still making it compatible with the existing Node.js ecosystem, right?
And with a lot of the packages that people are really familiar with, that they know how to develop around, that maybe don't exist yet in the broader serverless environment, right?
That, again, is that part. If you have to go and invent your own libraries, if you have to rewrite code from scratch every single time, it just makes it harder to build complex applications, because you're spending more time writing utilities and tooling and libraries than actually building the app you're meant to be building.
And so, again, just adding a big theme for us, not just this week, but I think really going forward is, like, how do we just make it easy for you to go and build these rich applications?
So, again, messaging services, storage, API compatibility, you know, obviously databases and data storage.
And then a lot of the really interesting stuff, I think, like the browser rendering API is just, again, a good example of that, of, like, what are the other things that people are asking for that's not data, not an API, but is an interesting service that we can run for you, so you're not having to run all this infrastructure, right?
If I think serverless, I'm thinking not just the app itself or function as a service, but I'm thinking, like, what are all the other things that I would typically be running as an organization?
I don't want to have to run any of them.
I don't want to have to think about auto scaling or spinning up an EC2 instance.
So, yeah, I think just a big, big focus for us on the developer platform is solving all those issues.
All right. So, Brendan, Charlie, anything you guys would like to add?
Yeah, I mean, I think so much of this is about removing undifferentiated heavy lifting.
Like, that's the framing I like to use is, like, you know, when you're building software, there's so much extra work that happens in just configuring things, setting things up, figuring out what other services are out there, having to re-implement something that you know some other engineer out there has already done.
And, you know, the theme, one of the themes here I see is that we're trying to remove a lot of those barriers and say, like, we're just going to handle that as much as we can for you.
And then in the case of, you know, interoperability with the rest of the ecosystem, like, help you really get more leverage out of what other people are doing or have done already in the open source community.
And so much of this is, like, I see is how we can give you that leverage so that you can do more with fewer people.
You can focus more time on the ideas that you have, the products that you're building, and less time just dealing with all this stuff of configuring infrastructure that, I don't know, all of us who have worked on software have had to spend so much time doing, and it's never fun.
I would advise people to read a couple of blogs we did on how we built products like Radar or Wildebeest, and you can see that they're complex applications with multiple systems, including databases, storage, and we kind of explain all the steps on how you can actually build these kinds of applications just on top of Cloudflare APIs.
And it's very powerful because now, as Matt was saying, we have a full stack.
We have databases, storage, queuing, cron jobs, AI inference. I mean, it's a pretty complete stack.
And once you learn to use the workers as an alternative to traditional computing, it's very powerful and very fast to just build applications on top of us.
I think the main point, too, is that we're all developers, right?
We've all used a terrible SDK. We've all used an API that we hated.
So, we can empathize. We want to make tools that developers can feel work just right.
They feel just right when you use it. It's a great experience. It's not something that you have to struggle against.
And I think at Cloudflare, one of the things we really take pride in is making services that are intuitive, they're easy, they're understandable, and they get the job done in unique ways.
Great. Thank you, guys. Again, we have roughly 30 seconds left for our segment.
Truly appreciate you guys joining us today on what launched for Developer Week, our final day of Developer Week.
Again, thank you, guys. Be sure to check out our Discord developer community, as well as cloudflare.com/developer-week for all of the announcements we made this week.
Again, thank you, guys.
Truly appreciate it.