💻 Developer Week: Remove friction and obstacles for developers
Presented by: Samuel Macleod, Adam Murray, Brendan Coll
Originally aired on December 19, 2023 @ 11:30 AM - 12:00 PM EST
Welcome to Cloudflare Developer Week 2023!
Cloudflare Developer Week runs May 15-19: our week-long series of new product announcements and events dedicated to enhancing the developer experience and fueling productivity!
Tune in all week for more news, announcements, and thought-provoking discussions!
Read the blog post:
- A whole new Quick Edit in Cloudflare Workers
- Improved local development with wrangler and workerd
Visit the Developer Week Hub for every announcement and CFTV episode — check back all week for more!
Transcript (Beta)
Hi there, my name is Adam Murray and welcome to Developer Week 2023. I am joined by Samuel and Brendan who are with the Workers Dev Prod team and I'll let them actually introduce themselves.
So hi, I'm Brendan. I'm a systems engineer at Cloudflare.
I primarily work on Miniflare, which is the local development environment for workers.
So that allows you to take your code and run it on your computer and it sort of simulates what it would run like if it was deployed to the Cloudflare platform.
Hi, I'm Samuel. I'm also a systems engineer at Cloudflare. I primarily work on Wrangler and the Cloudflare dashboard, and recently I've been working on making the Cloudflare dashboard's in-browser editor better.
Yeah, so here we're going to talk about a couple of announcements around specifically removing friction and obstacles for developers.
Those announcements came out today on our blog.
You can check those out. Specifically, the first one is a Wrangler v3 announcement, which is going to focus on Wrangler dev.
We're running local first.
We'll talk about that in a minute. And then the second announcement that we're going to focus on in this specific segment is a brand new Quick Edit in Workers, running on VS Code for the Web.
So Brendan, let's go ahead and dive into the improved local development with Wrangler and workerd.
I want to remind everybody, if you're watching the segment, you can submit questions for Q&A at the end.
So please go ahead and submit those. But let's talk about what actually released today.
What is Wrangler v3? Why do people care about this?
What's good about it? Yeah, of course. So previously with Wrangler, in Wrangler 2, we had two modes for development.
We had this remote mode and we had this local mode.
The remote mode was the default. And whenever you ran dev with that mode, it would take your code and upload it to the Cloudflare network and then proxy all your requests to a runner on the Cloudflare network.
We also had this local mode, which ran using Miniflare 2, which was essentially a reimplementation of the Workers runtime on Node.
So that was pretty nice because it meant that you didn't have to deploy to the Cloudflare network.
You didn't have to wait for your script to be uploaded.
You could use some non-production resources. You weren't being billed for all of your resource use, all of that stuff.
The problem with the local mode was that because it was sort of a reimplementation of the runtime on node, we had quite a few sort of behavior mismatches with the actual runtime.
So that would mean that when you were running your stuff locally, it might work and then you go to deploy it and then suddenly you'd see this error, which is kind of hard to debug because you don't know what's going on.
You haven't seen it before.
So that wasn't great. What we're doing with Wrangler 3, and more specifically Miniflare 3, is we're using the newly open-sourced Workers runtime, workerd.
So we are using the exact same runtime, more or less, that's running on the Cloudflare network.
But we're taking that and bundling it up locally. We're building simulators for the rest of the developer platform products and integrating them with the runtime.
So we give a much more accurate simulation locally.
And because the simulation is much more accurate and we have much more confidence in its ability to replicate the production environment, we're making that the default in Wrangler 3.
So when you run Wrangler Dev today, you'll actually get the local mode by default.
If you want the old remote mode, you can pass --remote instead.
But we think most users will want to stick with the local one now. Yeah, that's awesome.
So just to recap right now, one of the biggest things is this parity between local and production.
So right now, we're really wanting to deliver an experience that is just workers development wherever you're developing workers.
What you see on local is what you get on production. And we expect that to give you faster feedback loops, a greater debugging experience.
You're not going to have these kinds of weird errors that don't show up locally but then do show up when you publish to production.
That confidence is something that we really want to stress and really want to deliver for developers across the board.
The second thing, and Brendan, you mentioned this too, is the cost savings that you get by not running remote resources in your local development cycles.
So you're actually going to be saving not only those running resources, but also the costs that would be associated with some of them as you're doing rapid development, a lot of development.
Because honestly, as developers, we're hitting those things all the time, right?
We're just constantly doing that. So, yeah. Can you talk about speed a little bit, and performance?
I know that was another thing in the blog post, but what was something that we found with the speed?
Yeah, definitely. So with the old remote mode, when you were developing, every time you changed your script, you were uploading that script to the Workers runtime.
And then it was having to essentially reload your script and all that stuff.
With the local mode, the script never leaves your computer.
It's like when you run a Node script: you do node file.mjs.
It's essentially the same thing, but with workers instead of Node.
So that means we were able to achieve about a 10x reduction in startup times for Wrangler.
And the big thing is that whenever you reload your script, it's 60 times faster now, compared to the old remote implementation.
60 times faster? Yeah. Which is, which is pretty significant.
So that should greatly improve developer velocity and just make the whole development experience for workers much more fun.
Yeah, that's, that's awesome.
Can you talk a little bit about, you know, how you build a system like this?
Right? In the previous implementation, v2, it was a Node implementation, but now we're using the actual runtime, the actual Workers runtime.
So how do you go about beginning to build a system like this?
Yeah. So the nice thing is that we had Miniflare 2 to take inspiration from and steal some of the code from.
So we'd already done lots of the hard work of getting the rest of the developer platforms running locally.
So while we have the Workers runtime open sourced, things like KV and R2 and all of those are still proprietary implementations that we have internally at Cloudflare, and we haven't open sourced those.
So we still need to simulate them, which we did by basically copying the Miniflare 2 implementations, tidying them up a bit, and then linking them up with the Workers runtime.
The other big thing with the workers runtime is originally it was only really designed to run on Linux because that's what we use at Cloudflare.
But for local development, it kind of needs to run on whatever platform our users are using.
So we had to port the workers runtime to MacOS and also to Windows.
Windows was much harder because MacOS and Linux are slightly more similar, but we got there.
So the workers runtime now runs on Linux, MacOS and Windows and on Windows without Docker or WSL or anything like that, which is really...
That's a huge win for our users. Yeah, exactly.
One other thing we did with Miniflare 3, which was quite important, I think: Miniflare 2 used to have this storage system, because Miniflare 2's job at the end of the day is essentially to act as a simulator for all of these data storage products.
So we have KV, R2, D1. If you think about it, they're all fundamentally just different databases.
So Miniflare 2 had this common key-value storage abstraction that we used to implement all of them.
What was really nice about it is that it mapped all of your keys to directories and files on the disk.
So you could use your IDE's file tree view to inspect all your data and see it grouped into namespaces and stuff.
The problem with that approach is it led to some issues where you couldn't store every key that you wanted to.
So if you had a key like a/b, it would create a directory for a, which meant that you couldn't then store a key a on its own.
It was probably not worth the trade-off of having that nice inspectability.
What we did instead is we have this new storage system with Miniflare 3 where we use a SQLite database for storing all of this metadata and stuff.
And then we have a separate blob store for large values and cached values and responses and all those R2 objects that you upload.
It's a little bit more opaque, so it's slightly more difficult to debug what's going on just by looking at the files on disk.
But we think we might be able to make that a little bit easier with some IDE extensions that we're thinking about for the future.
So hopefully that will get easier. But I think the trade-off is definitely worth it because it means that you can now store every key that you would actually be able to store in the production runtime.
You can store much larger values because we're streaming everything now.
We're not buffering everything into memory and all of that.
And there are a bunch of other improvements, but there's a GitHub discussion link from the blog post, I think, that includes a lot more details and all of this stuff.
Yeah, that's awesome.
Can you also tell us a little bit more about Miniflare 2 to Miniflare 3? So we've talked Wrangler 2 to Wrangler 3, and that's a major version.
But Miniflare 2 to Miniflare 3 is also a major version.
Can you talk about maybe people that were using Miniflare 2 previously?
Are there things they need to do to get to Miniflare 3?
What are maybe some differences that exist right now that we're working on? So I think the biggest difference for most users is that Miniflare 3 no longer includes a standalone CLI.
What we'd really like people to do is start moving over to Wrangler and telling us ways that we can improve Wrangler so that they feel comfortable using that.
We know there are lots of people that still use MiniFlare on its own, just like running workers, but we think we have something good with Wrangler and we'd like to make it better.
So please migrate and let us know what you think. And if there are things we can do to improve Wrangler, please let us know because we do want to improve it and we do want to help you.
But other than that, there have been some API changes.
There are certain things that are quite difficult for us to support in Miniflare 3 because we're running the runtime in a separate process.
So before, in Miniflare, we had this ability to inject arbitrary JavaScript objects into the sandbox: you could put an object reference in your Miniflare options and it was copied through to the workers environment.
We can't really do that when the runtime is running in a separate process because JavaScript references don't really transfer across processes.
So stuff like that is not really possible, but it would be nice to keep it.
There are maybe some ways, some hacks we could do, but we think there are alternatives you can use to still enable that kind of stuff where you transfer data between node and workers.
So there are ways around it.
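One such workaround (a sketch of the general idea, not Miniflare's API; the option names are invented) is to send only serializable data across the process boundary and reattach behavior on the receiving side:

```javascript
// A live object with methods can't cross a process boundary, but its
// plain-data "wire form" can.
const liveOptions = {
  retries: 3,
  baseUrl: "https://example.com",
  onError: (e) => console.error(e), // a function: NOT transferable
};

// Keep only the JSON-safe parts for the cross-process hop.
const { onError, ...wireSafe } = liveOptions;
const wire = JSON.stringify(wireSafe);

// On the receiving side, parse the data and rebuild any behavior locally.
const received = JSON.parse(wire);
```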
But, yeah. And there's also a migration guide they can go to in the Miniflare docs.
There is a migration guide. Yep. Yep. Cool. Would it be possible to show a demo of what it looked like before, and maybe an error you wouldn't have seen previously but will see now?
Yeah, of course. Let's see if this works. So fingers crossed you can see my screen.
Yep. Cool. So what I've got here, let's see if I can move that. There we go.
Okay. So I have this worker here that makes a request to a network API.
And then let's say this fetch takes a few seconds and I want to cache it. What I'm doing here is I'm storing the promise I get back from fetch.
And then on subsequent requests, I'm reusing that promise and cloning the response.
So in theory, this should kind of work, because I'm storing the response and I'm cloning the response.
I'm kind of doing all the right things.
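The pattern being described might look roughly like this (an illustrative sketch, not the demo's actual code; a resolved Response stands in for the slow network fetch so it runs anywhere):

```javascript
// Cache the promise from the first request at module scope, then clone()
// the response for each request. Under Miniflare 2's Node-based simulator
// this appeared to work; the real Workers runtime rejects reusing a
// response created during one request from inside another ("Cannot
// perform I/O on behalf of a different request").
let cachedPromise = null;

async function handleRequest() {
  // Promise.resolve(new Response(...)) stands in for a slow fetch().
  cachedPromise ??= Promise.resolve(new Response("example body"));
  const response = await cachedPromise;
  return response.clone(); // clone so the body stays readable next time
}
```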
And in Miniflare 2, and Wrangler 2 with local mode, this worked. Oh, sorry, that's the wrong command.
Let's ignore that briefly.
Let's try... It's a demo. It's fine. Let's try that one instead.
There we go. Yeah, there we go. Okay. Example.com. So I can refresh this as much as I want.
I get the example response. And this is caching it as a JavaScript variable.
The problem is, if you try to deploy this code to production, it would not work.
Because when you have a response created in one worker request, you can't then reuse it in another worker request.
This is just a limitation of the runtime. It helps us improve performance.
But the problem with that restriction is it's very hard for us to emulate it purely in JavaScript.
And there are lots of these small minor things that worked in Miniflare 2 but won't work when you deploy, and are really annoying to debug.
So with Miniflare 3, because we're using the actual runtime, we don't have to do any additional work.
We get all of these edge cases for free. So if we make this request, we see it the first time, but if we make another request, we see this error: "Cannot perform I/O on behalf of a different request."
This is the exact same error you'd see when you deployed the worker.
And you're seeing it much, much earlier in the development process.
So, you know, okay, right. I know what I've done wrong.
And indeed, if you scroll down this error page, you'll see the line that's problematic.
And so this makes it much, much easier to debug, and just a much more fun experience.
The other advantage of using the runtime is that we use the same version of V8 that the deployed runtime uses.
Previously, when we were re-implementing workers on Node, we were using whatever V8 version your currently installed Node version had, which could be quite out of date compared to the Workers runtime, because the Workers runtime follows a very recent version of V8.
So this means modern JavaScript features, say findLast, for instance, that were only added recently:
if you tried to run those in local mode before, you'd get "findLast is not a function",
unless you had a very new version of Node which has this feature embedded.
Whereas now, if you run the new Wrangler with the new Miniflare 3 and workerd runtime, you'll get exactly the same V8 version that we're using in production, which has these new JavaScript features.
So that's no longer a problem.
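For reference, the feature mentioned here is Array.prototype.findLast, which scans an array from the end (the data below is invented for illustration; it needs a recent V8, e.g. Node 18+ or the current Workers runtime):

```javascript
// findLast returns the last element matching the predicate: here, the
// most recent successful deploy. On an older V8/Node this line throws
// "deploys.findLast is not a function", the error described above.
const deploys = [
  { id: 1, ok: true },
  { id: 2, ok: false },
  { id: 3, ok: true },
  { id: 4, ok: false },
];
const lastOk = deploys.findLast((d) => d.ok);
// → { id: 3, ok: true }
```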
That's awesome. Thanks for showing that. That's, those are two great demos.
So yeah, I just want to encourage everyone: to learn more about this, please go look at the blog post that was released today, "Improved local development with wrangler and workerd".
You can take a look at Wrangler v3, go to Discord.
There are all sorts of different ways you can engage, and let us know what you think.
And if there are any issues, let us know those too.
Okay. So let me shift now to Samuel.
Samuel, we released a new quick editor. Can you please tell us more about that, and how it's different from what was there before?
Yeah, definitely.
So previously in the dashboard, we had a way to edit workers. So you could like go to a page, um, see your deployed worker code, make edits, and then redeploy it.
That was kind of a very simple text editor. It was based on Monaco, which is the editor that VS Code uses.
But it was just kind of a single-file text editor, and didn't have much customization.
You couldn't have multiple files.
You couldn't have multiple modules. And yeah, that was pretty much it.
So what we've done instead is we've embedded VS Code for web into the dashboard.
So this is a much kind of more fully fledged development experience. It's something that users are very used to.
Most of our users are using VS Code locally, so using VS Code on the web is a very seamless switch.
Um, it lets us support things like multiple files, multiple modules.
It lets us do better type checking of your code.
We don't support TypeScript, but we do support types within JavaScript with JSDoc comments.
So you can still type check your code, and you can get useful error messages.
Um, and we have also upgraded the preview in the dashboard.
So previously it was using a kind of stripped-down preview that worked in the dashboard and in our unauthenticated preview service, cloudflareworkers.com.
But now what we're doing is we're actually taking your worker and running it on the edge.
So it's still a preview, only you can see it, but it's running on the actual edge environment, in the same way that wrangler dev --remote does.
So again, what you see is what you get. What you see is what you get.
Exactly. It's all about making your code run as it would when it's deployed, and catching errors much earlier.
But again, as I've learned, it's not actually deployed yet.
Just a preview. Just a preview.
Yes. Okay.
So would you mind walking us through a demo? Let's take a look and see: how do we get there?
What does it look like? I think in the demo too, we're going to see a little bit of our new UI.
One of the other announcements that we had today is the convergence of Pages and Workers.
And so that's been another thing our team's been heavily involved in, that work.
Yeah, I can show off that.
Let me start sharing my screen. Hopefully you can see it.
Yep. Excellent. Okay. So this is our new converged Pages and Workers overview page.
So this is a list of all of your Pages projects and all of your Workers projects.
This is not the point of this, but I'll just, you know, you can search.
No, we should show it. Yep. That search is now server-side.
It was previously client-side. So that's going to be a lot faster.
You can filter by Pages. You can just see your Pages projects.
I don't have any that match that search, but I do have ones that don't match it.
You can see all your Workers projects. You can sort by name, by last modified, kind of all the features you'd expect of a page that lists your projects.
So if I want to create an application, I can choose between Pages and Workers.
I'm going to go through Workers right now, but Pages works as well.
You know, if you want a full-stack application, Pages is a good bet.
Now you can create a worker here, which will give you a kind of bare bones worker with just a hello world template, or you can go through some of these templates.
I'm going to go to Workers KV To-Do, which is our most fully featured template.
This will also give you a KV namespace for your worker.
So I'll show you what code is going to be deployed. It's just an example worker that shows you a to-do list.
You can change the name before it gets deployed, and then I'll deploy that and you'll see the button loading.
And just like that, your worker is deployed to all of Cloudflare's locations.
You'll see this map; the contrast is a bit iffy in dark mode, but if you zoom in, you can see every single one of Cloudflare's locations that this worker is deployed to, which is really cool.
I mean, that's seconds and your application is all over the world.
So if I want to modify this worker now, I can click the edit code button and I'm taken straight to the quick editor, which will load up your worker code, and you can start editing.
And then on the right here, we've got a preview pane.
So this page has a lot of updates from what it used to be.
You can see on the left here, this huge pane: this is VS Code for Web.
This used to be just a text editor.
Now it's, you know, a fully fledged IDE. So I can see my code.
I can have multiple files. I've got an HTML template here, for instance.
This is an HTML file that's imported by my index file.
I can have multiple JavaScript files. Now, type checking is something that's really cool.
So you can see here that right now, if I hover over this request, it's typed as any, but we know that it should be typed as a Request.
That's the input to a worker. So if I give it some type annotations with JSDoc, I can say Request.
Now that Request there comes from the Cloudflare Workers types.
We've preloaded the type environment here with the runtime types for Cloudflare Workers; we generate these from the open-source runtime.
And so they're always accurate to what's actually deployed. So now if I type request followed by a dot, I can see I'm getting all the right completions: is this an array buffer, you know, get the body.
I'm also getting a syntax error, because a trailing request dot is not valid JavaScript syntax.
But yeah.
So let's get the URL, for instance, and you'll get the URL constructor prefilled.
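The JSDoc annotation being demonstrated looks something like this (a sketch: a plain object type stands in for the Workers Request type so it runs anywhere; in the editor you would write @param {Request} instead):

```javascript
/**
 * Without the annotation, `request` would be typed as `any`; with it,
 * the editor can offer completions like `request.url`.
 * @param {{ url: string }} request - stand-in for the Workers Request type
 * @returns {string} the pathname of the requested URL
 */
function getPath(request) {
  // new URL(...) parses the full request URL into its parts.
  return new URL(request.url).pathname;
}
```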
Excellent.
And the preview updates as you type. So sometimes you'll get syntax errors as you're typing; just keep typing and it'll work itself out.
That's what I do when I develop. Yeah, exactly. Just keep typing.
While I'm looking down there too, and I know you've got the rest of a demo, I'm also looking down here at the dev tools.
Can you talk about those?
I know we've added some updates to that recently as well.
Yeah. Let me try and zoom into that. I can't appear to zoom into that.
Can I zoom in? Does that help potentially? Maybe. Okay. Yeah. So this is our kind of newish dev tools.
This is a fork of Chrome DevTools, the front end that Chrome uses for its dev tools in the Chromium browser.
We forked it and we've added ten or so patches that add Cloudflare-specific functionality.
And so what this lets you do is, while you're running a preview session, you can see logs from your worker.
You can see any network calls your worker is making.
You can see the compiled source, which is what is actually deployed on the Cloudflare network.
So you can see here that this is the file that you've got.
And I mean, it's not very helpful in-browser, to be fair, because we're not actually doing these source transformations.
But locally, this is much more helpful, because you can see source maps and things like that.
But if I go here, you can see the console. You can do memory profiling in Chrome; not in Firefox.
That will be supported fairly soon.
Firefox should have a release very soon, we're hoping, that will fix that.
But you can do CPU profiling. So I press start CPU profiling and I start recording.
If I send lots and lots of requests to my worker and try to overload it a bit... sometimes when you send one request, it doesn't even register, because it's so fast to process these requests.
So hopefully this should be enough.
And if I stop, yeah, you can see a very helpful CPU trace.
So, let's see, can we recognize anything? Potentially not.
This requires more thought, but you can go through it and map it to your functions.
So this function here is getTodos.
You can map the trace to the actual functions in your worker, and figure out where the hotspots are, where the slowdowns are, and how to optimize your worker code.
Yeah.
So here's the preview; here's the response. And this is what it looks like.
Nice. You can also create new files from in here as well, right?
Yeah, definitely. So this is VS Code, right? Yeah, that's the thing. So this is, you know, the full VS Code experience.
So you can create new files.
Yeah, and export stuff; it's essentially VS Code. Yeah.
Yep. Uh huh. Yeah, definitely. So you can do export const in one file, and then in the index I can import that.
So what did I call it? Import. Now what's interesting here is you can hover over that and you'll see that its type is "hello".
Now this is really interesting, because currently VS Code for Web doesn't support type checking across files, but we've made some modifications to the TypeScript extension in VS Code to allow that on the web.
And so this is really nice.
You can have functions that are exported from one file and imported into another file, and type checking will work.
Nice. The other thing in here too, that we added was pretty error pages.
Um, could you show a demo of that?
Definitely. I'm forgetting all the features. No, that's fine. That's why we're all on the call.
Yeah. So if by chance you really wanted to throw an error in your worker, and you just didn't want anyone to get any useful information out of it, you could throw one right at the top of your worker.
I don't recommend this in production code, but you can do that.
And once this has refreshed (the preview reloads every second or so)...
Um, yeah. So you can see it's throwing an error.
You get a very nice error page showing the message that was thrown.
Right now that's the URL, because that's the message that we've thrown, and you can see exactly where that is in your code.
That's line 14, which maps to line 14 here.
And you can see the call stack. Yeah. So that's really helpful for debugging.
Interestingly, it's exactly the same error message you'll see locally with Miniflare.
So we're unifying that experience across local and remote development.
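The deliberate failure from the demo, sketched outside the runtime (in a real Worker you'd throw inside the fetch handler, and the pretty error page would render the message and stack; the function and URL below are invented for illustration):

```javascript
// err.message is what the error page displays, and err.stack is what
// maps back to the offending line in your source.
function handler(url) {
  throw new Error(url); // deliberately fail, with the request URL as message
}

let caught;
try {
  handler("https://example.com/");
} catch (err) {
  caught = err;
}
```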
That's great. One of the things too (you don't have to deploy this) is you'll notice we moved the Save and Deploy button.
So if you've been using the quick editor for a while, you've probably gotten used to it at the bottom of the editor.
We've moved it to the top right now, to give a little more visible space to the editor.
So I can save and deploy.
Okay. That'll take seconds and it's deployed. And then if I take this URL and open it in a new tab, it will throw an exception, because I added that throw statement right at the top. Let me take that out.
And I save and deploy again, which will also just take a second, and reload this worker, and there's my worker. That's, like, seconds and you're deployed globally.
Yep. We just have about three minutes left, but is there anything in that time you can quickly talk about regarding the internals?
I mean, we use workers a lot here, and I know we use them in this as well.
But anything else you can talk about, besides VS Code for the Web, as to how you built this?
Yeah, definitely. So, I mean, the major chunk is VS code for the web and we've embedded it in the dashboard and we're talking to it from the dashboard and sending files back and forth and things like that.
But there are also a fair few workers involved. The pretty error page, for instance: that is a worker.
We send a worker the details of the stack trace, it formats that as an error page, and then sends it back to the browser.
That's how we get that. And the edge preview as well. So what we've done to make workers run on the edge is we use a worker too.
The dashboard sends your worker code to a worker that we've written, which then sends it to the workers runtime.
Lots of workers involved. A lot of workers. Yeah.
But what's really nice about workers is it's really easy to spin up things like that.
You know, we need a simple piece of server-side logic, and we can.
So I think there are two or three workers involved in this whole process.
Yeah. So that's kind of how it works. What we're exploring going forward is seeing if we can take these workers that we're using for dashboard development and run them locally in workerd, and actually implement local mode with workers as well.
Which could be kind of cool, but that's definitely all future stuff.
Yeah. There's a lot, there's a, there's a lot that this opens the door to for the future.
Yeah. Like previously in Miniflare, all of the data store simulators are written in Node.js, but we could maybe consider writing those as workers, like as Durable Objects.
So you can imagine your KV namespace locally implemented as a Durable Object.
Which would be kind of cool.
So workers everywhere. Workers everywhere. Yeah. Great. Cool. Well, thanks for tuning in. Brendan, Samuel, thanks for the demos.
Thanks for talking through these announcements.
Developers, we really hope that this reduces friction.
We hope that this opens more doors for you to continue developing with workers.
And again, please reach out to us on Discord. Please reach out to us in the workers-sdk repo.
Please let us know if there are any issues you run into, or improvements that you see we can make to this.
Like we've said before, we are really committed to the developer experience, and to increasing velocity and your ability to use workers.
So thanks so much.
Thank you very much. Bye. Thank you.