💻 What Launched Today 2022
Presented by: Brendan Coll, Dawn Parzych, Jon Levine, Tanushree Sharma, Kabir Sikand
Originally aired on November 18, 2022 @ 12:00 PM - 12:30 PM EST
Join Cloudflare Director of Product Marketing Dawn Parzych, Systems Engineer Brendan Coll, Director of Product Management Jon Levine, Product Manager Kabir Sikand, and Product Manager Tanushree Sharma to learn more about what launched today for Developer Week.
Read the blog posts:
- How Cloudflare instruments services using Workers Analytics Engine
- Doubling down on local development with Workers: Miniflare meets workerd
- Improving Workers TypeScript support: accuracy, ergonomics and interoperability
- Send Cloudflare Workers logs to a destination of your choice with Workers Trace Events Logpush
Visit the Developer Week Hub for every announcement and CFTV episode — check back all week for more!
English
2022
Developer Week
Transcript (Beta)
Hello and welcome to What Launched Today during Developer Week. I am Dawn Parzych.
I'm the director of product marketing for the Developer Platform and this is our final segment on what launched today.
We're wrapping things up with a talk about observability and analytics, and how we're making the platform easier and more available for people to use.
So I'm going to ask everyone to introduce themselves quickly and share what they're here to talk about today.
Kabir, let's start with you.
Yeah.
Thanks for the intro. So I'm Kabir, a product manager on the Workers team.
You may have seen me before this week, but today I'm here to talk about the limits that we have increased.
We like to increase limits every Developer Week and make sure that there are none in the future.
So we'll talk a little bit about that.
Excellent.
Tanushree. Hello, everyone.
My name is Tanushree. I cannot believe that Developer Week is over.
But I think we've saved the best for last here.
I'm a product manager on the Workers team, and I'm here to talk about Workers Trace Events Logpush. Jon?
Hi everyone.
I'm Jon Levine, or JPL, a product manager here at Cloudflare. I work on our logs and analytics products, including Analytics Engine.
I would say we've saved the best for last, but my teams have had great announcements throughout the week, so we've saved the best for last and second and third.
It was all good. Yeah, so...
And Brendan.
Hi, I'm Brendan.
I'm a systems engineer on the Workers team, primarily focusing on local development experiences.
So I'm actually announcing two things today.
So we've got the new version of workers-types, which is our auto-generated TypeScript bindings for Workers.
And we also have Miniflare 3, which is the next major version of Miniflare, our fully local Cloudflare Workers simulator.
So excellent.
Great.
So we're going to circle back to you. So we said we've removed limits.
Like what limits did we remove?
Why did you remove them?
What does this allow people to do?
Yeah, So that's a great question.
I think a good way to kind of frame this is we've had a lot of use cases that work really, really well on Workers over the years and some of the products and features that we've built over the past year specifically help you bring more and more of your full applications onto the work or spot for things like any anytime your script is underneath a megabyte or you only have like 100 scripts, that was fine.
I think a good way to frame this is: we've had a lot of use cases that work really, really well on Workers over the years, and some of the products and features we've built over the past year specifically help you bring more and more of your full applications onto the Workers platform. Before, as long as your script was underneath a megabyte, or you only had around 100 scripts, that was fine.
That's still very powerful if you just break things up and really think about how you optimize things.
You can take full applications, bring them onto our platform and have them running in hours.
But what we noticed is that if you're just trying our platform out and you really want to see the power of how Workers scales really, really easily, or if you're using any sort of WebAssembly libraries, or really any kind of major, awesome use case, it doesn't necessarily fit under one megabyte.
And we just want you to be able to pop into the platform, take something that you've maybe built on other systems in the past, and just upload it to Workers, and it should just magically work. You can worry about optimizing it or segmenting it across your teams later.
We just want it to work within seconds.
And so that's what we did this week.
We increased the limit from one megabyte to five megabytes and that's after compression.
So that means there are really, really large files that you can put up on our edge, on our network.
And we also increased the number of scripts that you can have to 500.
And so that means you can have massive microservice architectures, you can have tens of developers uploading tens of scripts to Cloudflare's global network.
And really, we think of that as like a starting point.
So when we have these limits in place, historically it was kind of the cap of where you were going to go.
Really, these are kind of soft limits now.
So we have a form you can fill out.
You can just go to the limits page on developers.cloudflare.com.
There's a little form there and you can just ping us and ask, Hey, could you increase this or that limit for me?
And we'll get it removed for you.
So if you need more than 500 workers because you have a ton of little micro front ends for all of your applications or you have many, many teams working and building on our platform, we're more than happy to do that.
As a part of this announcement, we'd also like to highlight that if you are using hundreds of Workers and you're customizing them on a per-customer basis, we think we have other solutions, like Workers for Platforms, that might work really well for you too.
That helps you kind of orchestrate and reason about having hundreds or thousands or even millions of workers for your customers.
So a lot of really exciting things.
But kind of the big takeaway for me is you can build really complex microservice architectures or micro frontends, and you can do it all on Workers.
So any sort of WebAssembly dependencies that you want to bring in, you can bring in.
Now, one of our engineers put an esbuild-type build system on Workers, but with SWC.
I'm not actually personally familiar with it, but it's a Rust-based library that allows you to do builds on the edge, and you can actually just check it out.
We might tweet the link out.
I'll probably tweet the link out later today so you can find me on Twitter and and check it out.
It's live on Cloudflare's edge. Just seconds to upload.
Excellent.
Very cool. And I can imagine, as these limits are being removed and customers are putting up hundreds of Worker scripts, being able to track what those are doing, and the exceptions and errors in them, becomes a little bit more complex.
Tanushree, your announcement kind of plays into some of that.
That's a great segue.
Yeah.
Yeah.
Thanks, Dawn, for the intro there.
Yeah.
So our customers are building more and more Workers, we're unlocking more and more capabilities for them, and more workloads are being shifted over to us.
One of the big pain points that we've been hearing a lot is observability.
And now this is hard in general for any serverless technology.
You can no longer ssh into your own servers and pull logs directly from there.
Nor should you ever have to do that.
But we're here to make that easier and to make that a better experience for customers.
With serverless, you get perks, like you're able to deploy an application within minutes to even seconds.
If you've ever used Workers, you can type a script right into your browser and get a really simple HTTP handler built in seconds and deployed globally.
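As a rough illustration of the kind of handler being described here, a minimal Workers fetch handler in TypeScript (module syntax) might look like the sketch below; the specific script is illustrative, not something from the episode.

```ts
// Minimal Cloudflare Workers HTTP handler (module syntax).
// Deploying something like this with Wrangler takes seconds.
export default {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    // Respond with a small JSON payload describing the request.
    return new Response(JSON.stringify({ path: pathname, ok: true }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```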
But the hard part to that is observability.
And so that's where we're coming in.
Over the years, we've been building tools to help with this.
We've made wrangler dev, which is our local development environment, and Miniflare, which Brendan, who's actually on the call here, created.
Brendan, if you want to take a minute to talk about it and share some of the observability perks that come with using it, I'll let you plug that.
Yeah, sure.
So today what we're launching is the new version of Miniflare. Miniflare is our fully local Cloudflare Workers simulator.
Traditionally with wrangler dev, whenever you made changes to a file, it would upload that file to Cloudflare's edge and proxy all of the requests from your machine to Cloudflare's edge network, or Cloudflare's Supercloud.
And what we did with Miniflare instead is we rewrote all of the Workers runtime APIs in Node so that you can run your scripts completely locally on your machine.
And what this gives you is much, much easier access to things like a step-through debugger, which is really helpful for observing your code.
And we also added a couple of extra things, like detailed console logging, that really help you see what's going on with your code.
And because it's local, like it's very easy for us to reload those scripts really quickly.
And yeah, it was really just about improving the developer experience. That's awesome.
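For context, here is a rough sketch of Miniflare's programmatic API; the option names are based on Miniflare's documented Node API and may differ slightly between Miniflare 2 and 3.

```ts
import { Miniflare } from "miniflare";

// Run a Worker entirely locally: no code or data leaves this machine.
const mf = new Miniflare({
  modules: true,
  script: `export default {
    async fetch(request) {
      console.log("handling", request.url); // surfaced by local console logging
      return new Response("Hello from Miniflare!");
    }
  }`,
});

// Dispatch a request to the simulated Worker and read the response.
const res = await mf.dispatchFetch("http://localhost/");
console.log(await res.text()); // "Hello from Miniflare!"

// Clean up the simulator when done.
await mf.dispose();
```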
Yeah. And then another tool along those same lines that helps with observability is workerd, which is our open source Workers runtime.
So it's probably the closest you can get to running Workers without actually running on Cloudflare network.
It bridges the gap for cases where something is very close to our actual runtime but isn't exactly it, and then you notice bugs in production.
So that's another tool that's in our customers' hands.
And then the last one I'll mention here is Wrangler Tail, which is our live tail logging tool.
So any Workers invocations that come in, you can see them happening in real time.
There's a version of this both on the Cloudflare dashboard as well as through our CLI.
But one of the biggest things we've heard from customers is that they want to be able to store these logs.
They want to set up monitoring, they want to look back on errors that have happened as well.
So that's where Logpush comes in.
Logpush is our product for getting logs from the edge to a customer's configured destination.
And we have log push for other products today. Our biggest, most popular one is HTTP logs.
We also have firewall logs, audit logs, things like that.
So today we're adding Workers to that portfolio.
So Workers Logpush is in as of today, and if you're a paid customer, so that's on the Workers Paid plan or on an Enterprise plan, you get access to it.
It's super useful because it shows essentially all of the output of wrangler tail. Everything wrangler tail shows can be sent through Logpush, so it'll show metadata about a request, like the status code and the URL that was requested.
It'll show console.log messages that you have in your code as well as any uncaught exceptions.
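To make that concrete, below is a hedged sketch of a Worker whose console.log output and uncaught exceptions would show up at a Logpush destination. The setup noted in the comment (a `logpush` flag in wrangler.toml and a job for the Workers trace events dataset) is an assumption to verify against the Workers Logpush docs.

```ts
// Sketch of a Worker whose trace events flow to Logpush once logging is
// enabled for the script (assumed: `logpush = true` in wrangler.toml plus a
// Logpush job for the Workers trace events dataset; check the docs for details).
export default {
  async fetch(request: Request): Promise<Response> {
    // console.log lines are captured alongside request metadata
    // such as the URL and status code.
    console.log("incoming request", request.url);

    if (new URL(request.url).pathname === "/boom") {
      // Uncaught exceptions are captured too, so they can be inspected later
      // at the configured destination instead of only in a live tail.
      throw new Error("something went wrong");
    }
    return new Response("ok");
  },
};
```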
And we're looking to add things to that as well. So as you start using it, if you have feedback, please do let us know what else you're looking for around observability.
What else do you want to see in log push?
Jon, you did some analytics announcements as well, and logging, and how we're using them internally. Can you add some more details around those?
Yeah.
Yes, it was a big week for observability.
So maybe just a quick recap.
Earlier in the week we talked about a new product we announced called Logs Engine.
So the idea of Logs Engine is, if you're producing those logs we just mentioned, well, what do you actually do with them?
Logs Engine gives you a way to store them using R2.
And we have some very rudimentary APIs right now to retrieve those logs, but we're going to be building that out and kind of add the ability for you to actually query those logs and do stuff with them on Cloudflare itself, which we're really excited about.
So today we released a blog post giving an update on where we're at with Workers Analytics Engine.
You've heard a lot about logs and about inspecting the local runtime. Analytics Engine is a pretty unique product.
It's always tricky to talk about because it's very different from, I think, how a lot of other observability products work.
So what is Analytics Engine? Why did we build it?
So the basic problem is if you have workers running all over the world and they're all doing stuff, how do you get visibility into what they're doing?
And even more than that, how do you get visibility into what your actual products are doing that are behind the workers?
Right.
And so at Cloudflare, we've been solving this problem for a long time, right?
Because we have a CDN, we have a firewall, we have DNS, we have all these amazing products that run across our network.
And the basic approach we've taken here is that these products emit events.
So an event is when something happened in the world. An event is like: an HTTP request came into our network, or a DNS request, or a Worker ran, or a Worker made a subrequest.
And rather than try to produce metrics, we actually just store all the events, or we store a sample of the events, and then you can query those events and answer lots of interesting questions.
So you can say, well, how many requests did I get a second ago?
How much data transfer was there, or what were the top hostnames or URLs?
And this approach is really cool because unlike with kind of a traditional metrics approach, you're not limited by the cardinality of the data.
So every event can have unique values on it, like a unique URL; you can have very high dimensionality. We support something like 40 dimensions on these events, which is really cool.
And we provide this really flexible query layer.
You can just use SQL, and you can actually write SQL queries in a tool like Grafana and produce these really fast time series charts.
So that's a super quick preview of Analytics Engine.
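As a sketch of the write path described here, a Worker with an Analytics Engine dataset binding, assumed below to be named ANALYTICS with the AnalyticsEngineDataset type coming from @cloudflare/workers-types, might record one event per request like this:

```ts
// Assumed binding name; configured in wrangler.toml for the real script.
export interface Env {
  ANALYTICS: AnalyticsEngineDataset;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    // One event per request: string dimensions (even high-cardinality ones
    // like the path) go in blobs, numeric measurements in doubles, and a
    // sampling key in indexes.
    env.ANALYTICS.writeDataPoint({
      blobs: [url.hostname, url.pathname],
      doubles: [1],
      indexes: [url.hostname],
    });
    return new Response("recorded");
  },
};
// The stored events could then be queried over the SQL API, for example from
// Grafana, with something along the lines of:
//   SELECT blob2 AS path, count() FROM my_dataset GROUP BY path
```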
What we announced today is just a bunch of new functionality.
I think probably most notable is that we continue to develop the SQL API. We've basically written a new implementation of SQL.
It's a big undertaking, and we've added support for a lot of new, cool things you can do there.
And the blog post is kind of telling the story, and it's very meta, of how we use Analytics Engine to actually instrument Analytics Engine, which might be a little confusing. But basically, the SQL API itself is getting requests; people are writing queries.
So we use Analytics Engine to instrument this component of Analytics Engine, which is the SQL API service that runs those queries.
And we talk all about sort of how we did that and what we've learned from doing that, which is quite interesting and sort of product improvements we're making as a result of that.
And then we also announced the changes we made as a result of that to the SQL API.
So it's kind of a cool story. And it's always fascinating to hear how we use our own things to build.
And like that recursion loop, right?
We're using this to build this to learn more about this.
Yes.
And kudos to you for not getting distracted by my interruption.
It just adds that, you know, it's real life, you know?
Yeah.
Yeah. Anyways, we were very much customer zero, right?
It's a big part of how we build here.
That's great.
And we use a lot of our own tools, which helps us design them better.
We're trying to provide more and more information to our customers to be able to debug and understand what's happening.
And to me, a part of that is some of the TypeScript work that's been done as well.
So Brendan, your other announcement was around that, correct?
Yeah.
So we've been talking a lot about sort of keeping track of exceptions and stuff, but this is more about making sure that they don't happen in the first place.
So TypeScript makes it easier to write code that doesn't crash at runtime; it essentially tries to catch type errors before your program runs by type checking ahead of time.
So previously we had this thing called workers-types, which is our auto-generated TypeScript bindings for Workers.
So we launched that about a year ago.
And what we're doing today is we're just completely revamping it and we're improving it loads.
So one of the big things we're doing for the new version of workers-types is improving interoperability with TypeScript's own standard types.
So that means it's possible to use our types with frameworks like Remix that require you to type your files against the browser's standard types and also against our own types if you're hosting on Cloudflare.
Cloudflare also has a system for compatibility dates which ensures that we can make breaking changes in a backwards compatible way.
And previously those didn't really work very well with our types; the types were a sort of mishmash of different compatibility dates.
And no one was really sure which ones they represented.
So now what we're doing is we're generating a version of the types for each compatibility date that changes the type surface.
So you can make sure that your types correspond to your compatibility dates, which is nice.
We're also improving the integration with Wrangler as well.
So Wrangler is our Workers CLI, and in your wrangler.toml, which is our configuration file for Wrangler, you have to define all your bindings.
And previously, what you had to do is take those bindings and write TypeScript declarations for them as well, so that you could type check your code against those bindings, because TypeScript couldn't understand your wrangler.toml file.
So what we've done now is we've added a new Wrangler command, wrangler types, which will generate that TypeScript declaration file for you from your wrangler.toml.
So you can keep your wrangler.toml as a single source of truth.
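To illustrate, and this is a hypothetical example rather than the literal output of wrangler types, the generated bindings declaration and a Worker typed against it might look roughly like the sketch below, with KVNamespace and ExportedHandler coming from @cloudflare/workers-types.

```ts
// Roughly what a generated bindings declaration might look like for a
// wrangler.toml that defines a KV namespace and a plain-text var.
interface Env {
  MY_KV: KVNamespace;
  API_HOST: string;
}

// The Worker is then type checked against those bindings and against the
// runtime types for the chosen compatibility date.
const worker: ExportedHandler<Env> = {
  async fetch(request, env) {
    const cached = await env.MY_KV.get("greeting");
    return new Response(cached ?? `hello from ${env.API_HOST}`);
  },
};

export default worker;
```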
It sounds like we're making huge strides this week with the announcements you all have today, as well as the other announcements we've made during the week, on helping developers and improving the developer experience: giving them more tools to make sure things don't break,
or, if they do break, the ability to see them and query them and do all of these things.
So, a question for all of you: what's next? Where do we go from here?
Kabir, I'll start with you. Yeah.
So, I mean, I think there are a lot of things that we obviously can do next. But one of them for me, with regards to how we can support more of these use cases on Workers, is that I want to help folks understand and dig into any of the optimization work that they might have to do when they come to a new platform.
I think that it's really important that if you come to a platform like Cloudflare, you can just put some code up and it works, right?
And we can report if we think there are things that you can do that make your script faster or make it a better experience for your users or anything like that.
We can tell you that those things are improvements that you can make, but they're not blockers for you to get started.
I think it's important that you can come in and immediately just get started and get going and focus on building your application.
Because oftentimes those optimization pieces aren't the top of your mind.
The thing that's top of your mind is how do I solve a problem for my customer?
And then once I solve that problem.
I want it to be faster so that they don't go to a competitor or I want it to be faster so that it can reach all of my users across the globe, or I want it to scale a lot better.
And these are things that Cloudflare can solve for you to some extent.
And we can also help you optimize on your side as well.
So I think a lot of it is going to be around how we make it so that you can just plug in and go, and we can help guide you along the way to getting to a place where it's in the best state possible.
Tanushree.
I can jump in next here.
Yeah, along the lines of Workers observability:
One of our big goals is to improve a developer's velocity.
So when you run into an error, how much debugging do you have to do?
Are we able to help pinpoint where that's happening or are we able to give you the tools to help do that?
And the announcements that we've made today are a really good first stride towards that, but we have a long roadmap ahead of us.
Some of the things that are top of mind for me, at least in the short term for Workers Logpush specifically, are getting stack traces for Worker requests in there; basically, as much data as we can provide you about your Workers that you can access locally should be accessible through Logpush as well.
Today we only support fetch requests, and I'd like to expand that out to other types of requests as well.
If you have a cron job that's running, that's throwing an error.
You should be able to go to Logpush, see what's going wrong, and be able to debug that in your production traffic.
Stemming from that, we had a conversation just earlier about tracing as well.
I think that's another big gap that we want to address.
And as we're kind of making strides towards opening things up in workers, that's something that's top of mind for us as well.
It's something that we think about a lot internally and it's something that we want to provide customers so that they can keep up with their standards and add workers observability along with all of the other observability that you might have in your stack.
Get that flowing into the same place.
Maybe that's logs engine, maybe that's somewhere else.
But giving customers the option to do that.
This isn't really a what's next question, but I want to ask you.
So traditionally, only Enterprise customers could use Logpush.
So what's happening with what you just announced today?
Tell us about that.
Yeah.
Yeah. One of the big things, as we were building out this feature, Logpush for Workers, was that we have a lot of customers that are developers.
They're building their own.
They're building their first project.
They're hobbyists.
They're building their first startup on Workers. And so it was really important to us to open up Logpush to our Workers Paid plan as well.
The trade-off, and this is what it comes with, is how we are pricing Workers Logpush.
We wanted to keep pricing as simple as possible.
We need to be able to cover our own costs by providing the service.
So it's priced at $0.05 per million requests.
And if you're on our paid plan, the first 10 million requests are included in the $5 that you pay each month.
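As a rough worked example of that pricing as described here: a Worker that logs 30 million requests in a month would have the first 10 million included with the Workers Paid plan, and the remaining 20 million would cost 20 × $0.05 = $1.00 on top of the $5 monthly plan fee.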
So we're really excited that this is the first time Logpush has been exposed to non-Enterprise customers.
And this is a great spot to do that, because we have so many developers that aren't enterprises, but they might grow over the years.
They might grow to be at that level, and they need observability to be able to do that.
I want to give a shout out.
This is not a what's-next for your product, Tanushree, but just in general, this is a first step for us to have more logs products accessible to our non-Enterprise plans, so other paid plans on Cloudflare, and we definitely are intending to do that.
So we did announce logs engine. We are going to be bringing that to all paid customers.
And so stay tuned for more details about pricing and access to that.
Excellent.
Brendan, from your side: workerd, Wrangler, Miniflare.
Where to now?
Yeah. So, actually, I just need to make sort of a clarification.
Previously, Miniflare 2 was built on top of Node.js, but Miniflare 3, the thing we're announcing today, is actually built on top of workerd, which is the open source runtime.
So this is the same runtime that powers Cloudflare Workers.
And because we're now using exactly the same thing, as opposed to reimplementing all the Workers APIs, we don't get those behavior mismatches.
There are subtle differences between Miniflare and the real Workers runtime which are really annoying to debug.
So this gives us bug-for-bug compatibility and massively simplifies our implementation.
So we removed like over 50,000 lines of code from Miniflare, which is quite a lot.
And with all of this stuff, we're adding a bunch of new things as well.
So we've got local development with real data, which allows you to read and write from real namespaces on the Cloudflare network.
And we're thinking of trying to add that to other things, like R2, in the future, so that you don't have to mock out that data locally when you're testing. For Wrangler especially, what we're hoping to do soon is make local mode the default.
So rather than developing against the super cloud or whatever, you would now be developing on your local machine by default.
We're probably going to try to improve automated testing as well.
So Miniflare had testing environments for Jest and Vitest.
We want to bring those over to the new workerd-powered system. There are some additional complexities that we're still working out, but hopefully we'll get that soon.
And if we have time, we might even make some extensions for popular IDEs as well.
So there's a bunch of stuff that we can do.
On the automated testing front.
Quick shout-out to another new announcement this week, for those of you that may have missed it: the Browser Rendering API.
That's our ability to run Puppeteer for automated testing through a Worker.
So that's something maybe in the future that you'll be able to do locally as well.
Yeah, well, when I saw that announcement, I was like, Oh my gosh, how are we going to implement that locally?
That's going to be really fun. For your local Browser Isolation control.
I know anecdotally it's been requested for a while. Actually, I think about six or eight months ago, we had some folks write in about what they were doing with larger scripts on Workers.
And one of the common threads we saw was Puppeteer. They wanted to automate their testing or do something similar, and it's a pretty hefty package size.
I think it's like 60 or 80 megabytes when it's like stripped down to the smallest components.
But it's exciting to see that we can actually do that now on the Supercloud.
So I'd be excited to see it locally, too. I'm hoping we can just install the Puppeteer package and try to connect them somehow, but we'll see.
No pressure.
Brendan, yeah. I think Jon and I have been chatting this week about all the announcements, and I swear, like a day before an announcement would be made, somebody on Twitter would say, hey, it'd be great if I could run Puppeteer in a Worker, or use Logpush if I'm not Enterprise. How do they know these things are coming the day before we announce them?
It's so hard for me not to say, like, cough cough, tomorrow.
I want to write, like, a zipper-mouth emoji.
Yeah. Yeah.
And it could be, you know, whatever little scoop was there. Like, right before we did the post, they're like, why are you talking about this?
And I'm like, we like to give each announcement a little bit of time. Yeah.
Well, with this one, we actually changed the documentation at the beginning of the week, just because some folks were running into the limits.
And I just don't want folks to have to run into them at all.
So even though we didn't really drop our tweet about it, if you are following our documentation, you probably saw a few early hints.
So a little inside scoop early on.
Yeah, I think you've heard it here before, but like, the documentation was updated; you see this and think, what does this mean?
Give us a few days, we'll tell you what it means. Yeah, we've got about two minutes left.
So in those last 2 minutes, any parting words?
Something that you're like, Oh, wait, I forgot to say this.
Like, shameless plug for something else.
I have one thing that I want to plug.
If you're developing on Workers and you're curious about how the developer workflow and experience might look.
Yesterday we dropped an announcement on deployments, and we think it's just the beginning of where the developer experience is going to go.
How you might deploy a Worker with a large team or a small team, do testing, do things like canary and blue-green deployments. All of these things are really exciting, and we do them internally at Cloudflare. So we'd love to hear from you; the blog post on deployments has a little link at the bottom.
If you want to chat or even share thoughts or pain points or anything, just drop us a note there.
We even have a little Discord handle input, so we don't have to have a meeting; we can always just chat over Discord.
But that's my little plug in the last few minutes.
That's great.
Yeah, I was going to give a shout-out to the Discord. We're all on there, so if you have any specific ideas or suggestions, feel free to drop a message in whichever chat room is appropriate,
or DM us directly on there.
Yeah, super happy to talk to customers and we want to hear what you're excited about.
And then maybe just a couple of things on how to install these things.
So for workers-types, it's npm install @cloudflare/workers-types.
And then for Miniflare 3, it's integrated into Wrangler, so you can do wrangler dev --experimental-local, and that will use Miniflare 3.
Thanks. JP, you've got about 20 seconds.
Analytics engine getting better all the time.
Love your feedback and discord.
So many new things coming.
Stay tuned.
Great.
Thank you all for being here today and sharing the announcements.
I hope you've all enjoyed the announcements and these What Launched Today segments.
Have a great day.