Cloudflare TV

Introducing Workers Unbound: Unprecedented Access to Power On the Edge

Presented by Jen Vaccaro, Nancy Gao
Originally aired on 

Learn about the newly announced Workers Unbound platform, which allows customers to now build even their most compute-intensive workloads on the edge with Cloudflare Workers.

English
Serverless
Cloudflare Workers

Transcript (Beta)

All right, Nancy, I think we can get started. How are you doing today? Hey, Jen, I'm doing really well.

It's a beautiful Monday. Today's actually also my birthday.

Happy birthday. Wow. So this is a very exciting day for you. You have your first product launch with Cloudflare and your birthday.

That is very cool.

Yeah, it's exciting on a couple of fronts. How are you? How are you doing right now?

I'm doing well. Yeah, can't complain. Definitely very excited for Serverless Week and all the announcements that we have coming out this week, and very excited to share and talk with you about what we have coming up on your side.

I do want to introduce both of us because this is our debut on Cloudflare TV.

So I thought we could start out and you could share a little bit about yourself and then I can do the same.

Sounds good. Well, I'm happy to introduce myself.

I am Nancy and I am from Wisconsin. I grew up here and moved out to the Bay Area after graduating with a degree in computer science.

And I started out my career kind of working in consumer tech.

And I came from Google before this.

And I recently took a pause on that, actually, to get my MBA, which I just finished at Harvard.

And now with coronavirus and everything crazy happening in the world, I'm happy to be working at Cloudflare and resettled in the Bay Area, which is where I think I'll be for the long run.

How about you? Can you tell us a bit about yourself and what it's been like to onboard during these like very strange times?

Yeah, yeah. So I'm the new product marketing manager for Workers.

I did onboard, as you said, during COVID. And I came from Intel, where I was doing product marketing for some of their server products, power products for servers.

So it's pretty exciting to now be on the serverless side of the world, and I'm definitely excited to be here and be part of the team.

So why don't we get started talking about our new and exciting announcement that went out today?

And yeah, so why don't you share with us what you've been working on for serverless and what we've announced today?

Yeah, so this is a really exciting time to have joined the Workers team.

Workers is in the process of kind of rebranding itself, as some of you might have seen or heard in the blog posts and press release.

Serverless Week is trying to challenge some of those long-held assumptions that people might have about what a serverless technology can be used for and what its limitations are.

And I think Cloudflare, and Workers especially, is trying to challenge those assumptions.

My product specifically, Workers Unbound, was launched this morning in a blog post.

And we want to extend our new feature out to the general public via a private beta.

And what it is, really, is trying to make serverless technology even more powerful and more performant for any workload that you might have running on Workers, especially long-running and complex tasks.

So yeah, that's very exciting, especially because, before this, we had more restrictions on the CPU time.

And so I think it will be really exciting to unlock and extend those restrictions so our customers can really start building out their heavy, meaty platforms and not be constrained by the CPU time limit.

So that's really exciting. Nancy, why don't you share with us a little bit?

What do you see as the benefits of Workers Unbound that our customers can look forward to?

And then maybe we can talk a little bit about how Workers Unbound is different from what we've had in the past?

For sure. I think the origin of Workers Unbound honestly came from the customers in the beginning.

As product managers and engineers on the team know, we get a lot of requests from our most loyal clients asking us to extend our CPU limits because they want to run complex algorithms or search functions or things like image processing and resizing, which are things that can take a very long time.

And people would come up against the 50-millisecond time limit and would need more compute power.

And we've done that kind of on an ad hoc basis just by people reaching out.

But as these requests started coming in more and more, and we started to see them more often, we thought this is probably something that the general market would also want.

And people all across the spectrum can benefit. So I think this is a really cool thing that really started with the users and customers giving us feedback, us really hearing that, and bringing something to market that people have made clear they really want.

So the main difference between Workers Unbound and Workers Bundled, which is the legacy offering, is that we've raised the limits from 50 milliseconds to our first target, which is going to be 30 seconds of compute time.

I actually saw a really interesting industry report that came out of Datadog that said the bulk of requests are really around that 800-millisecond mark.

And so, you know, extending our limits up to 30 seconds really covers that majority of it.

And of course, there are still going to be some at the long tail that will need more compute time.

But our eventual target is to get to 15 minutes, which we think can cover the full spectrum of most typical workloads and most heavy compute burdens.
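To give a concrete sense of the kind of task that runs up against a 50-millisecond cap, here is a minimal sketch of a deliberately CPU-heavy Worker in module syntax. It is purely illustrative and not code from the launch; the repeated hashing loop is a hypothetical stand-in for work like image resizing or search ranking.

    // A minimal sketch of a deliberately CPU-heavy Worker (module syntax).
    // The repeated hashing is a hypothetical stand-in for long-running compute
    // such as image resizing or search ranking; it is not code from the launch.
    export default {
      async fetch(request: Request): Promise<Response> {
        const body = await request.arrayBuffer();

        // Run many SHA-256 passes over the request body to burn CPU time.
        let digest: ArrayBuffer = body;
        for (let i = 0; i < 1_000; i++) {
          digest = await crypto.subtle.digest("SHA-256", digest);
        }

        // Return the final digest as hex.
        const hex = [...new Uint8Array(digest)]
          .map((b) => b.toString(16).padStart(2, "0"))
          .join("");
        return new Response(hex, { headers: { "content-type": "text/plain" } });
      },
    };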

Yeah, that's great. We've already had some CSMs pinging us, because their customers are already excited to start building out and unlocking those limits.

So that's really exciting. And we can start to see a lot more complex workloads coming out of our customers now.

So that will be great.

Nancy, why don't you share with us a little bit like what is going on behind the scenes that is enabling these benefits for our customers?

Yeah, so I would say one big benefit, which we kind of touched on a little bit, is performance.

And then the second benefit and a big reason to choose workers unbound, honestly, is the cost savings.

And I think our engineers have given so much thought to efficiency and performance in the runtime, so much so that we reap a lot of cost savings on Cloudflare's side and can then pass those down to the consumer. It's really nice that we were able to save money on our variable costs and make sure that our customers get to feel that too.

So I think that one of the main reasons we're able to be so cheap, honestly, is that we've kind of rethought the old paradigm.

We haven't built anything on containers. A lot of our competitors have inherited the kind of container structure from the legacy offerings they've had.

But we've tried to rethink that by building more on our edge and running our code on V8 isolates.

V8 actually comes out of Google Chrome, which is a product I used to work on before I joined Cloudflare.

And it's the engine that powers JavaScript in the browser.

And the main benefits come from isolates, which are the core foundation of our infrastructure.

So there's a lot of benefits that you get from running on isolates because it has so much less overhead than running on a virtual machine.

We have much shorter cold start times.

So therefore, we aren't charging you for that time it takes to make the request and get a response.

We keep it warm at all times. Isolates allow you to run hundreds or even thousands of processes simultaneously, all very lightweight.

So we're able to save on that. Gotcha. So basically, part of it is that with isolates having no cold start time, you're not charged for that time on the CPU.

Is that what it is? Nice. That's very exciting. So can you share a little bit? You started touching on it, and we know it's going to enable a lot more compute-intensive and complex workloads.

Can we talk a little bit about some target workloads that we have in mind or that we're already starting to see our customers excited about with this new Workers Unbound?

Yeah, I think the world is kind of really open.

And I am honestly excited to see what our customers come up with.

I mean, some of the use cases that we've thought of are the really big, general buckets.

But I know that there are developers out there that have like cool things and new products in mind that we haven't heard of.

I actually just had a customer call the other week with a guy in India who was running a live streaming platform, which I think would be an interesting space to explore with video streaming and image resizing, that kind of thing.

Those functions definitely need long compute times and are fairly CPU intensive.

So I think that'll be really cool.

Other than that, I feel like there's a lot of potential out there, maybe even in the machine learning space, for people that need to train on data or to scrape information.

I think the world is really the developer's oyster and I'm just excited to see what they come up with.

So I don't know, we'll find out. Yeah, yeah, that's exciting.

And especially being able to open up some machine learning, whether it's for inference or some of the more complex machine learning training.

So that'll be interesting to see.

And I think you also mentioned web scraping, which can typically be pretty CPU intensive.

So that could be something. And then something else, I don't know if you mentioned it, but one that could be applicable is some of the more complex analytics that our customers might also be running.

Yeah. Have you heard of any interesting requests or interesting customers that have come up?

I was on a call the other day with some people who obviously didn't know yet that we were releasing this, but they were kind of asking us if we were thinking about extending the timeframe for our CPU limits.

And I was like, well, stay tuned. We'll be talking more about that.

But what they were thinking of is more on the machine learning side and genomics, which is of course a big space for machine learning, doing some training around that and implementing Workers on that side.

So we'll definitely have to circle back with them after this and just make sure they're up to date with all of the announcements that we've been doing and see if we can kind of work together on that.

For sure. I'd love to follow up with these people and see if they would be a good candidate for our beta.

Yeah. Yeah. Yeah. We'll have to make sure that they're in the loop.

So let's talk a little bit more. You mentioned one of the big benefits was the pricing model.

You also mentioned that there are some pricing benefits that we get from not having cold starts and how you're not charged for that CPU time.

Can you talk a little bit about how this pricing is going to be different?

Are we changing the regular option that we've had up until today?

How much change can our customers who have been using the existing pricing model expect to see?

Yeah. This has actually been a really cool learning exercise for me as someone who came from a consumer background and is now just dipping my toes into the cloud world.

Over the last couple of weeks, we've run a lot of tests trying to compare what a comparable light workload would cost on the different providers.

So we ran it on Workers, we ran it on AWS Lambda, we ran it on Azure Functions and on GCP.

And our team just set up like the simplest imaginable program, which is a hello world GraphQL server.

And it's just been kind of mind-blowing to see how all the different cloud providers throw on a lot of charges that you might not expect just for running very simple compute loads.

So for example, AWS has CloudWatch, which you're required to buy if you want to get requests from the Internet.

There are things like API Gateway, and these things just kind of sneak up on you; you might not know when you're first signing up what you're getting into.

But by the time you get your bill at the end of the month, there's like quite a lot more than meets the eye.

So we just ran like the simplest test possible on that hello world, which you can actually pull up on GitHub.

I think it's like linked in the blog if you want to see exactly what we did.
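For readers who want a feel for what that test program looks like, here is a rough sketch of a hello-world GraphQL Worker, assuming the public graphql npm package; the actual benchmark linked from the blog may differ in its details.

    // A rough sketch of a hello-world GraphQL server on Workers, along the
    // lines of the benchmark described above (the real benchmark linked from
    // the blog may be structured differently). Uses the public graphql package.
    import { buildSchema, graphql } from "graphql";

    // Minimal schema with a single query field.
    const schema = buildSchema(`type Query { hello: String }`);
    const rootValue = { hello: () => "Hello world!" };

    export default {
      async fetch(request: Request): Promise<Response> {
        // Expect a POST body like {"query": "{ hello }"}.
        const { query } = (await request.json()) as { query: string };
        const result = await graphql({ schema, source: query, rootValue });
        return new Response(JSON.stringify(result), {
          headers: { "content-type": "application/json" },
        });
      },
    };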

And we found that Workers was about 75% cheaper than the biggest competitor, which is AWS Lambda.

And part of that is just because of some of the benefits I mentioned, like not charging for cold starts, but also because we just don't throw on any hidden fees.

It's like what you see is really what you get.

So you should be paying for only what you use.

Right, right. Yeah, that's pretty exciting. So we don't have the hidden fees like the API Gateway or DNS request fees, all of those that can kind of be tacked on and included in the final price, which can be costly for our customers.

Right.

And I mean, you can kind of understand where our customers are coming from.

It's kind of a nasty surprise to get your AWS bill at the end of the month and see line items where you might not know what they are or what they're used for, but there they are.

Right. And that is something, you know, when we came up with our topics for Serverless Week, we talked about how there are some myths about serverless that we wanted to be able to debunk.

And one of those myths is that especially at large volume, serverless can be quite an expensive option.

And that a lot of the benefits that could come from serverless, being able to deploy quickly and easily and all of that, could come at an added and, honestly, sometimes unaffordable cost.

Yeah, definitely. I definitely hear you there.

I think that serverless is widely known to be cheaper than on-premise solutions.

And if you were using EC2, you might think that serverless is always cheaper.

But then when you go even deeper and compare serverless providers to each other, there are a lot of small, nuanced, hidden differences that you might not realize.

And even for us, I mean, there are always new surprises when we look at the individual bills that you get.

Right. How GCP bills versus how Azure does it.

It can honestly be pretty confusing to wade through. Yeah, yeah, it can.

So that will be interesting. And I know with our original pricing model, we were trying to keep it very simple for simplicity's sake.

And this time now we're offering with Workers Unbound an option that really does put us on par.

So you can really compare us to some of these others and see an apples-to-apples comparison, which I think will be really exciting as well.

Yeah, I think simplicity is really important, not just for our customers, but for our own sanity as, you know, the product managers and engineers and marketers who work at Cloudflare, to be able to easily understand our own products.

And I think our pricing model, which we laid out in the blog, is just as simple as that.

It's just that two-by-four table and no other surprises, no hidden fees, and no extras.

Yeah, yeah. And I'm just reading from the press release.

We were talking about our solutions being cost-effective and 24% less expensive than Microsoft Azure Functions and 52% less expensive than Google Cloud Functions.

So that will be interesting to have our customers kind of start exploring on those fronts.

Totally. Yeah.

So let's talk a little bit about the timeline. Do we have a timeline? How can our customers get started with the beta?

What can we expect there? Yeah, the timeline is a good question.

I think we are excited to bring this to market.

But there are a lot of technical challenges in extending CPU time from 50 milliseconds all the way up to 30 seconds and, eventually, 15 minutes.

So we're going to be doing it very slowly, as an incremental rollout, trying to make sure that we have all the system health in place that we need to ensure a really smooth and seamless experience.

So for now, we're just starting with a private beta, which people are already starting to sign up for.

And you can find the link on the blog. We'll be doing a lot of internal testing as we migrate more and more of, you know, other Cloudflare products that are internal to the company onto our edge services.

So we'll be doing testing on two fronts there.

And then there's going to be some changes to the user experience that we're going to need to build out.

The analytics are going to be, of course, different with these new pricing models.

We're going to update our notifications and alert system to reflect those changes.

The profiling is going to look a little bit different to support these two different types of workers.

And we're going to be building all that out during the summer and through the fall, and we hope to launch GA.

I mean, I don't want to commit to a date, but I think later this year is going to be likely.

That's great. So you're talking about some of the updates, and you mentioned some changes to profiling and to the analytics.

Can you talk a little bit about what we might have in mind or how that might improve the user experience for our customers?

Yeah, we've actually done a ton of user research over the last couple of weeks, trying to understand, first of all, how people diagnose problems if they have issues with their Workers, and what information they want to see out of their analytics.

I think overwhelmingly those fall into two buckets. The first type is that, you know, if people have a technical issue, they might've introduced a bug into their code that they need to diagnose.

So there are some metrics that are really important for that, like requests or CPU consumption.

And then there's another class of questions that people have that's usually related to their bill.

So now, as we've changed the billing model to include pieces like requests, CPU duration, and data transfer, we're going to have to be able to reflect that.

You know, if people have questions about their bill, we want to show the analytics that help them understand what's happening and how they're getting that total at the end of the month, and be really like transparent about that.

So we're going to be redoing our notifications and our alerts to match those numbers that we want to show.
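As a back-of-the-envelope illustration of the kind of math those billing analytics would need to make transparent, a bill under a requests-plus-duration-plus-data-transfer model could be estimated roughly like this; the unit prices below are made-up placeholders, not Cloudflare's published rates.

    // A hypothetical estimate of a duration-based bill. The unit prices are
    // placeholders for illustration only, not Cloudflare's published rates.
    interface UnboundUsage {
      requests: number;   // total requests in the billing period
      gbSeconds: number;  // duration: memory-time consumed, in GB-seconds
      egressGb: number;   // data transferred out, in GB
    }

    const PLACEHOLDER_PRICE_PER_MILLION_REQUESTS = 0.15;
    const PLACEHOLDER_PRICE_PER_MILLION_GB_SECONDS = 12.5;
    const PLACEHOLDER_PRICE_PER_GB_EGRESS = 0.05;

    function estimateMonthlyBill(u: UnboundUsage): number {
      const requestCost = (u.requests / 1e6) * PLACEHOLDER_PRICE_PER_MILLION_REQUESTS;
      const durationCost = (u.gbSeconds / 1e6) * PLACEHOLDER_PRICE_PER_MILLION_GB_SECONDS;
      const egressCost = u.egressGb * PLACEHOLDER_PRICE_PER_GB_EGRESS;
      return requestCost + durationCost + egressCost;
    }

    // Example usage (all figures hypothetical):
    // estimateMonthlyBill({ requests: 5_000_000, gbSeconds: 200_000, egressGb: 20 });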

Nice. So as you said, we have the new Workers Unbound signup form ready and available, and it's on our blog.

And also, if you Google Workers Unbound, it's one of the top results in Google, last I checked as of this morning.

So let's talk. One thing that you and I were talking about is that we're getting a big influx in our inboxes of people who've already been signing up, which is really exciting, to see that interest in the market.

Just within minutes of our blog and press release going out, we saw a huge, huge influx of excited users.

So what kind of timeline might somebody expect?

We're not able to take every single person who's tried to sign up for the beta, but when can people start hearing back about some of this?

Yeah, I would love to sign up as many people as we can.

I mean, I think that it's a cool feature for our users, but honestly, we learn a lot from talking to customers and getting them onto our beta products, just for the sheer user feedback.

And if you have any bugs, we'd love to hear about them. So we'll onboard as many people as we can.

I think we have a lot of time set aside this week to like read through everyone's requests and see what people have written in their use cases.

And we'll be reaching out in the next couple of weeks to a select number of use cases.

We do want to make it really clear that it's a beta feature.

It was built by people. There are probably going to be bugs. So people shouldn't be putting their business-critical logic on there.

But if you have some interesting, you know, experiments or any representative workloads that you kind of just want to play with, those would be good candidates.

But we don't want people migrating any essential services onto our beta.

Not yet. Not yet, anyways. Yeah, yeah. Nice. So as we're going through all of these signup forms over the next few weeks, are there any particular things we're going to be looking at that will be particularly compelling and make us want to select those users?

Yeah, I think that we're definitely looking for all kinds of use cases.

So we want to get like a smattering of people from all different parts of, you know, the industry.

So people who might be doing image work or hosting or even video would be interesting candidates.

I would love to talk to people who might have complex processing questions.

You kind of mentioned there's a genomics use case. So I would love to see people from different industries, different sizes.

I think that this could be a useful tool for researchers at a university or research level, and it could be useful all the way from small companies to, you know, medium-sized or larger companies.

So I think we'll do our best to try to represent everyone, you know, different regions, industries and sizes.

Cool. Yeah. So by sizes, you mean like size of the company?

Or do you mean size of the like the CPU time limit? Oh, I kind of meant sizes of organizations, but sizes of CPU limit is a good one too.

We'll be looking for just a range basically.

Yeah, yeah, yeah, that makes a lot of sense.

Very cool. And then you mentioned it might take several more months until we can get to GA for this.

Yeah, I think the turnaround time for this is going to be a work in progress.

I mean, there are a lot of changes yet to make before launching the new feature.

Not just the billing changes, but also the technical infrastructure behind it, as well as the pieces that you would see reflected in the UI, things like analytics and alerts.

So we are definitely sprinting as fast as we can, but it is going to be a phased rollout.

Gotcha, gotcha.

So one thing I just want to make clear, and we did mention it in the beginning, but I want to make it clear because I was getting a few questions on this.

So Workers Unbound is going to be another option that our customers can use for unlocking those CPU limits.

But we're changing the name of what we've generally been offering our customers; now it'll be called Workers Bundled.

So if people start seeing Workers Bundled out there and they're getting confused, it's basically what we've had in the past, but now it's called Workers Bundled so that we have a different name to compare it with Workers Unbound.

Yeah, definitely. And I think it was really important for us to preserve that experience.

I mean, we have lots of happy customers right now who are using the current Workers product for things like, say, they might be a Cloudflare load balancing customer, and they might need to customize some of those rules.

For them, the people who aren't really doing application development powered by Workers but are rather using it as a supplemental, you could even say middleware, tool alongside other products, I don't think it makes sense; they might not have a use case or a severe need for long-running Workers.

We want to keep those people moving along with the status quo if that's working for them.

Great, and we'll continue to offer those functionalities as part of Workers Bundled.

Nice. Yeah, I'm also getting a few questions coming in live from the chat that maybe we can touch on.

So we have one question coming in asking what the pros and cons are of using Cloudflare Workers over Cloud Run.

I'm not that familiar with that.

Do you have any thoughts on your side, Jen? Nope, I don't have too much of a hunch on our side.

This is a good one for us to maybe put into our FAQs, or maybe if the person who submitted the question can give us a little more background on what they mean, then we can follow up, maybe on our Twitter or something, with a response.

Yeah. Do you have any more questions?

Let's see. I have a long one coming in. Let me just give it a quick read through.

Let's see.

Well, maybe we can hold off on a few of these and get back to them, because there are a couple more things I want to make sure we get to, and we only have a few more minutes. Then, if we have a couple of extra minutes, we can look back at some of these other questions.

Sounds good. So one thing I definitely want to make sure that we get to is what are some of the metrics of success that we are looking for here in Workers Unbound?

What will the goal and the outcome be to get this to GA, and then once we have customers in GA, what are we excited to be looking for?

Yeah, I think that for this beta, the number one priority for success is definitely system health.

It's going to put a lot of strain on our runtimes.

We'll be running Workers that are an order of magnitude bigger and longer-running than ones we've had in the past.

So I think testing slowly and incrementally and making sure that our system can handle the strain of the increased workload is the number one priority for us and we definitely think that we have the capacity to do so.

It's just a matter of getting there and testing incrementally.

I think another big piece of the beta is just getting to know our customers better, hearing their feedback.

You know, if they have any bugs, if they're having issues, we want to know about it and there's a direct line to me in my inbox if any of our customers have issues with Unbound.

So those are the two things I'm looking for.

One, that our system can scale with a lot of people running long-running Workers, and two, that we hear what they're saying and hear the requests that they have.

And then we can start implementing those on our side for once it does go into GA.

Exactly. Nice. So we do have another question coming in. The person is wondering if with Workers Unbound, are we also bumping up the memory limit and or the script limit?

Those will be unchanged for now, but we hear your requests and we know that they're top of mind.

All right. And he was also asking about the concurrent request limit.

Are we going to be making any changes there? No changes on that front, but also noted, duly noted.

Okay, great. Yeah, are we going to be keeping any FAQs or anything that interested customers or users can go to and look at some of these questions that we might be getting repeatedly?

I think that's a really good idea.

I think we're starting to see a lot of themes just even from the TechCrunch article and what we've seen on Hacker News that a lot of people have some of the same requests as others.

So maybe we can find a centralized place to address all of those and put those out there.

Yeah. Yeah. And to those listening, we'll try to be using our social media, our Twitter, and some of our other channels as well to be talking about these new updates as they come, and people can ask us questions there; that's always a good place for us to talk about it.

Is there anything else? I know we haven't planned upcoming blogs or things like that, but I'm thinking now, as this starts developing, it would be great, Nancy, if we came up with ways to keep people who are interested aware of where our timeline is and what our status is as we move forward.

Definitely. I think that we will definitely be in touch with people about the upcoming launch.

And I also think it'd be really interesting, as this product evolves from bumping up the limit from 50 milliseconds to 30 seconds, and up to minutes and eventually 15 minutes, for some of our engineers to write technical updates on how we've arrived at these limits and how we've overcome them.

So technical blogs will be coming. Right. Yeah. Well, this is really exciting.

I can't wait to see what our customers are able to start building here, and how we're able to keep integrating some of these requests and what we find out during the beta.

So we only have two more minutes. I just wanted to maybe talk for a second about what people have to look forward to coming up this week.

So definitely everyone tuning in today, we want you to stay tuned on our blog and stay tuned on our Twitter accounts and whatnot.

We have some pretty exciting updates.

Obviously we announced Workers Unbound today. We have some things coming up on languages and what we're going to be enabling and supporting there for Workers.

We'll have some discussions around security and some of the exciting things that we're doing there on the security front for Workers.

We're going to talk about the developer experience and new tools that we're releasing to really make the developer experience seamless and smooth, so we can help developers at every step of the process, from developing their product to debugging and diagnosing problems, all of that.

So those are a few things we're going to be talking about.

We'll also be discussing cold starts this week. That's a big thing with our customers, and really keeping those, as you mentioned, very limited.

So we're going to have some really exciting announcements coming out this week.

So everybody should be sure to stay tuned in and we look forward to sharing more information about Workers Unbound in particular and then some new and exciting announcements this week.

So I can't wait to see what some of our customers can come up with from all of this.

I know it's actually been really fun. We read every single comment on Hacker News.

We see all the comments on Reddit. We read everything. So if you want to tweet at us, we'll be there.

Yeah, great. All right. Well, thanks, Nancy.

This has been really useful. I hope that our community out there has enjoyed it and hopefully has learned something.

Appreciate your time. Great.

Thanks, Jen. Bye.