Cloudflare TV

Latest from Product and Engineering

Presented by Jen Taylor, Usman Muzaffar
Originally aired on 

Join Cloudflare's Head of Product, Jen Taylor, and Head of Engineering, Usman Muzaffar, for a quick recap of everything that shipped in the last week. Covers both new features and enhancements to Cloudflare products and the technology under the hood.


Transcript (Beta)

Hi, I'm Jen Taylor, Chief Product Officer at Cloudflare and we are here with you again for latest from Product and Engineering.

Usman. Hi everyone, nice to join you again.

Jen, my name is Usman Muzaffar, Cloudflare Head of Engineering and we have some special guests we brought along today.

Ali and Rita, why don't you take a second to introduce yourselves?

Sounds good. So I'm Ali, I am the Director of Product for Workers and I am here today to talk all about Serverless Week and all the great stuff we did this week.

Rita, to you. Hi, I'm Rita, I also work on Product at Workers and I'm also very excited about all the things that we announced this week on Serverless Week.

So this is why we wanted these guys here, because if you've been following our blog, if you've been watching anything we're doing with Cloudflare this week, if you've been watching Cloudflare TV, it's all serverless all the time.

So how can we talk about the latest from Product and Engineering without starting by talking about Serverless Week?

So can one of you just give a brief overview like what is Serverless Week and why did we do it?

Yeah, so we viewed Serverless Week as our opportunity to tell our serverless vision, right, and put our stake in the ground of how we at Cloudflare view serverless and how we're combating some of the common problems with the classic serverless implementation.

And this week really gave us a chance to do that. It also gave us a chance to talk about all the great things that the team has been working on.

We view it as a fun excuse to do that, you know, building these moments where we can celebrate all of the hard work we've been building up to.

I have a question, what is serverless? Yeah, that was the question I was going to ask.

Well, I wanted to ask it. If anything, Cloudflare is a gigantic set of computers.

Those computers are servers, I'm pretty sure. So like what does it mean to be serverless?

Yeah, so in my mind, serverless is less about the implementation, whether it's on servers or not.

Of course, the software is running on servers, but more about what overhead the developer has to have in their mind when building software.

Do they have to worry about servers or do they not?

Do they have to worry about the maintenance and operational tasks associated with managing servers or do they not?

So serverless is more about what burden you place on the developer working on the platform.

And the serverless movement is about removing all of that burden of managing servers from the development process.

So like you just get to focus on the business value of your application and not the management tasks that you were doing in the past that maybe took up more than 50% of your time, right?

Like you were spending so much time on boilerplate software, and that's really what's generated a lot of this movement.

Rita, anything you want to add to that?

Oh, sorry. Yeah, go ahead. Oh, I mean, you said it; that was mine. We have so many servers and we're really good at running servers, but I don't think that every developer out there should have to be.

My running joke has been that if this is the correct next computing paradigm, in 30 years people should be really confused by that word, because they shouldn't know what the word server itself means.

They'll be like, what? What? I like that. I like that. What are we? What are we less?

Yeah. But hold on a second, let's get super tactical: how does Workers fit with serverless? What is Workers? Yeah.

So workers is a way to deploy functions across the globe really, really quickly and close to your users.

So you get to build your business value, build functions and application features, on top of this platform that takes a lot of the burden out of scaling, performance optimization, and then ultimately managing the server farm that you might need to scale.

So we announced a ton of stuff.

You guys have been doing some phenomenal work on the workers team.

And like, just first of all, congratulations. Like by the time you get to Friday of one of these weeks, like the fact that both of you are still standing and coherent, like is phenomenal.

So congratulations, both on the launches and still being here to tell the story, but we, you guys launched and announced a bunch of stuff.

Why don't each of you tell me: what was your favorite thing we launched this week, and why?

Rita, I'll have you go first on this one. All right. Mine is easy.

So I work specifically on developer productivity, which means I get to have the pleasure of enabling our developers as much as possible.

And so one of the things actually that we talked about to kick off this week was that, you know, speed is really important to how your application runs and security is really important and scalability is really important, but you really can't get to any of those important things unless it's easy to get there.

And so my job is to make it as easy as possible at every step along the way.

And the thing that I'm by far the most excited about is the release of Wrangler Dev on the edge, which is an edge-based development environment.

So typically as a developer, your process will be, okay, let me write some code, see if it runs correctly, then you realize you're missing a parenthesis somewhere or something, you fix that, then you fix it again and again and again, and you need this really tight iterative loop.

And so this week we enabled our developers to have that iterative loop through allowing them to develop directly against their edge, which runs really, really close to them.

So it also means that you don't have to install all the stuff that it takes for us to run Cloudflare.

They just get to run against Cloudflare. So it's almost like you took the value proposition of serverless and Workers and brought it not just to running the application, but actually to how the application itself is created.

So all that productivity and performance and efficiency you get in a serverless app, you now get that same performance and scalability in the development itself.

You're no longer pushing, fixing, pushing, fixing.
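To make that loop concrete, here's a minimal sketch of the kind of handler you might iterate on with `wrangler dev` (the route and message are invented for illustration; in a real Worker this function would be wired up as the fetch handler):

```javascript
// A minimal Worker-style request handler. With `wrangler dev`, each save
// redeploys code like this to a preview running on Cloudflare's edge,
// so the feedback loop stays tight without any local runtime setup.
function handleRequest(request) {
  const url = new URL(request.url);
  // Per-path logic: the kind of thing you tweak, save, and retest
  // over and over in that tight iterative loop.
  if (url.pathname === "/hello") {
    return new Response("Hello from the edge preview!", { status: 200 });
  }
  return new Response("Not found", { status: 404 });
}
```

Because the preview runs on the edge rather than a local simulator, the behavior you see while iterating is the behavior you ship.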

It's much more productive. And one of the things that's really been hard about local development in general is mocking production-like behavior.

So for example, if I'm writing a worker and using KV, if I'm doing local development, I have to mock the KV API.

I'm not getting true responses.

I might mock a response from KV so that my test can run locally. But in this edge development, you actually have access to those things.

You have access to the cache API.

You have access to KV, things that you have in the production system in order to really get production -like behavior, making the tests actually more effective.

So in a lot of ways, you keep the speed, you keep that tight loop, but the closeness to production, the production-like behavior, got better with moving to the edge.
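As a sketch of the mocking problem Ali describes: with purely local development you end up writing stubs like the in-memory `MockKV` below (a made-up name for illustration, mimicking only the parts of the KV interface a test touches), whereas edge-based development binds you to the real KV and cache APIs:

```javascript
// An in-memory stand-in for a Workers KV namespace, the kind of stub
// local-only development forces you to write. It only mimics the parts
// of the interface your test happens to use.
class MockKV {
  constructor() { this.store = new Map(); }
  async get(key) {
    // Real KV returns null for missing keys, so mirror that here.
    return this.store.has(key) ? this.store.get(key) : null;
  }
  async put(key, value) { this.store.set(key, value); }
}

// A function under test that reads configuration from KV.
async function greetingFor(kv, user) {
  const greeting = await kv.get("greeting");
  return `${greeting ?? "Hello"}, ${user}!`;
}
```

With edge-based dev, `kv` would be a binding to a real namespace instead, so eventual consistency, real latencies, and expiration behave as they would in production, which the stub can never fully reproduce.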

Well, and also as you release new capabilities on the platform, you don't have to wait for the APIs to become available or the development environment to catch up because it's just, it's all right there.

It's so important to have a dev environment that matches what you're actually going to deploy against.

And there's analogies of this all over the place.

The whole world, we all use mobile phones. The software on our mobile phones was not developed on a phone.

It was developed on a computer, but you can bet the engineer who wrote that software had a device plugged into their laptop or their desktop and tried it right there.

And yet with cloud, you don't have that unless you build something like what you all just built and delivered to our customers.

Because otherwise, the first time you get to try it is actually shipping it.

That's the last thing you want, which is, I can't see how this is going to perform until it's actually in the hands of my customers.

So giving them basically a slice of the real Cloudflare cloud, but in a safe and controlled place where they can make as many mistakes as they want, and it's not going to mess up anything.

That's not going to mess up us, is a big deal.

And it's going to unlock a whole new level of productivity.

Well done. Yeah. I mean, the oldest excuse in the book, right, is: works on my machine.

Works on mine. Yeah. I don't know what you guys are talking about.

Works on mine. Yeah. That excuse is out the window now. There is no works on mine.

Did it work in the real environment? If it did, that's what matters. I think that is also a cool aspect of it: it also unifies different developers working on the same project, so they don't have different dependencies installed, or suddenly different versions.

Slightly different in my environment.

That's just enough for a bug to sneak in, right? Well, and they can just see it all right there, right?

In terms of what the other person is writing. That kind of, again, the performance and the efficiency of the collaboration.

Yeah. Oh, cool.

Okay. The performance is... Sorry, one more tidbit. It's your favorite thing.

I don't want just one favorite, okay? Oh, the other really cool...

Actually, the greatest compliment we received about this is people not realizing that it's actually not running locally.

And so to your point of the...

I mean, the reason that generally we have local development environments is because that's the only way to get that fast feedback loop.

So the fact that people don't even realize that it actually connects to the edge until they go offline, and then they're like, wait a second, is really, really a compliment.

That's super cool. Yeah. There was a ticket made that was like, oh, this doesn't work when I'm offline.

That's weird because it's running locally on my machine.

And we're like, no, it's not. It's running on the edge. So it makes sense that it's an online-only thing.

But yeah. And Jen, to your question about what was my favorite, on Monday, we launched the private beta for Workers Unbound.

And it really extends the execution time and the types of use cases that are possible on the edge now.

I think before people had to make a trade-off between running fast and good user experience functions on the edge versus getting centralized resources in a centralized cloud, like unlimited access to resources on a centralized cloud.

And I think with this, you don't actually have to make that compromise anymore.

You get unprecedented access to resources on the edge. You're seeing the same time limits on Workers that you're seeing in Lambda.

And I think before that big launch, people might associate the edge with fewer resources and trivial tasks.

And I think this really helps us position workers for what it is, which is a general purpose offering.

I think one of the first conversations Rita and I had when I joined was about how we were frustrated that a lot of the use cases Lambda touts as good use cases for serverless are really good for Workers too.

And this gives us an opportunity to tell that to the world. Are there a couple of use cases that you're super excited to see people embrace with Unbound?

Well, one of the ones that we're really playing with now actively is machine learning and inferencing on the edge.

I'm really excited about where that goes. Wait, where did you say?

Frensing? Inferencing. Inferencing. Okay. Okay. It's like frensing.

I was like, I'll get you some of that. I'll look it up later. I'm just going to admit that I don't know what that is.

Frensing, sure. We do that too. And really, so is Unbound then kind of a critical leap forward in using workers for that because of the type of performance and kind of capacity you need?

I mean, is it basically just putting, you need more muscle basically to be able to do that than we've been able to have previously?

Right. So you need more machine power. You need more computation resources.

And this gives us the ability to expand to computationally heavy tasks where we weren't playing in before.

And I'm really excited about that.

I think another way to look at it is it's one less decision that you had to make because previously you would have to think, okay, is this suitable for the edge or is it something that I need to run centrally?

Whereas now kind of across the board, whatever you're trying to build, you start it all the exact same way.

This is a big deal because at one point, on a competitor's serverless product, a much older technology, I had a startup before I joined Cloudflare and we were on it, and it used to have a 30 second limit, and it was really, really constraining.

It affected how we designed stuff. And all I wanted was just a little bit more time because that would give me so much more flexibility and it wound up forcing us into all kinds of weird contortions because we didn't have that extra time.

And going all the way to, whatever ours is, 15 minutes or something like that, is just extraordinary.

That gives you so much freedom. We should really call it Workers Unleashed.

All kinds of things are going to come out of this. Yeah. I mean, I think one of the interesting things about that too is that part of it is definitely enabling people to build things that they previously couldn't, but another part of it is actually letting people conceive of things that maybe were even possible previously under the 50 millisecond limit.

They just didn't think they were, because it's actually kind of hard to wrap your mind around CPU time versus wall clock time, right?

And so when you're talking about 30 seconds, Usman, what you mean is: if I'm calling another server, and at times the other server is slow, it's not my fault. It takes 45 seconds; I'm just waiting on it. We don't really care about that time.

Like your worker itself might still take all of 10 milliseconds to run.

That's right. So yeah, even 50 milliseconds was more generous than it sounded, but the Unbound limit is a king's ransom.

It's really an embarrassment of riches.
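One rough way to see the CPU-time versus wall-clock distinction is this sketch in plain Node (not the Workers runtime's actual accounting; the 200 ms timer stands in for a slow upstream):

```javascript
// Spend ~200ms of wall-clock time while consuming almost no CPU,
// analogous to a Worker awaiting a response from a slow upstream server.
async function waitOnSlowUpstream() {
  const wallStart = Date.now();
  const cpuStart = process.cpuUsage();
  // The "slow server": we just wait on it, burning no CPU of our own.
  await new Promise(resolve => setTimeout(resolve, 200));
  const wallMs = Date.now() - wallStart;
  const cpu = process.cpuUsage(cpuStart);
  const cpuMs = (cpu.user + cpu.system) / 1000; // microseconds -> ms
  return { wallMs, cpuMs };
}
```

`wallMs` comes back around 200 while `cpuMs` is typically a tiny fraction of that, which is why a CPU-time limit of 50 milliseconds was always more generous than it sounded.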

Yeah. Yeah. And to Rita's point, we have users that are consistently at like two milliseconds CPU time, five milliseconds, and even then they want a platform that they feel like is going to grow with them too.

Even if they're well below those limits, this really helps us make sure that they feel comfortable betting their entire application on the platform as well.

So you mentioned it's a private beta.

How do you sign up for it? Yeah, so there is a form, a call to action, to sign up for the private beta.

Sign up there.

We've had over 700 signups so far. We're really excited. We're going to slowly roll this out though as we onboard people onto the beta.

But definitely sign up, tell us your use case, and then we'll reach out if it's a good fit.

That's awesome. Now, so I know one of the other things that you guys talked about was improved language support.

One of the things that Usman and I riffed on last week was the fact that one of the things we've been doing in the dashboard is actually localizing the Cloudflare dashboard as a way to help people who may not necessarily speak English be able to make better use of Cloudflare.

I'm kind of curious about the language changes you guys are making and sort of what was the genesis behind some of that?

Yeah. So language syntax matters to developers, right?

So writing in your preferred language is something that people feel passionately about.

With the Workers language features this week, we talked about not just WASM-supported languages, but also compiling to JavaScript.

So Python compiled to JavaScript, Kotlin compiled to JavaScript, so that people can write in the syntax that they want, but ultimately get to run on the platform by transpiling to JavaScript.

Yeah, and you know, I think the multiple languages is, it's a really big deal because it's not just about your preference.

It's, you might have dependencies on libraries or technology that is in another language and you want to be able to adopt that.

You might have to integrate with other things.

You might have other components of your application that are like, you might have developers at your shop who are experts in that technology and you want to leverage it.

And so it's way more than a preference. It's the idea that you can actually use the right technology for the problem that you're trying to solve.

And that is a very powerful thing. And the fact that it was WASM before was already pretty rich.

So just to remind our audience, WASM, WebAssembly, sits on top of JavaScript.

So conceptually it's backwards. Normally you think of assembly as being the lowest level of a computer, and the higher level languages, like JavaScript, which a lot of us have been using for decades to program web browsers, sit way above that.

And one of the rather fascinating developments in our industry in the last five, 10 years was, wait a minute, we could take a restricted subset of JavaScript, optimize it so much that we can basically treat it like an assembly language, like a destination where you can run.

And that opened up to a whole new world of bewildering demos on the Internet where people are running all of Windows 95 and all of Doom.

And all of a sudden, an entire era of nineties computing showed up in web browser tabs.

And a lot of people were like, wait, how is this done?

And it's basically because computers are built on abstractions.

And if you can make that layer show up, the rest of the software above it has no difference.

And so I think it's great that anything that can target JavaScript, anything that can target WebAssembly, it's all fair game.

And so that means all of these programming languages, the new ones that everyone's all excited about, the Rusts and the Darts, and then the old classic ones, the Pythons and even C, are all fair game.

It just, it really legitimizes how powerful workers is and how it's really going to be a force to reckon with as a place where engineers can write good code.

My question was around the warm start, cold start, no start problem.

So one of you, one of you take a shot at what, tell us what that's all about.

Well, I'll start with Cloudflare Workers. One of its biggest benefits, I would say from the very beginning, is performance.

And we've never really had problems with cold starts to begin with.

And we've always, we've never really had problems with cold starts to begin with.

That's always been fun. Let's make sure our audience knows what we're talking about.

What's a cold start? Right, right.

So, you know, when your computer wakes up and it takes a hot second before you can actually open up Google Chrome, well, virtual machines have to do that as well.

So every time you go visit an application, even if it's maybe a function and you don't see the full container that gets spun up in the background, that is what actually happens.

It has to pull in an entire language runtime and start it up and then pull in your code, and only then it can start running.

And so the end users, the experience for them is they're sitting there for sometimes a good 30 seconds, twiddling their thumbs, basically, until the function gets warm.

And then on subsequent requests it can be fast again, unless you're introducing new concurrency; then the process starts all over again.

Yep. Right. And this happens as you need to scale, and when you have low-throughput functions that go cold after some period of time, but also definitely when you're scaling.

So as you like peak in workload and more processes need to spin up in order to handle that workload, that's when these cold starts in traditional systems happen.

And it's not just the developer twiddling their thumbs, it's your users, right?

That's your user experience. Someone is sitting there waiting for a response on the other side of that request and happened to be unlucky.

Of course, I can tell you when I see an ad online and it's for something I want to buy, if it takes more than 30 seconds, I'm going to go, that was probably a bad idea.

Yeah. Yeah, exactly.

Just enough time for your brain to kick in and go, yeah, you could probably skip this. This probably isn't something you should buy. This business of cold start, hot start.

This is associated with one of my worst nightmares in my career.

I was trying to demo our product to an investor, I think, or it was an important customer.

And right there, because we hadn't hit it first. We had it in our sales demo notes: hit the product first, otherwise that VM would go cold. But it hadn't had enough traffic to stay warm, and sure enough, right there in front of everybody, that six-second delay, and it was just endless.

It was endless. It felt like an eternity.

And it was just like, it was supposed to be this great responsive application and it's just so sluggish.

And of course, it's fine after that, but it's unacceptable.

We have to come up with a world where that doesn't happen. So what did we ship last week that's so exciting?

Well, so let's, for a second, talk about why Cloudflare is positioned in the first place to have pretty solvable cold starts, right?

And the reason for that is we don't spin up an entire container every single time.

We spin up something much, much more lightweight, which is called an isolate.

And so to put it in the context of what I was saying earlier about pulling in like the entire language runtime, we already have it.

And so all we have to do is pull in your code, and then the second the request hits, we're ready to start running it. But we still do have to pull in your code first, right?

And so what we did this week was we realized, actually, we know when your code is going to get called just before that happens.

And we actually know that for the first time at the time of the TLS handshake, since that's the very first thing that happens when a request comes in.

So when we get the SNI, which tells us the host name, we can go, yep, a worker is going to run there.

Let's pull up that script so that by the time the request actually arrives there, it can just run with a zero millisecond cold start.
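A toy simulation of the idea, not Cloudflare's actual implementation: key a script cache by hostname, kick off the load the moment the TLS handshake reveals the SNI, and the script is warm by the time the HTTP request arrives:

```javascript
// Toy model of SNI-triggered prewarming. onSniSeen starts loading the
// script as soon as the TLS ClientHello reveals the hostname, so
// onRequest usually finds the script already warm.
const warm = new Map(); // hostname -> Promise resolving to the script

function loadScript(hostname) {
  // Stand-in for fetching and compiling the Worker script.
  return Promise.resolve(`script-for-${hostname}`);
}

function onSniSeen(hostname) {
  // Kick off the load during the handshake; deliberately not awaited.
  if (!warm.has(hostname)) warm.set(hostname, loadScript(hostname));
}

async function onRequest(hostname) {
  // By now the handshake has finished, so the load is in flight or
  // done; falling back to loading here covers a missed SNI.
  onSniSeen(hostname);
  return warm.get(hostname);
}
```

The handshake itself takes a few milliseconds, which is exactly the head start the load needs, hence the effectively zero millisecond cold start.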

That's amazing. And you guys, the team that is, wrote such a fantastic blog post on this.

And that picture just crystallizes it.

There's one part of the system that's handling the security. There's the other part that handles the worker.

And the security one can basically whisper to the runtime, it's coming, set up right now.

Because two milliseconds later, that request is coming through because I'm dealing with the security side, but this is going to come through.

And so be ready. And there it is, zero millisecond start.

It's fantastic. And if we're starting up a whole container, and it took the six miserable seconds you were waiting during your demo, a TLS handshake's head start wouldn't cut it, right?

Like, what's two milliseconds against that? That's fantastic.

I think this is, again, kind of the cool thing about building serverless on Cloudflare: because we have this globally distributed network, and because we have this very integrated system, where TLS is so intimately connected with all the other parts of the network, we can handle these kinds of handshakes, right?

It's unique in the way that we built it. It's really cool. Yeah, it's definitely an idea that started with the workers team, but was a group effort, right?

The protocols team really got this over the line for us, and we're really, really excited with how that turned out on our side.

That's awesome. Jen, what else did we ship this week?

Oh, man. Serverless grabbed the mic, which is awesome, but the other teams at Cloudflare were shipping all kinds of stuff too.

Yeah. One of the things I'm super excited about is the work that we did for IP lists.

Lists. What's an IP list?

The way I think about an IP list: it's the list of all of the IP addresses that you either want to specifically allow or block from your application.

It's a critical resource that is sort of built up over time. The analogy I use is when I'm heading to the grocery store, I take a list with me, and I'm on my way to the grocery store, and then my phone rings.

My spouse says, I'm making lasagna.

Can you please get me these eight things to make lasagna? I'm like, okay, great.

Go. I'll get groceries. Same thing happens next week. I'm on my way to the store.

I get the call. I'm going to make the lasagna again, and again, we have to make that list.

What IP lists actually enable you to do is create a predefined, reusable list that you can apply again, and again, and again across different parts of your zones, across different zones, so that you have a lot of flexibility and reusability in this basic asset that you have.

The nice thing is I can use an IP list in a firewall rule that I've written.

I can modify the list outside of the rule, so I can do it dynamically without having to kind of crack open the rules and kind of start all over again.
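The separation Jen describes can be sketched like this: the rule references the list only by name, so the list can change without touching the rule. The list name `office_ips` and the helper functions are invented for the example; in Cloudflare firewall rules the reference looks like the expression `ip.src in $office_ips`:

```javascript
// Lists are managed on their own, keyed by name.
const lists = new Map([
  ["office_ips", new Set(["203.0.113.7", "203.0.113.8"])],
]);

// The rule only references the list by name, so it never has to be
// rewritten when the list's contents change.
function isAllowed(sourceIp, listName) {
  const list = lists.get(listName);
  return list !== undefined && list.has(sourceIp);
}

// Updating the list is a separate operation that leaves every rule
// referencing it untouched.
function addToList(listName, ip) {
  if (!lists.has(listName)) lists.set(listName, new Set());
  lists.get(listName).add(ip);
}
```

That indirection is the whole point: list management (import, export, API updates) evolves independently of the rules that consume the list.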

Yeah, and that's so important, right?

It's not like you couldn't do this before. In the same way, you can keep texting your spouse to say, one more thing, one more thing, please get this.

No, I actually already have that, so I don't need that.

You're constantly changing or modifying this asset where it is used, and really the insight is, wait, I want to manage the list separately from where it's being used.

There's really an information architecture angle to this whole problem, which is outside of where customers define rules, how can we make it easy to define lists?

As soon as you do that, you're like, yeah, I want a whole bunch of list management features.

I want to be able to export them.

I should be able to import them from a spreadsheet. I want to be able to specify them in flexible ways.

I want to give them a name, a description, and then reference them where we're making, so whatever the rule is that says, anything that is coming from the following addresses, please block.

Anything that comes from the following addresses, please allow.

Those rules can be very simple and clear because they're referencing a list as if it's a unit, rather than having to spell out every single term. And you can imagine, just speaking as the engineering person here, that required a fair amount of work, because we have to get that asset and then make sure it's there at the time the edge is running. And we don't do anything at less than Cloudflare scale.

For starters, we're allowing, I think it's 1,000 items per list, but it'll only get bigger from there, because we know that we want to give customers as much flexibility as possible, and they're not going to type those list entries in by hand.

They're going to write programs against the APIs that manage those lists, and 1,000 will seem like a small number in the future. And so that's a big part of what made this an interesting technical challenge and an interesting user experience challenge.

Well, and I also have to say I really appreciate the work the team did, not only to create the capability, but also to create the interface that makes it easy to manage that list, to add to it, to export it. When you think about it, it's a huge amount of information, and the fact that you can seamlessly import and export it is incredibly powerful.

And it's actually almost by coincidence, but it ties in with another feature we shipped last week, which is Spectrum port ranges. So just for a second here: Spectrum is our product that allows you to protect anything, not just a website. The term Spectrum comes from the idea that it covers the entire spectrum of applications that are on the Internet.

And one of the things that distinguishes those applications from normal web applications is what port they're on. A port is a completely virtual construct, something software engineers invented when they decided how two computers should talk to each other. It was sort of like, yeah, we could imagine a cable between them, but then, you know, it's all software, so we could imagine 65,536 cables between them. So they basically created a world where you can create a cable between any two computers as easily as you want, and they gave each cable a number. By convention, 80 is where web traffic goes, and 443 is where secure web traffic goes, and some of these ports have famous numbers.

But some applications use more than one port at the same time, imagine more than one cable at the same time, so for Cloudflare to know about that, it's very similar to that firewall problem: you've got to spell it out to us, you have to write them out in configuration. And if it's like, well, I'd like you to be able to use ports 2000 through 3000, what are we going to make our customers do? A thousand entries, entered manually one at a time? You can imagine how popular that was.

So the fix here was: let's make Spectrum allow you to specify port ranges. And the UI is so simple. Where you would normally type in 2000, you can now type in 2000-3000, and bang, Spectrum is aware of this. It'll provision the ports on the right side, it can watch the traffic across the range, and it recognizes that's all one application. This is super important for a big class of applications that don't just use one virtual cable, but use a whole bunch at the same time.

And with that, we are at time. Does anyone have anything else they'd like to say about how much great stuff we shipped last week?

I have a quick question: can you use IP lists with Spectrum?

That's a... I'll have to look that up. Rita put me on the spot. I mean, conceptually you can. IP lists live outside of the system, so it's really just a matter of being able to access them. But if it can't, we should do that, and you should write a PRD for that. I don't think I should, but I'll find the PM who I think should.

Well, I think, first of all, I just want to say thank you to Ali and Rita for joining us for one of these wild and rambunctious Latest from Product and Engineering conversations. It's one of my favorite chats of the week, and it's a good opportunity for us to reflect on everything we've delivered. As usual, we just got to a small slice of it, but I really appreciate everything you guys shared. And Usman, it's always a pleasure to chat with you, and we look forward to seeing you all next week.

Absolutely. Thanks, Jen, thanks, Rita, thanks, Ali, and thanks everyone for watching. We'll see you next week.