Cloudflare TV

Cloudflare’s Global Cloud Network & Developer Platform

Presented by Usman Muzaffar
Originally aired on 

Best of: Cloudflare Connect NYC - 2019

Usman Muzaffar is Head of Engineering for Cloudflare, where he leads the development team that is making the internet safer and faster for 20 million internet properties. Before Cloudflare, Usman was co-founder and CTO of Selligy (acquired by Veeva Systems).

In this keynote session, he walks us through the scale and capabilities of Cloudflare's global network — including developing modern applications at the edge.

English
Cloudflare Connect

Transcript (Beta)

…Cloudflare in 2011, when Michelle and Matthew were just a couple dozen people, and he was telling me, he's like, you've got to join this company, it's doing the coolest stuff, it's going to patch the Internet.

And I said, what the hell does that mean?

And what he meant was it was actually fixing all these interesting problems at the network layer, at that graph that Michelle showed.

And what's awesome about Cloudflare is that that mission is unchanged.

It is exactly the same. We are helping to build a better Internet. And the word there that I would underline is help.

Because we're not building a new Internet.

That's not the idea. We're not trying to reinvent the TCP stack. We're not trying to reinvent clients and servers.

We're trying to make sure that the Internet that was conceived in those RFCs in 1977 meets its full potential and can actually deliver what all of us are trying to get it to do today.

And that means attacking all of these interesting problems in security, performance, and reliability, and that's what Cloudflare is really all about.

To do that, if you go around and you're trying to say with a straight face that I want to help build a better Internet, you better have something that actually operates at Internet scale.

Like Michelle said, kind of a hard sentence to pull off in 2011 when you're three data centers and a couple dozen employees.

Easier to do now for me. Because I can stand in front of a slide that says we're in 194 cities, 20 million Internet properties.

The slide that's not up here, the engineering one that I keep my eye on is 15 million requests per second.

That's how many times something is hitting the network.

15 million per second is how many hits are going through the Cloudflare network.

Two million of those are DNS. 99% of the world's population is within 100 milliseconds.

And we're just going to keep pushing both of those numbers, right? It's going to become 99.9 within 50 milliseconds.

And that's how our infrastructure team actually measures their job.

How far away are Homo sapiens from Edge networks, right?

That's their metric. That tells you, okay, if you've got a team that's working on a problem that's defined that way, yeah, you're helping build a better Internet.

So not done, though. A lot of interesting challenges.

A few I'll just touch on here. South America: tons of people, not a lot of data centers.

That's clearly something we have to work on. Sub-Saharan Africa, even harder problem.

And then India, harder than both of those, because you have not only a lot of people, you have these giant megapolises, these giant cities where you can't just put one PoP.

It's not that simple. You've got to actually be able to think through how the traffic is going to flow in the city.

These are all challenges that we're working on.

So let's go back. Let's rewind back to 2010.

Let's imagine that you're Matthew Prince and a couple of other engineers, and we're going to say, all right, we're going to patch the Internet.

We're going to patch the Internet.

What piece of technology would you reach for first?

What are you going to do the first day? You got your cup of coffee. You open up your IDE.

What are we doing? What's the first thing we'd build? I'll tell you what I would have reached for.

I would have reached for a web server, maybe Apache, some ModSec rules, and I would throw it in there.

I need some business logic. The language of the day was PHP, and that's exactly how Cloudflare was built on day one.

It was just open source components glued together, a little bit of logic, a little bit of firewall rules.

And it didn't scale. How could it scale? We know that doesn't scale.

So you switch to NGINX, and then you say, okay, now we can handle 10,000 requests a second.

That's good. But we still need a much more flexible language, one that's actually going to let us run code, that's going to let us do those firewall rules at the edge.

So we ditch the PHP and move to LuaJIT.

So it's the Lua programming language just in time compiled, injected into NGINX.

What does this give us? This now gives us the beginnings of a platform.

It gives us a way to ship behavior changes to the edge at scale.

And meanwhile, the network is getting bigger. So it starts off with three data centers, then it's 20, then it's 40, then it's 60.

And the number of Cloudflare employees is not growing to keep pace with that.

It keeps growing because we have an SRE team that's able to leverage this growth.

So we have to have now tools which will say, from the time an engineer makes a change, how fast can that change make its way out to the edge?

Right? So now we've got a new problem, which is that configuration and code need to be shipped to 194 data centers in two seconds.

And that's actually how it works today. In two seconds after we push the button, it can go live at 194 data centers.

And now I've enabled our engineering teams to work on performance problems, security problems, reliability problems at scale.

And they don't have to worry about each other. They are able to think through, if I can get it working on here, I can try it in a test colo.

And from there, as long as the architecture is homogenous, I can ship it at scale to the edge.

And then something else happened. A customer said, hey, we kind of want to make some changes up there, too.

You know, when you get a request coming in from country X, can you send it to a different origin instead?

We want you to write a little bit of code that does that.

So we said, yeah, it's kind of a snowflake, but OK, we can do it.

So we put some code into the edge that does that.

And then they come back with a different request. Hey, if we get a cookie, do you think you can 301 to a different location?

Because we want to change the behavior based on that kind of traffic.

And again, we can do it, but this isn't optimal.

And we realize what we really need is the ability for our customers to be able to develop the same way that we have been able to develop.

We need to give them a programming language so they can change the request behavior and they can modify what happens at the edge with the same guarantees and the same fluidity that we have.
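
To make that concrete, here is a rough sketch of what those two customer requests might look like as a Cloudflare Worker. The alternate origin hostname, the redirect target, and the cookie name are invented for illustration; only the shape of the logic is the point.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Request from country X? Send it to a different origin.
  // request.cf.country is the two-letter country code Cloudflare attaches to each request.
  const country = request.cf && request.cf.country
  if (country === 'DE') {
    const url = new URL(request.url)
    url.hostname = 'eu-origin.example.com' // hypothetical alternate origin
    return fetch(url.toString(), request)
  }

  // Got a particular cookie? 301 to a different location.
  const cookies = request.headers.get('Cookie') || ''
  if (cookies.includes('beta_user=true')) { // hypothetical cookie name
    const path = new URL(request.url).pathname
    return Response.redirect('https://beta.example.com' + path, 301)
  }

  // Otherwise, pass the request through to the origin unchanged.
  return fetch(request)
}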

And now that's a real platform. It's a platform when you have something that both you and your customers can build on top of.

And that's what we mean by the start of serverless.

And the start is really the operative word here.

We simply don't know. I don't think anyone in our industry knows, and I would view with great suspicion anyone who claims to know, what exactly this is going to turn into in the coming decade.

What I can tell you is, though, if we zoom out, the trend is clearer.

So in the same way when you look at a stock graph and you're not really sure, wow, it's up and down a lot today, let's zoom out, which is the right metaphor for us being in the New York Stock Exchange.

You zoom out, you hit the five-year button, and now you sort of see a picture.

So let's zoom out. How did this story actually go?

Long ago, but not so long ago that I don't remember it; there's enough gray in my beard.

I remember thinking that we used to call those people gray beards, and now I'm a gray beard.

To ship software, you would get a box, and you would name the box with the name of the application you were going to run on it, right?

So the name of the box was Exchange. The name of the box was Wiki or whatever.

And you would get that box, you would spec it, you would ship it, you would wire it up, you'd get a couple people to put some software on it, you'd make sure it's okay, you'd roll it out slowly.

The whole thing took months. And in fact, you always knew it took months because the vendor's software usually had a year or a season in its name because that's how frequently they were shipping.

And it felt pretty good until a company called VMware comes along and says, we've got an idea.

What if we create the ability to put an operating system in a single process and in a file?

And we can ship the whole OS to you as one file. Now, whoa, wait a minute, this opens up a whole new era.

This is way faster. Now I can actually just get one box, get everything right working in a virtual machine, test the heck out of it there, and then ship the VM.

It's not months anymore. It comes down to weeks.

Way more fluid. And then a company called AWS and others like it say, I've got a better idea.

How about we run that VM for you? And all you have to do is log into a web interface or use a command line tool and push a button and we'll stand it up for you.

You don't even need to worry about the box. And that's the basics of the beginning of the public cloud.

Once we have that, now you can have applications that you can ship up in hours or minutes.

But Cloudflare's picture looks like this.

We have those 194 data centers within 100 milliseconds of 99% of the human population.

We have a way bigger global footprint. So what does that mean? It means that no one at Cloudflare actually knows which server anything is running on.

Our servers don't have names with the names of the software running on them. They have a long string of serial numbers.

And even the engineers are using a piece of software that manages those machines and how they're provisioned and how they come and go in the network.

So that's really the hallmark of serverless. It's not just that you don't know where the machine is.

Even the people who own the machine don't know what's running on it because a computer is scheduling all of this.

It's completely abstracted away from you. And that's a world now where you can start to think of the code that you're shipping not as a hulking OS.

It's not an application baked into a VM.

It's not even a container. It's the smallest sliver of application logic that you actually need to write to get your job done.

It's the few lines of code that you conceived the day you commissioned the project.

When you said, you know what I really need is a bit of logic that 301s this request, it's that.

Those few lines are what goes into the serverless architecture. And those few lines can be shipped in seconds.

And that's what we mean by the start of serverless.

If this is legitimate, though, what's actually driving it? Why now? The timing feels right.

It's been about ten years. That's about the time the pendulum clicks and you feel like there's another architectural shift coming in.

But why now?

Let's just gut check this for a second. Is there enough actually changing in our industry to warrant a real paradigm shift?

There are some things that have happened.

First of all, mobile phones are everywhere. Every single person in this room has a mobile phone in their pocket.

And our expectations on the latency on that device are basically zero.

We don't expect to see a spinning globe or an hourglass or anything like that.

You touch it, we want it to react instantly. The whole world has been trained to expect low latency.

We all hate VPNs. Michelle was echoing that.

No one likes it. For a while after I left my job, I had my own little startup, and we had no VPN.

For seven years, I lived in a VPN-less world. When I came back to Cloudflare, I was sort of horrified, like, oh, my God, I forgot about this thing.

You need to stand up VPNs when you're at a big company. What a pain in the neck.

But fortunately, right around the corner, Cloudflare had been working on Access, our identity and access management product.

And as soon as it rolled out, I made sure that every service that I ever worked on was behind it, because I didn't want to log into a VPN again.

And that's also driving this new change here.

Everyone is showing up with a ton of phones. When the iPhone first came out, you couldn't install third-party software on it.

Now you can. It's an attack vector.

It's just as vulnerable as anything else. The bad guys know this. Of course they know this.

There's a billion of them running around. There's a lot to be gained if they can break in.

So that heterogeneity is present on the outside of the network.

And then there's definitely a change in how we're building applications.

We have decided as an industry to go with microservices because it lets us think about the problem in a more detailed way and focus on the part that we want to fix.

So there are definitely these shifts that are coming.

And that is what is informing the vision for Cloudflare's serverless, which is: we want to get to a world where there is no difference between writing the few lines of code that do the application logic and deploying them, with latency coming down to zero.

What does it actually look like? We call this product Cloudflare Workers.

Here's the only architecture diagram in my presentation. It's exactly what you'd expect.

It's a slice of the platform that runs at the edge. It gets, today, HTTPS events from browsers, from eyeballs.

And we are using the Chrome V8 JavaScript engine to let customers and ourselves write logic that runs in there.

And because it is a modern Chrome engine, it also supports WASM, which means any language which can compile to WebAssembly is in scope.

So that includes C, C++, Rust.

Rust has actually been a very interesting leader here. And then what does it do?

That logic can then connect back to the origin. It can talk to the Cloudflare cache.

It can talk to our KV store. Soon it will have much richer storage systems it can talk to.

It can talk to whatever microservice you have on the back.

It can do exactly what you'd expect a bit of logic running on a highly connected Internet application to be able to do.
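
As a sketch in code of that architecture diagram, here is roughly what a Worker that checks the edge cache, falls back to Workers KV, and finally reaches the origin might look like. The KV namespace binding (CONFIG) and the origin hostname are assumptions for illustration, not anything the product ships.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
  const request = event.request
  const cache = caches.default // the Cloudflare edge cache in this data center

  // 1. Try the edge cache first.
  let response = await cache.match(request)
  if (response) return response

  // 2. Try the KV store (CONFIG is a hypothetical KV namespace binding).
  const pathname = new URL(request.url).pathname
  const stored = await CONFIG.get(pathname)
  if (stored) {
    response = new Response(stored, { headers: { 'Content-Type': 'application/json' } })
  } else {
    // 3. Fall back to the origin, or whatever microservice sits behind it.
    response = await fetch('https://origin.example.com' + pathname)
  }

  // Keep a copy at the edge for the next request.
  event.waitUntil(cache.put(request, response.clone()))
  return response
}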

So now the question is, what do you put there?

So we've got a new slice. Okay, great. You've given me yet another place, another headache, another thing I need to worry about, and now some code should move there?

Why? Why should I move some of my code there?

That sounds like more work. Well, let's just think about the properties of these three tiers we have now.

If it's your origin, you can secure the heck out of it.

You can completely lock it down. You also can update it whenever you want. You're limited only by the speed of your own CI and your own internal processes.

But the bummer is the whole thing is high latency to the user, and you can't scale it indefinitely.

On the other side, the browser, the client, the thing in my pocket, that thing you can't trust.

Who knows what else is on it? People can install things.

Bad guys can install things. Users can install things. Also, you can't guarantee that it gets updated on time.

If you have to go through an app store, or, even if it's a web app, you have to count on the user actually hitting refresh.

It's not at your discretion when the client code actually updates, but it is low latency to the user.

So that part's great. So here's the fun part about having code running in the Cloudflare data center.

We get the best of both worlds.

It's still secure. It's locked down. You are the only ones who can control it.

There isn't any other way to get in.

It's fast to update, really fast. Seconds, single digit seconds. And it's still low latency to the user.

Now, it's not as low latency as the browser in your pocket, but it's way, way lower than the origin server.

Why? Geography. We're in 194 data centers.

The signal still has to travel at the speed of light. It has to get there. That 100 milliseconds, and shrinking, is now to the Cloudflare data center.

So now, you've got this thing where it's the best of both worlds and you can start to think, okay, so what can I migrate into the center layer that is currently at the other two layers?

And here's the question I want all of you to consider asking yourselves.

What if it was zero milliseconds? What if it was absolutely transparent?

Well, I know what I would say. If that were the case, I'd say, okay, then how much storage do I get?

If it's zero milliseconds and infinite storage, forget it.

I'm going to glue the screen to the edge. I don't even need a CPU at the client, right?

But that's science fiction. So we need to talk about something a little more realistic, which is why I want to talk about drones walking dogs.

So drones walking dogs is an important problem, at least our CTO believes it's a very important problem.

The idea behind this fanciful exercise is imagine that some hot new startup wanted to come up with a technology because people have dogs that need to be walked.

You have drones that could conceivably walk one. Imagine a startup that came up with the wacky idea of getting a drone to walk your dog.

Right away, I know the product managers in the room are like, I could write the PRD today, right?

It would be awesome. It'll be a little app. You can say what kind of dog, what kind of drone, when it should pick the dog up, what route it should take; you can track how busy it is, whether it's uphill or downhill.

You can imagine a whole bunch of features, a premium plan, the enterprise plan for customer companies with a whole bunch of dogs, and it would be a great fun product.

You could have a field day with it.

And buried in there is a cool technical problem, because one of the things that happens when you have a great success with your drone walking dog business is that two drones are going to have two dogs on the same sidewalk at the same time.

And your dog hates that other dog. So now you have a coordination and routing problem, but a difficult one, and not one that you're going to be able to solve at the origin.

If you try to solve it at the origin by pre-planning all these routes, you're going to fail.

You're not going to be able to plan it enough, and there are too many changes that can happen in real time.

You could also try to solve it in the drone firmware, which would be a pain to update, and would also not have the global view that it needs.

So where should you solve something like, how do my drones not step on each other?

So how do I not get my dogs to get mad at each other?

The perfect place to solve a problem like this would be the Cloudflare Edge.

Especially perfect because the Edge is going to have a localized slice of all the drones in that region.

So you don't pay the cost of infrastructure worrying about what's happening in far-off places.

It's only where their application is actually running.

Where the problem needs to be solved is where you solve it. We're going through a transition.

We have to figure out what the answers to these questions are.

What in the client moves to the Edge?

What in the origin server moves to the Edge?

And how does client software adapt for this? We've got a couple of examples.

One is rendering. Let's start with client to the Edge. The client does a lot of work trying to figure out how to draw the pixels that it wants to see.

The more the Edge does that, the less the client has to do.

The Edge could do it a lot faster.

People are not upgrading their phones as much. The CPU in their pocket is the same one from two years ago.

So how do you make sure that your app still loads on phones that are older?

You put more rendering in the sky. AI inference, face detection, machine learning, voice recognition, all that kind of stuff.

That's more power than you can reasonably expect to have on a device.

But there's enough power in the cloud.

But not too far back. I can't go all the way back to the origin.

Again, the edge is an excellent match for AI inference. Next, server API calls. This one seems interesting because you might be like, well, come on, the one thing a client does really well is making server API calls.

Why do I need to put that in the cloud?

Because of batching. Because you don't want the client to make more than one request.

I just want it to reach up into the sky and say, tell me what I need to draw the pixels on the screen right now.

But if there are microservices behind that, a RESTful API with a bunch of different endpoints, the edge is the place at which you aggregate them, so that your architecture in the back stays clean but the response that goes back to the client arrives in one piece.
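
A minimal sketch of that batching pattern, assuming three hypothetical backend endpoints: the Worker fans out to them in parallel and returns one combined payload, so the client makes a single round trip.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Fan out to several backend microservices in parallel (endpoints invented for this example).
  const [user, feed, notifications] = await Promise.all([
    fetch('https://api.example.com/user').then((r) => r.json()),
    fetch('https://api.example.com/feed').then((r) => r.json()),
    fetch('https://api.example.com/notifications').then((r) => r.json()),
  ])

  // One response back to the client: everything it needs to draw the screen.
  return new Response(JSON.stringify({ user, feed, notifications }), {
    headers: { 'Content-Type': 'application/json' },
  })
}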

In the other direction, when you think about the origin server moving to the edge, there again you have rendering.

So rendering shows up on both sides, which tells us it's probably a good match for what should happen at the edge.

Authentication and authorization: that's our Cloudflare Access product.

All of its edge components are 100% built on Cloudflare Workers.

And DroneDeploy, a real company, not a dog-walking one, uses Cloudflare Workers to do part of their authentication and access.

And then, of course, security. The bodyguard is already there.

It's a very powerful layer at which to actually ask things like, wait, is this a bot?

Wait, is this malicious?

This is an excellent place at which to do that. How do I know? Because that's what the Cloudflare firewall does.

That's where we did it, too. That's how Cloudflare works.

So of course we know that's a great place for you to be able to do that.

Some of you might be thinking, I feel like he's talking about mainframes. Are we going back to the old era where the client becomes dumber again?

Has the cycle flipped again?

Are we back to thin clients? I don't know. Maybe. Difficult question to answer.

Again, I wouldn't trust anyone who answers any of these questions with great confidence.

I don't think we know. But this is true. It's definitely going to simplify the client, and that's a good thing, because the client is a pain to update.

It's going to make it easier to put more customer-facing logic in the sky.

It's going to lower the bill of materials for Internet of Things devices.

And it is definitely going to make it easier for us to support a heterogeneity of clients, which is also one of the biggest headaches of developing end-user applications.

Now the benefits of Cloudflare's Workers product are coming into focus.

The performance, the developer productivity; again, it's in JavaScript, a language you all know.

Everybody knows it. It's easy. It uses the Service Workers API.

The whole thing is built on very familiar technologies, the global scale and the low cost.

Why low cost? Because it's only running a fraction of the system.

You're not renting a whole server. It's just a tiny sliver of compute.

But that's enough. That's the only part you actually needed. But here's the point I really wanted to make.

If you go back to when the mobile revolution started, the first device that had an IP address that a consumer could actually carry around in their pocket came out in '97.

And what was the application we were all so excited to get in our pocket?

Calendar and email, which now sounds unbelievably unimaginative. How lame.

You get the Internet in your pocket, and the best you could come up with is, I want my Outlook and my email in my pocket.

That's it? That's all you have? What about social networks? What about ride sharing?

What about traffic routing on the Internet? What about health monitoring?

What about all the richness that is on our phone today? We didn't conceive it.

If someone had told me in 1997 you're going to use this device to summon a stranger and get in their car and that's how you'll get to the airport, I'd be like, you've got to lay off whatever it is you're actually on, dude.

But that's where it led.

So now I'm telling you there's a new sliver in the architecture.

There's a new place for you to write code. And I'm telling you, you can write performance and security code there.

I know because I did it.

That is lame. That's the best I can come up with right now.

And as far as new ideas go, the only thing I came up with was dog-walking drones.

Something new is going to open up here in the coming decade. That's what's going to make this story interesting 10 years from now, when we talk about what was so interesting about an edge network.

It's what goes in the bottom right.

We don't know what it is yet. I'm counting on you guys to help us figure it out.

Okay. Enough science fiction. What have we actually built?

So we started with a compute layer that was announced in 2017 and became GA in 2018.

Immediately, people loved it. Great. I can run stateless stuff at the edge.

That's not as interesting as being able to have a little bit of storage. So we added a key-value store, and then worried a great deal about the actual developer experience.

What do we mean by dev experience? What we mean is all the things that make it easy for an engineer to actually exercise this technology.

So that means command line interfaces, a free tier. You don't need to do anything.

You don't even need a website. We will set that up for you too. You don't need to read the hello world books.

We've got great templates. There's a new thing to let you run a static site.

So if you're trying to come up with an application to run, we've got great examples for you.

And all of these are things we're going to cover in the sessions later this afternoon.

So I encourage you all to take advantage of them.

Some of the people who actually built all that stuff at Cloudflare are here today to talk to you all about it.

What's coming tomorrow, figuratively tomorrow, we're not actually announcing anything tomorrow, is monitoring.

That's the big question. You open up a new tier in the stack, everyone wants to say, so what is it actually doing?

What is it doing?

How do I know that it's working? How do I know that it's not taking up too many resources?

That will be coming. And then coordination. Yes, it's a sliver.

As soon as you have one sliver, you're going to want to have them talk to other pieces of technology.

How do you make sure these isolates talk to each other?

Can they coordinate with a queue? That kind of logic is coming as well. And to show you that this is not just something we cooked up, that there are actual, real customers like yourselves who have solved real problems, I do want to wrap up with a few customer stories.

We're really proud of the customer logos behind it. Some of these are really forward-thinking companies and they have immediately grasped on what Cloudflare Workers could do.

And I just thought I'd tell you a couple of those stories.

Discord is one of the world's largest voice and text chat platforms, as you may have heard.

It's particularly the leader when you're talking about how gamers communicate with each other.

And Discord had the interesting problem of, we want to launch a game store.

So think about games. Big, giant files that need to be downloaded.

Sort of the poster child for a CDN.

But it's a store. I only want to let you download it if you've actually paid, right?

And furthermore, I update these games a lot and I also want to make sure that you only have to download the bit that you actually need right now.

You can imagine, right now as I'm describing this to you, that it's not that complicated.

It just needs a little bit of authentication and a little bit of sharding, so that I can answer, A, are you allowed?

And B, if you are allowed, here's the slice of the giant new game that you get.

Done in Workers. Immediate success.
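
The shape of that logic, not Discord's actual code, might look something like this: check entitlement first, then forward the Range header so the client only downloads the slice it needs. The auth service URL and token check are hypothetical.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // A: are you allowed? Validate the entitlement before serving any bytes.
  const token = request.headers.get('Authorization')
  if (!(await isEntitled(token))) {
    return new Response('Not entitled to this download', { status: 403 })
  }

  // B: if you are allowed, serve only the slice the client asked for by
  // forwarding its Range header to the storage origin behind the cache.
  const range = request.headers.get('Range')
  const url = 'https://downloads.example.com' + new URL(request.url).pathname
  return fetch(url, { headers: range ? { Range: range } : {} })
}

// Hypothetical entitlement check against a separate auth service.
async function isEntitled(token) {
  if (!token) return false
  const res = await fetch('https://auth.example.com/verify', {
    method: 'POST',
    headers: { Authorization: token },
  })
  return res.ok
}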

Scaled right away. Next: Cordial, an e-commerce company, used to sending out giant email campaigns on Black Friday.

Wanted to send out an email with a barcode in it. An actual barcode that you could print off and scan and take into a store and get a rebate for.

But they're like, man, we're not in the barcode generating business. Is this a new service?

Do we stand this up? Do we commission a whole team? Do we run an origin?

Do I commission servers? Someone said, no, let's try Cloudflare workers.

Let's put the entire application logic in Cloudflare Workers. Let's generate that thing.

There it is. Rendering, right? Rendering done in the sky back to the client.

The whole thing was built in Workers. There's zero origin footprint for this.
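
As a toy illustration of rendering at the edge with zero origin footprint, a Worker can build an image entirely in code and return it directly. This sketch draws a made-up stripe pattern as inline SVG from a query parameter; a real barcode would use a proper symbology like Code 128, which is more involved.

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

function handleRequest(request) {
  const code = new URL(request.url).searchParams.get('code') || '0000'

  // Toy "barcode": one stripe per character, width derived from the character code.
  let x = 0
  const bars = []
  for (const ch of code) {
    const width = (ch.charCodeAt(0) % 4) + 1
    bars.push(`<rect x="${x}" y="0" width="${width}" height="60" fill="black"/>`)
    x += width + 2
  }

  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="${x}" height="60">${bars.join('')}</svg>`
  return new Response(svg, { headers: { 'Content-Type': 'image/svg+xml' } })
}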

NPM, which is the giant JavaScript module repository.

They also had a similar problem: lots of packages, mostly famous for the public ones, but they also have private packages.

Private packages, again, need authentication. There's that theme again. Authentication at the edge.

So you make your CDN smarter by having an authentication layer at the edge, implemented in JavaScript, which of course they absolutely loved, because their previous provider was using a language that nobody in their company knew.

Now they switch to a language that everyone in the company knows, and it becomes that much easier.

And the fourth customer is such a perfect match that they built a video about it.

Can we actually play this video? Optimizely is the world's leading experimentation platform.

Our customers come to Optimizely, quite frankly, to grow their business.

They are able to test all of their assumptions and make more decisions based on insights and data.

We serve some of the largest enterprises in the world.

And those enterprises have quite high standards for the scalability and performance of the products that Optimizely is bringing into their organization.

We have a JavaScript snippet that goes on customers' websites that executes all the experiments that they have configured, all the changes that they have configured for any of the experiments.

That JavaScript takes time to download, to parse, and also to execute. And so customers have become increasingly performance conscious.

The reason we partnered with Cloudflare is to improve the performance aspects of some of our core experimentation products.

We needed a way to push this type of decision making and computation out to the edge.

And Workers ultimately surfaced as the no-brainer tool of choice there.

Once we started using Workers, it was really fast to get up to speed.

It was like, oh, I can just go into this playground and write JavaScript, which I totally know how to do.

And then it just works. So that was pretty cool.

Our customers will be able to run 10x, 100x the number of experiments. And from our perspective, that ultimately means they'll get more value out of it.

And the business impact for our bottom line and our top line will also start to mirror that as well.

Workers has allowed us to accelerate our product velocity around performance innovation, which I'm very excited about.

But that's just the beginning.

There's a lot that Cloudflare is doing from a technology perspective that we're really excited to partner on so that we can bring our innovation to market faster.

So there you heard it, right?

The same themes: it just works; JavaScript, I already know it.

10x the volume, way cheaper, right? Performance, scale, productivity, lower cost.

And that was it. Thank you so much.
