Building Secure Enterprise Networks with Cloudflare
Presented by: Patrick Donahue, Rustam Lalkaka
Originally aired on June 9, 2020 @ 7:00 AM - 8:00 AM EDT
Best of: Cloudflare Connect NYC - 2019
Historically, building enterprise networks has been a costly, complex, and difficult-to-manage effort. Modern trends, including software-defined networking (SDN) and network function virtualization (NFV), strive to revolutionize the way enterprises build and operate on-premise and cloud networks.
In this session, Cloudflare Directors of Product Patrick Donahue and Rustam Lalkaka demonstrate how Cloudflare is implementing these new networking techniques to build products that lower the total cost of ownership (TCO), enhance the security posture, simplify management, and increase the performance of enterprise networks.
English
Cloudflare Connect
Transcript (Beta)
Hello, everyone.
Hope you all had a good lunch. I'm Rustam Lalkaka. I was on stage a little earlier talking about Magic Transit and Spectrum.
We're here to go a little deeper on that.
I am the director of product who manages our performance and networking teams at Cloudflare.
Hi, good afternoon. Thanks for coming today. My name is Patrick Donahue, director of product management as well.
I focus on our security products.
I work very closely with Rustam. His team helps bring your bits performantly and reliably through our network.
And my team helps secure those both to your origin servers and infrastructure as well as to the eyeballs.
Why is my head so much bigger than yours on this slide?
I don't know. I think you're a little smarter than me, maybe.
We'll see. So as you heard Michelle say earlier, our mission is really to help build a better Internet.
And so what does that mean?
That means building products that are secure, reliable, and performant. So we want to talk to you about some of those products today and go a little bit deeper than we touched on earlier.
So we have a pretty ambitious agenda. This is actually a two-part session.
So we're going to go through part one, the first 30 minutes, and part two, the second 30 minutes.
We're going to start out with a demo of some of our layer 7 capabilities.
So we've been really investing. And you heard Usman talk earlier about where we started.
We've been investing considerably in this control infrastructure to allow you to take your security posture and put it at the edge and enforce it in front of your applications.
Then we want to talk to you about how we're going to take this down from layer 7 to layer 4 and layer 3.
And so Rustam's going to take you through some of the technologies that have been developed over the past decade that really allow this to happen.
This is not something that we could have done a decade ago.
But thanks to some of the advancements, he'll take you through, we really can do this today.
And then we're going to go deep on three specific products.
First one is Cloudflare load balancing.
Second one is DDoS mitigation. And the third one is Cloud Firewall. And this is a little bit more forward-looking.
The first two are live today and being used by customers.
We just onboarded one last night. And then we're going to go a little bit to the future.
So let's start with a demo. But before we get into it, I want to tell you about our Firewall platform.
So it's becoming the configuration plane for all the security services within Cloudflare.
So you see the rules engine today.
It's really flexible. We're adding capabilities to it, both on the matching side as well as the action side.
You're going to hear today from Sergi, who's going to talk to you about bot management.
Bot rules are configured within the Firewall.
You're going to hear from Urtifa, who's going to talk to you about Zero Trust.
We're starting to think about putting some of that stuff in the Firewall.
And we're continuing to build capabilities on it. But we want to take that from layer 7 down to layer 4 and layer 3.
And I want to show you first some of the new capabilities we have.
And then we'll talk about TCP and UDP and IP. So to set up the demo, it's a pretty simple website.
Imagine that you're running a site.
You're starting to see some attacks. And so I want to take you through identifying those within Firewall Analytics and Events, which is a tool we're really proud of.
And then I want to talk to you about creating a report of that and actually blocking that.
And so this is all stuff that exists today. And then I'm going to give you a preview of something that we're releasing a little bit later this year, which is the ability to go deeper on the actual content that's flowing through, not just properties of the request.
So to begin with, this fictitious business accepts requests from all over the world.
Imagine just taking messages that are posted to it.
And so my live demos don't always go extremely well. So I've got this pre-recorded.
I'm going to narrate it for you. And happy to go deeper on this as well as anything else after the talk.
Great, so you can see here, this is our firewall events.
And it looks like we have a spike of block events here.
And so we see seven and then another 20. And so I want to drill a little bit deeper.
So it looks like most of these are coming from the WAF.
And so we're going to click into WAF. And you can see the chart above changes.
I'm scrolling down to see the top events by source. We see, like Michelle mentioned earlier, Canada does seem to be the culprit here.
So we're getting a lot of requests coming from Canada.
If I want to drill down into the specific events, I can see in our activity log here that it looks like the attacker from Canada is trying to exploit a CVE that was recently released.
And so this is something that's now available in an RSS feed.
And I've got a little RSS feed installed here in my browser.
You can see rules that are both upcoming. So we're giving you a heads up of new rules that are coming, as well as ones that are recently released.
So I'm going to click in here and see this takes you to the developer dashboard.
As you can see here, this is a CVE.
We actually saw this on the 27th. I was on a flight. And I was talking to our threat analysis team.
And they actually saw the CVE come out, as well as exploitation attempts beginning immediately.
And so when Usman talks about virtually patching, this is really what he means.
We were able to quickly write a rule and protect against this attack.
And we see here that this attacker is attempting to exploit it.
So I'm just going to go ahead and print a quick report here.
And this is our first foray into reports. And so you'll see an upcoming report library that's going to be coming in the next quarter or two, where we give you much richer reports that you can schedule.
But I've got what I need here.
I'm going to go ahead and move to the rules engine where I can actually block this.
So I just want to point out one thing here. I've got what we call sparklines.
And so we added this recently to the rules dashboard. So as you write a rule, you can see, if you're familiar with a firewall that has counters, this is akin to that.
So you can actually click into that and see specific requests matching a rule.
And Sergi will talk about this. And as well, as we launch more and more on the bot side, you'll be able to get more detail on false positives.
So I'm going to pause here.
We're about to create a very simple rule to block this.
But we've identified what we want to block. And so if I go here, now I have a browser set up.
This is on the left side of the screen: a very simple machine that's routed through Toronto.
So this is a DigitalOcean machine in Canada.
And I've got a worker actually responding to the request. So there's no server behind it, or no server needed.
In this case, it's actually responding from the edge.
And so we're going to get a really simple rule, block Canada. Very nice.
Yeah. Very descriptive. And so just this is the visual rule builder, where you can go in and actually type in a country, multi-select.
This is drawing off of the GeoIP data that we collect at the edge.
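In Cloudflare's firewall rules expression language, the rule the visual builder generates might look roughly like this. This is a sketch based on the narration; the `ip.geoip.country` field existed in the rules language at the time, but check current documentation for the exact field name:

```
(ip.geoip.country eq "CA")
```

with the action set to Block.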
And I want to show you we can actually test this rule.
And so before you deploy a rule, you want to be very careful that you're not actually going to block legitimate traffic.
And so I can click Test Rule here.
And this is going to go back historically based on the event data we're collecting.
And we're actually going to show you what this rule would have matched if we deployed it.
So go ahead and click Deploy. And unlike the anecdote we heard earlier about 45 minutes, we deploy rules a lot quicker.
So I'm going to slowly scroll over here to the left.
Go ahead and send a request. And hopefully, it'll be blocked.
Perfect. So access denied. That rule deployed in real time: from the time of clicking Deploy to the time requests started getting blocked.
But this is a pretty generic rule, right?
This is not something that is that realistic. We want to imagine we want to actually open this up and look at the request body and say, OK, we were blocking with the WAF before.
But we actually want to get a little more specific and actually inspect the bytes flowing through the network.
So to do that, we're going to go ahead and disable this.
Just actually, we're going to edit this rule.
And we're going to go ahead and modify it. And so this is a capability.
It's called Body Inspection. That's going to be your payload inspection.
It'll be available later this year. And so we're going to say, hey, we want to block it unless it's good news.
Maybe the Raptors are in the playoffs. And we want to hear if they're winning the championship.
And so I'm going to modify this rule using the expression builder.
And this is our expression editor. This is based on a Wireshark type language where you can write rules that are based on properties of the request.
And now, we're basing it on the body of the request. And so I'm adding a function here.
So I'm lower casing the payload. And then I'm saying matches, which allows me to do a regular expression.
And this regular expression engine is extremely performant.
We've recently moved our edge to use Rust. As Usman talked about earlier, we had the evolution: NGINX, LuaJIT.
And we're doing a lot with Rust today on the edge.
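As a sketch, the modified "block Canada unless it's good news" rule from the narration might look something like this in the rules language; `lower()` and `matches` are real functions and operators in that language, but the exact regular expression and body field here are assumptions based on the demo:

```
ip.geoip.country eq "CA" and not (lower(http.request.body.raw) matches "raptors.*(win|won)")
```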
So I'm going to go ahead and relax this rule. And you can see the sparkline's hitting.
And so I'm going to send a post now. And I should be seeing it blocked unless the specific payload I want to see is allowed.
So this is more of a positive security model, a sort of a whitelist.
We can see here that it's blocked, because the Raptors lost.
And that's not good news. So we want to go ahead and change this.
And I'm matching with a regular expression on win or won, different character sets here.
And so both of them, you can see, have 200 for the first request.
And the second request should work as well. So go ahead and submit that.
Great. So really simple rule, really flexible, quick to deploy.
But you can imagine there might be a little bit more complicated use case. And I'll take you through one quickly.
It's a really powerful engine. And whatever you feel like doing with it, you're welcome to do.
In this case, we can imagine there's a credit card issuer maybe that's compromised.
And so changing your application and pushing code, running through your continuous integration and deployment systems in your test suite, that takes time.
And so it used to be you'd have to go and write a rule, and this is what Jen spoke about earlier, of a custom WAF rule.
And so you would literally write in a freeform text box. This still exists.
We're working to get rid of it. You would write a request to a human.
And so someone on our side would receive this. They would try to understand what it is you're communicating.
They would try to write modsec to match this. And from an SLA perspective, because we had to have a human handling this, this could take a couple days to actually turn around.
And this is not the speed at which we want to allow you to manage your security posture.
And so instead of doing this, just the example here is maybe the first four of the credit card number.
We've determined maybe our internal systems flagged it fraudulent.
Rather than actually asking someone to do it, I want to show you how you can match based on a form value, now that we're actually parsing it and allowing you to interpolate it into your rule.
So I'm going to go ahead and create a new rule here. I'm going to disable that old one, because I'm still using the Canada test site.
And I'm going to create a rule to block based on the credit card issuer.
And so when we get a request, we can run the raw body, if you like, against some sort of regular expression.
We also will parse the form values into data structures that you can interpolate into your rule.
And so as you may know, in a GET request or a POST request, the same query string key can be repeated multiple times.
And so in this case, we are going to match all of them.
And so we have a new capability. We've been adding functions to this library.
Some of you may be using workers for this purpose, and that's totally great.
You can use a worker to do really, really complex stuff. A lot of the customers that I speak to say, hey, it would be great to do this a little bit easier.
And so we're building up this function library to be able to let you do that.
And so you see here I'm writing. I'm not sure if you can read it, but it says I'm going to match on any request body form.
And then I'm going to have brackets to pull out the credit card field in the payload request.
And so if that matches a caret to start, again, this is a regular expression.
It starts with 4, 3, 2, 6. We want to go ahead and block that.
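Putting the narration together, the expression might look roughly like this; the `any(...)` quantifier and the `http.request.body.form` map are real parts of the rules language, but the `creditcard` field name is an assumption for illustration:

```
any(http.request.body.form["creditcard"][*] matches "^4326")
```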
So I'm going to deploy this rule. And this goes through the same deployment engine as any other simple rule does.
It pushes to the edge in about five seconds globally.
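The same deployment the dashboard performs is also exposed via the public API. Here's a rough Python sketch of building the request body for the `POST /zones/{zone_id}/firewall/rules` endpoint; the endpoint existed at the time of this talk, but the zone ID, field values, and rule description below are made up, and the payload shape should be checked against current API documentation:

```python
import json

ZONE_ID = "023e105f4ecef8ad9ca31a8372d0c353"  # hypothetical zone ID

def build_firewall_rule(expression: str, action: str = "block",
                        description: str = "") -> list:
    """Build the JSON body for POST /zones/{zone_id}/firewall/rules.

    The API accepts a list of rules, each wrapping a filter expression
    plus an action to take when the expression matches.
    """
    return [{
        "filter": {"expression": expression},
        "action": action,
        "description": description,
    }]

# Assemble the (hypothetical) credit-card-issuer rule from the demo.
rules = build_firewall_rule(
    'any(http.request.body.form["creditcard"][*] matches "^4326")',
    description="Block compromised card issuer",
)
body = json.dumps(rules)  # ready to send with an authenticated POST
```

From there, a single authenticated HTTP POST would push the rule through the same global deployment pipeline.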
So we're going to go ahead and post to the specific path we're looking at.
This is the shopping cart path, the buy endpoint. And I'm going to put in a good credit card here, just a string that doesn't go through any other validation but should be accepted.
I can see that that's played back to me by my worker, gone through successfully.
And now I want to change it to a bad issuer and just demonstrate the matching capability here.
Great, so we're blocked.
So as you can see, this is really powerful. It allows you to write rules that you'd like to match on.
We are doing a lot with our web application firewall as well as our firewall rules today.
So we just released the RSS feed recently.
The functionality I just showed you, you can match on arbitrary headers too.
So it used to just be predefined headers, user agent, things like that.
A lot of you have custom applications that you've told us in the custom WAF rules.
And this is sort of where we've gone to develop this. We've gone through, what are the things that you're asking us to do by hand that we should let you do automatically?
And we've come up with this list. So we're focusing on delivering that layer 7 control you saw down to the network layer.
This should probably look familiar from Rustam's presentation earlier.
We started out layer 7, layer 6.
Layer 6 is kind of the TLS layer. It's up for debate. If you'd like to debate, I've got a few hours after.
We can talk about where that fits. But we're bringing this down to layer 5, layer 4, layer 3.
And so protocols, TCP and UDP, being able to act on those and write rules to act on them.
IPsec traffic, a lot of the enterprise customers we talk to have IPsec tunnels.
We'd love to help you replace those with access, but we understand, for the time being, they need to flow through your network and be protected from DDoS attack.
And then other protocols at IP, including GRE.
So I want to set the stage, and I'm going to hand over to Rustam to talk about some of the technological advancements that help us evolve this network.
I want to talk about the typical rack that we see. So Rustam and I are here in New York a lot.
We meet with a lot of customers. Both in previous jobs, we've spent a lot of time in data centers taking a peek at what's out there.
And so most people that we see, of course, have some sort of router, right?
Can't really do a whole lot putting stuff on the Internet in a data center without a router.
This is probably a Cisco or a Juniper device. Anyone have a bug or support ticket they've ever opened with Cisco or Juniper on a router?
Our head of network engineering, if you follow him on Twitter, he struggles sometimes to get responses.
And so routers are great. They're a necessary evil, but sometimes getting the software stack updated takes a long time.
And so there are physical devices that are there.
Of course, you have a switch to connect your router to the rest of your network.
Then you have a WAN optimizer. We see a lot of these. These are devices from Riverbed that are shipped out.
They help control sort of what goes on to the network and maybe stuff comes in.
And so they try to do things around compression.
They don't have a lot of capabilities to actually control the network, right?
They don't have control of the underlying network, whether it's the Internet or some private circuit.
And then DDoS mitigation devices. And so we see a lot of Arbor, a lot of Radware.
A lot of these devices were put in place as ways to filter what is actually reaching your network.
Again, the problem is they can only control what shows up on your doorstep.
They can't actually control it getting there.
And so you typically buy these in some finite capacity. You put them in.
For redundancy, you probably have two of them. And then there's a firewall, right?
And so I say firewall slash VPN here. When I first got started in this industry, you would buy some separate VPN concentrator.
These days, you may have an ASA or something from Palo Alto, which combines the functionality.
As Jen and others have talked about earlier, we kind of got rid of our VPN or a need for a VPN at Cloudflare.
And so we replaced that with Access, not only for HTTPS applications, but for SSH, for our SRE teams and others.
But there's still a firewall need, right?
And so this is network level controls, layer 3, layer 4, to actually control ingress and egress to your network.
Of course, web application firewall, we see a lot of F5 devices out there, maybe Imperva.
And then load balancers, and so a lot of these are F5 as well.
Typically, we'll see TLS termination done on the load balancer.
And what does that mean? That means where you're terminating that connection and you're decrypting, inspecting the traffic.
The challenge with that is that even with TLS 1.2, there's still two network round trips to make.
And so if you're going all the way from the eyeball to the data center and back two times, that adds a lot of latency.
And so, as we come back to this, a lot of what we're trying to do is make sure we give you the security you need without any sort of performance trade-off.
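To put rough numbers on that round-trip cost, here's a small back-of-the-envelope sketch; the RTT figures are illustrative assumptions, not measurements:

```python
def time_to_first_byte(rtt_ms: float, tls_round_trips: int = 2) -> float:
    """Rough time until the first response byte arrives.

    Counts 1 RTT for the TCP handshake, plus the TLS handshake round
    trips (2 for a full TLS 1.2 handshake without resumption), plus
    1 RTT for the HTTP request/response itself.
    """
    return rtt_ms * (1 + tls_round_trips + 1)

# Terminating TLS at a distant data center vs. a nearby edge location:
far_origin = time_to_first_byte(80.0)  # assumed 80 ms RTT to the origin
near_edge = time_to_first_byte(10.0)   # assumed 10 ms RTT to a nearby edge
```

With these assumed RTTs, terminating at the edge cuts time-to-first-byte from 320 ms to 40 ms, which is the kind of win being described here.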
So real quick, what does the network look like in this case?
Of course, the Internet is there. We're seeing this sort of rack deployed in the data center.
Again, everyone's looks somewhat different.
And then if the data center is the core of it, you're adding your headquarters and you're adding your branch offices.
And you're trying to route this through typically the data center as a security choke point.
And so you've bought these big, expensive boxes that perform a security function.
You want to leverage them as best as you can and enforce a security policy in one point.
The challenge with that is that you're adding latency by routing from your offices through your data center.
And so you typically look to expensive MPLS lines or other sort of point-to-point circuits that are expensive to try to give you some performance and try to win some of that performance back that you're trading off.
The broader concern here is that you don't have a unified policy across these.
You've got teams that are maybe working with a different firewall in the data center than the branch office.
And so you have a lot of overhead from a configuration perspective.
You don't have the concept, typically, of sharing objects or object groups that are defined there.
Some of the Band-Aid box providers are starting to get into that.
But it's very difficult to go from a box up to a cloud solution, as we've seen.
We want to talk to you about a better way to manage your network and how we're transforming securely.
And so I'm going to hand it over in a second. Forgot one thing here.
The last thing is you have business partners connecting to you, typically with IPsec VPNs, as well as mobile workers.
So I'm going to hand it over to Rustam, who's going to tell you about some of the technological advancements that have been made that allow us to sort of rethink what this rack looks like.
Thanks, Patrick. So before we go deep on specific products we're building to sort of address some of the problems that we're talking about here, we thought it'd be useful to talk about some industry trends that have allowed us to do what we're talking about here.
So earlier in the day, you heard Usman talking about the rise of serverless computing.
We're going to revisit that rise of virtualization for a second because I think it's important to illustrate some of the trends that are happening in networking now.
So virtualization started becoming cool in the computing space about 15 years ago.
VMware came on the scene and said, we have a new cool trick.
We can put operating systems, stack a bunch of them on one piece of hardware, and that really changed a lot of things.
So to go back, the first generation of sort of computing hardware in your data center was one application on one machine.
As Usman said, Exchange runs on this machine, that machine needs to go in the rack, and then Exchange is running in your data center.
Super inflexible, kind of a pain.
Months and months were required to actually stand up new hardware and new applications, hard to manage.
No one wants to go back to this world.
Once VMware came on the scene and virtualization became mainstream and hypervisors became a word that people actually say, we went from one box, one application, to one box, lots of applications.
And this shortened deployment cycles from months to weeks.
So a lot better, but by no means as good as it can get.
What virtualization really allowed, though, was the rise of the public cloud, or private cloud.
And so this allows you to stop thinking about boxes at all and just provision computing capacity as you need it.
So this has driven drastic changes in the way you think about deploying computing infrastructure.
Instead of thinking about a data center full of computers running your applications, you can think about computing in a sort of elastic, on-demand way.
Enter the public cloud. The public cloud is now a part of the conversation when you're talking about deploying computing infrastructure.
Potentially even multiple clouds.
You might want to reduce vendor lock-in and avoid putting all your eggs in the Amazon basket and try and spread some things across Microsoft and Google and others.
This makes things really hard to manage. So this diagram that Pat just walked through is already pretty complicated, already pretty difficult to manage.
And now, instead of making this simpler, what we've seen with most customers is that they've added a column here.
They've added the public cloud.
It doesn't actually replace anything they already have deployed, at least not yet.
And then if you add another cloud on top of that, this gets even worse, right, from a management and sort of security posture perspective.
So what does this all mean for networking?
Well, networking is just starting to catch up with these trends in virtualization that have hit computing a while ago.
A couple years ago, a very influential paper was published talking about something called network function virtualization.
This is just a jargony way of saying people think that physical network devices that perform a specific network function are going to be virtualized in the same way that computing was.
Cloudflare was founded right around when that paper came out and we were able to take that thinking and those concepts and use it to build our network.
We sort of embraced that from day one.
We applied all the SDN, software-defined networking techniques we could think of to help grow and scale our network, and we became experts in how to do that.
We're excited to take that expertise and help you bring that agility and sort of evolve your enterprise networks.
So that expertise in software-defined networking is how we take commodity x86 hardware.
This is an actual server in a Cloudflare data center that we pulled out of the rack.
It's actually four servers.
An important thing to call out here is that this server is very similar to what your laptop is, right?
It's beefier, it's more reliable, it's bigger and more expensive, but at the end of the day, it's just an x86 computer running Linux.
We take computers like these, and a network footprint like this, to replace functions that were traditionally filled by expensive hardware boxes like Cisco routers, Arbor DDoS appliances, and F5 load balancers.
We replace those all with generic x86 hardware running really smart software.
And so this allows us to scale our network very quickly, provide services to you at low cost with good performance, reliability, and security.
It allows us to grow our footprint without buying specialized hardware every time, right?
We just rack and stack more x86 hardware and make sure the software's running and everything works.
Usman also talked a little bit about that this morning. So the important thing to take away here is that just like your parents told you when you were little, it's not what you look like on the outside that matters, it's the sort of smarts inside.
We really, really believe that's true. The software running on those servers is what actually makes networking smart, right?
You don't need expensive hardware performing these functions to do a good job.
You need really smart software. So some of you are sort of familiar with software-defined networking through the concept of software-defined WAN.
This is really the first real application that's hit the mainstream in enterprise networking of software-defined networking.
I think it's a little funny that software-defined WAN refers to a piece of hardware that helps you pick which Internet path to take out of your office.
We really want to software-define everything, whether that's your WAN, your core data center, your cloud, et cetera.
More importantly, we want to take data from all of the traffic flowing across our network, whether that's threat data or performance data, to make that software really, really smart.
So as an example of that, we have what we call Argo, and we often call Argo the Waze of the Internet.
So I'm sure all of you have used Waze or something like it. It collects data on real cars, real drivers driving around the Internet, or sorry, not the Internet, roads, the old-school Internet, and figures out which roads are congested, which are fast, and which crazy route drivers should actually take to ensure some consistent quality of commute, or service, between two points.
We do the same thing, but on the Internet.
So Argo takes data on traffic moving across the Internet and figures out which routes are congested, which are not, and by doing this, we're able to get the best of the Internet, good performance and low cost, without the bad, without the inconsistent jitter, latency, et cetera, that really potentially hurts business applications.
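As a toy illustration of the idea (not Argo's actual algorithm), picking the least-congested of several measured paths can be sketched like this; the path names and latency numbers are made up:

```python
def pick_route(candidates: dict) -> str:
    """Return the candidate path with the lowest measured latency (ms)."""
    return min(candidates, key=candidates.get)

# Hypothetical round-trip latencies from an eyeball to an origin, either
# direct over the default BGP path or relayed through intermediate
# data centers that happen to have less-congested links right now.
paths = {
    "direct": 180.0,   # default path, currently congested
    "via-ord": 95.0,   # relayed through a Chicago location
    "via-iad": 110.0,  # relayed through an Ashburn location
}
best = pick_route(paths)  # continuously re-evaluated as measurements change
```

The real system works on live measurements of traffic flowing across the network rather than a static table, but the core decision is the same: route around congestion.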
So all that data lets us understand what threats and performance on the Internet look like.
And so you can kind of think of Cloudflare as a big software-defined network with a bunch of data and machine learning expertise to make that software smart.
So let's revisit the rack that I showed you earlier.
This should look pretty familiar.
One thing I didn't mention before is a lot of these boxes have things in common.
They're big, they're expensive, they have some sort of ethernet connectivity, typically.
You probably have different support contracts with each of them.
Really, the thing that we focus on is the release cycle. So the release cycle for these boxes is measured not in days or weeks or months, but typically years.
And so a lot of customers we talk to, in some cases, have boxes that are end-of-life in the field.
We actually just onboarded one last night with that. But having boxes that are outdated, especially in different data centers, is very difficult to manage.
And so as Rustam mentioned, we are using software plus data to redefine this.
And so I wanna talk to you about what that means for each of these boxes.
And then I wanna talk to you about what that looks like and how we're helping you on that journey.
So let's go back to WAN optimization. So we talked about the fact that just putting a box and choosing a path to go out to the Internet is not gonna be that helpful.
Maybe you can compress the protocol, maybe you can do some other optimizations at the TCP and UDP layer, but you can't do a whole lot about the underlying network.
And so we've introduced something called Magic Transit. And so we've been onboarding customers to Magic Transit, and they're already seeing an incredible performance pickup.
And so they're coming to us largely for security reasons, but they are finding increased performance on their network.
And we haven't even added Argo to it yet.
Argo Smart Routing, which as Rustam just mentioned is the Waze of the Internet, is something that we're layering on top of Magic Transit.
It's just the sheer size of our network and the connectedness of it that it's allowing us to have these performance improvements to begin with.
And so we are looking forward to letting you connect your data centers, your offices, and your mobile workers really efficiently together and reducing latency and jitter and driving performance up.
So the next box, the Arbors and the Radwares of the world: we're using the same Magic Transit technology to connect you to our network.
And we're gonna be giving you controls around the network protocols that are passing through that.
And so today, we can onboard you and protect your entire infrastructure from attack.
You can communicate those rules to us, and this is sort of how we launch products.
We'll launch them with, we'll get them out the door, we'll let people start testing them.
There's some humans involved typically to help configure them, but we never wanna stop there.
We think anything that's defined from a configuration perspective should be exposed to you.
And we're gonna talk in a little bit more depth in a second about what that's gonna look like.
Some of the charts I showed you earlier, some of the images I showed you earlier will look quite familiar there, and that's by design.
The other problem with these boxes is that you are typically paying for some capacity that you're not necessarily using, right?
So you need to pay for port speed. You may have a one-gig circuit where you're using 100, 200 megabits, and you're spiking to that.
You need to buy and pay for boxes based on that from a licensing perspective.
We don't think that's right.
We think you should only pay for what you're actually using. Next up is firewall and VPN.
So you'll hear a lot more later today about Zero Trust and how we're replacing the VPN capabilities with Cloudflare Access.
We wanna actually move into the enterprise firewall market.
And so we're initially building you the controls you need to, from a magic transit perspective, connect your network and protect it from attack.
But we also wanna start moving into the space where Cisco and Palo Alto and others are.
And so we're gonna start marching up that chain of functionality, and I'd love to chat with you about what it would take to actually remove a firewall from your data center.
What do you need from a software-defined network perspective plus data to make that happen?
And so we're really excited about that, and we're gonna talk a little bit more about that in a second.
Web application firewall, we won't spend much time there; we kind of covered it earlier.
I will say that we are hearing great feedback in terms of the way that you would like to configure this.
And so we're starting to kind of decouple some of the configuration and let you define things based more on policies.
And looking forward to sharing news on that soon with you.
And then lastly is the load balancer. And so we're already, today, we have really smart load balancing capabilities.
You can do this out at the edge.
At layer seven, or layer six rather, we can terminate TLS and route appropriately to you.
And we're gonna start to bring that down to the network layer.
And Rustam's gonna talk to you in depth about what that looks like and how we're able to do that.
So let's go back to the network that we had before.
So of course, the Internet hasn't changed.
It's got a little bit faster, more reliable, and that's actually what's allowed us to build a lot of these technologies.
Previously, you were paying for these expensive lease lines.
Now you're able to route this over the Internet using our smarts, and in some cases, our private backbone, where it makes sense.
So you can connect your data center to Cloudflare today via Magic Transit.
We will displace your transit provider, both protecting you from attack and giving you transit services on the egress.
And so you can do that today, and you can eliminate a lot of these boxes that are in your data center.
So you no longer have that choke point; with software-defined security controls instead, you're freed from the latency hit you had before.
And so you can connect your headquarters in the same fashion.
So we can give you a transit connection in your headquarters, allowing you to configure things in software, not having to configure multiple systems, multiple firewalls, WAN optimization, things like that.
We're replacing that with Magic Transit with Argo smart routing.
Likewise, we are doing this for the branch office, as well as for your business partners and mobile workers.
They can use Cloudflare Access to connect, rather than having to use IPsec VPNs.
And of course, your cloud providers.
So you have routers today in the cloud. They're often virtualized versions of hardware routers.
We're soon gonna be onboarding customers that wanna connect securely from their data centers to cloud over Cloudflare's network, so that they can define security controls in one place at the edge, rather than separately.
I think I saw an AWS employee tweeting about how difficult it was to configure security groups for their own software.
You might have to do security controls for several different versions, and this is a way that you can do it in one place with us.
So as a goal, we are looking for a single place to manage and enforce security across your enterprise network.
And this is all done with performance in mind.
We truly believe there should not be a trade-off. And so Rustam's gonna talk to you about how we're doing that with load balancing, and then I'm gonna go into a couple other products.
Thanks. So let's talk about how Cloudflare is working to replace hardware load balancers using our global edge network and software running on it.
So before we go deeper there, let's talk about why historically it's been a bad idea to do the types of things we're doing on general-purpose computers.
So traditionally, doing networking at high speed on a computer was really, really slow.
It was basically impossible. That's why these purpose-built switches and routers and load balancers existed in the first place.
And so to understand why, let's look at a very, very basic architecture diagram for a computer.
So on the left, you have what we call the kernel, and the kernel sort of houses the network drivers and all the things actually talking to hardware and to the network.
On the right, you have user space, and this is where things like your web server, whether it's Apache or IIS or whatever it is, is running, things like your database software, et cetera.
And so when network data comes into your computer, it goes into the kernel, it's handled by the network driver, and then if it has to go to an application that's sort of actually executing logic, it has to get copied from kernel space to user space, and then the reply goes from user space back into kernel space and out.
The problem here is every time that magenta line crosses that sort of barrier in the middle, that's a really expensive context switch in computing terms and sort of incurs a significant latency penalty.
So the reason networking on a computer has traditionally been slow is because it has to copy the same data over and over again for safety reasons.
So to address this, Cloudflare actually invented new techniques on Linux to do modern high-performance networking.
This is something called XDP, the eXpress Data Path, and it was sort of born out of some ideas we had around how we could squeeze more performance out of that diagram.
So what express data path allows us to do is write really, really high-performance, safe code that runs in the Linux kernel, avoiding that context switch.
And this allows us to build complex networking logic running at wire speed, running on commodity hardware.
I mentioned safety earlier: eBPF, the virtual machine that actually runs the code we're talking about here, enforces all sorts of guarantees to make sure that code doesn't do weird things in the kernel, but it also allows you to load code at runtime.
So we can change the behavior of our devices without restarting the computer.
So one of the things we built with this is something we call Unimog.
So Unimog, in real life, is this go-anywhere, indestructible truck.
In Cloudflare land, it's a code name for a really, really high-performance, indestructible TCP and UDP load balancer.
It's built using Express Data Path, which I just spoke about, and it moves packets from where they land to where they need to be in 60 microseconds or less.
And when I first saw these charts that showed this data, I was like, wow, that's kind of slow.
60 milliseconds is not that great. Then someone reminded me that there's a thousand microseconds in a millisecond.
So that's how we do load balancing at the edge.
That's how Magic Transit's load balancer actually works, and we're excited to bring it to all of your data centers.
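Unimog itself isn't something you can download, but the core idea, steering every packet of a connection consistently to the same server, can be sketched in a few lines. This is only an illustration: the server names are made up, and the real system uses proper consistent hashing and flow tracking rather than a naive modulo.

```python
import hashlib

# Hypothetical machines in one data center.
SERVERS = ["server-a", "server-b", "server-c"]

def pick_server(src_ip, src_port, dst_ip, dst_port):
    """Hash the connection 4-tuple so every packet of a flow lands on the same server."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return SERVERS[int.from_bytes(digest[:8], "big") % len(SERVERS)]

# Packets from the same flow always map to the same server.
a = pick_server("198.51.100.7", 40000, "203.0.113.1", 443)
b = pick_server("198.51.100.7", 40000, "203.0.113.1", 443)
assert a == b
```

A production balancer also has to keep existing flows pinned when servers are added or removed, which simple modulo hashing does not; that's where consistent hashing and per-flow state come in.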
Thanks. So that same technology, Express Data Path, has allowed us to take what was traditionally done in a subset of scrubbing centers and actually do it everywhere at the edge.
So I want to talk a little bit about how we are working with Magic Transit and DDoS protection to actually eliminate the need for these DDoS appliances.
So similar to what Rustam just mentioned, it was very slow to do this because you had to actually bring this data to some centralized location.
And if you think about it, why was it actually designed that way?
It's because these companies were buying very big, expensive boxes, and they needed to put them in some centralized choke point because they couldn't run these operations at the edge because they were using expensive networking hardware.
They didn't have these Linux networking fundamentals that had been developed at Cloudflare to actually enforce these policies.
And so this was built, this whole technology was built on Express Data Path.
We're able to onboard customers in hours rather than days or weeks or months, which it would take to actually ship equipment out.
And we're able to mitigate attacks that are much larger than would fit down a single pipe.
So I want to talk a little bit about how this actually works.
And bear with me if you can't see this. I'll talk you through it. But the way that we're ingesting traffic today is using something called Anycast.
And so we take your IP space, we announce this from all of our routers at the edge, and we are ingesting and attracting your traffic from our peers.
So Cloudflare's network is so massive that as soon as we tell someone, hey, you can get to customer X through this route, we're going to start receiving that traffic.
And we want to receive that traffic because we want to make sure that the only stuff that you're getting on your network is clean traffic.
And so we want to inspect that and drop the bad traffic, both from anomalous patterns that we're detecting using machine learning, but also from the network firewall rules that you've given us that'll tell us this is the only stuff that's acceptable to show up on my network.
So what you see here, this number one, is one particular data center. And so we have a router there.
It will bring this traffic in, but then it uses that Unimog technology to fan it out to all of these servers.
And so each server, as Jen mentioned earlier, is performing every operation in every data center.
And that's really important for us to be able to scale.
All we need to do is ship more hardware out, and it runs the same software stack.
And so what's happening on these machines is as these packets are coming in, we are sampling these packets.
So locally, we're looking at one out of every 100 packets, and that's a very fine-grained sampling rate.
And that allows us to spot attacks right there using that local compute hardware.
And so because we're able to inspect this using Express Datapath, we're able to block this even before it goes to your data center.
And that's happening on each machine in the data center.
When some attack is detected by a given machine, it's then communicating with every other machine in that data center.
It's multicasting it around. And so those rules are communicated.
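As a toy model of that local loop, sample roughly one packet in a hundred, count per destination, and emit a drop rule once a destination's sampled volume crosses a threshold. The threshold and rule format here are invented purely for illustration:

```python
import random

SAMPLE_RATE = 100   # inspect roughly 1 in 100 packets locally
THRESHOLD = 50      # sampled packets per interval that looks like an attack (made up)

def detect(packets):
    """Return drop rules for destinations whose sampled volume crosses the threshold."""
    counts = {}
    for pkt in packets:
        if random.randrange(SAMPLE_RATE) != 0:
            continue  # this packet was not sampled
        counts[pkt["dst"]] = counts.get(pkt["dst"], 0) + 1
    # In the real system, rules like these would then be multicast
    # to the other machines in the data center.
    return [{"action": "drop", "dst": dst} for dst, n in counts.items() if n >= THRESHOLD]

# A flood of 100,000 packets at one target yields ~1,000 samples, well over threshold.
flood = [{"dst": "203.0.113.9"}] * 100_000
rules = detect(flood)
assert rules and rules[0]["dst"] == "203.0.113.9"
```

The point of sampling is that the detection work stays cheap enough to run on every machine, while attack-scale traffic is still statistically impossible to miss.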
This is a very smart network. However, sometimes attacks are more sophisticated.
And even with that high sampling rate, or low sampling rate, depending on how you define it, we need more computational power to detect the attacks.
And so what we're doing is we're sending samples one out of 8,192 packets. We're sending this back to what we call our core data centers.
And our core data centers are these big, beefy deployments with a whole lot of computational power.
And so we're looking at samples and we're trying to spot correlations for attacks.
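Even at one in 8,192, attack-scale traffic still produces plenty of signal for the core to correlate. Some quick back-of-the-envelope arithmetic (the attack sizes are illustrative):

```python
SAMPLE_RATE = 8192  # one out of every 8,192 packets is sent to the core

def samples_per_second(packets_per_second):
    """How many sampled packets per second reach the core at a given attack rate."""
    return packets_per_second // SAMPLE_RATE

# Even a 10 million packet-per-second flood still surfaces
# over a thousand samples every second for correlation.
print(samples_per_second(10_000_000))  # → 1220
```

In other words, the sampling rate trades away almost nothing in detectability for large attacks while keeping the volume shipped back to the core manageable.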
And so as Michelle mentioned earlier, our system gets smarter. We have 20 million websites running through us.
If there's an attack on one that our system detects, it might look at characteristics of the TLS handshake, the HTTP request, the headers, the way that the TCP/IP stack is designed on the machine.
These are patterns that a human wouldn't be able to spot, but machine learning can.
But this is computing sort of out of band.
And if an attack is detected, a rule is actually pushed out using that same configuration engine I showed you earlier.
So within five seconds of detection, the rule is pushed out and the attack traffic is dropped locally.
And then we're sending the traffic back to you today over GRE tunnels.
We're using a really novel technique known as Anycast GRE.
I think we invented it. It lets your router talk to just this one IP address.
If you've ever put a system like this in place before, you'd normally have to bring up many tunnels; here you only need to set up one, and we'll route you via BGP to the closest Cloudflare data center for reliability reasons.
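One practical detail when returning traffic over GRE: the encapsulation adds headers, so the tunnel's usable MTU shrinks below the usual Ethernet 1500 bytes. The arithmetic, assuming a plain GRE-over-IPv4 tunnel with no optional GRE fields:

```python
ETHERNET_MTU = 1500
OUTER_IPV4_HEADER = 20  # the delivery IP header added by encapsulation
GRE_HEADER = 4          # base GRE header, no checksum or key fields

tunnel_mtu = ETHERNET_MTU - OUTER_IPV4_HEADER - GRE_HEADER
print(tunnel_mtu)  # → 1476, the MTU typically configured on a plain GRE-over-IPv4 tunnel
```

Getting that MTU (and MSS clamping) right on the tunnel interface is the kind of thing that trips people up when standing up GRE for the first time.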
And so we'll soon be supporting what are known as PNI or private network interconnects.
These are effectively cables flung across the data center.
And so some of our customers have said, GRE is great, we can get up and running quickly, but we don't want to have any exposure to the Internet at all.
And so we'd like to route just through you. And so that's what that's going to let us do.
And so we look forward to moving customers to that in the near future.
Lastly, we've recently introduced a SOC at Cloudflare.
We'd like to detect these attacks with systems; they scale a lot better than people.
But in some cases, there's something that's either so big or so obscure that we need someone to take a look at it.
And so we have a team now, 24/7 around the world, that is monitoring these customers' networks, and they may push out manual mitigation rules.
We see this as a learning opportunity. Any time an attack needs to be mitigated manually, a ticket's filed, and then our DDoS team will actually go in and say, how can we protect against this automatically next time?
So I want to talk you through one example with a customer you may have heard of: Wikipedia.
We got in touch with them in September; they were bearing the brunt of a very large attack.
Someone was blasting them with a lot of traffic volumetrically, trying to knock them offline, and actually doing a pretty successful job of it.
And so Wikipedia, as you might imagine, they have specific privacy concerns around, I don't necessarily want a third party to see the request of what someone's looking at.
It could be dangerous in different parts of the world.
So they wanted us to help protect them at layer three without actually decrypting stuff at layer seven.
And so this is a public blog post by ThousandEyes, a company that does a lot of network monitoring.
And this is actually a really cool visualization you can go to on their website, which shows how Cloudflare inserted ourselves in front of Wikipedia's network to be able to mitigate the attacks.
And I want to talk in a second about the security performance trade off.
Because when people hear that, they often think, hmm, that's going to slow it down.
It's actually not the case, as we'll see in a second. So this is a chart showing global performance for Wikipedia.
So this is eyeball performance. Someone in a browser going to a website, loading Wikipedia.
And so you can see here on the left-hand side, this was sort of the baseline.
These spikes that you see here are the actual attack.
So when someone's sending a tremendous amount of traffic, that's going to raise the response time and it's going to slow down the site.
And so this is what we saw. This is what a third party saw and was monitoring.
And so this is where we inserted ourselves in front of Wikipedia as the bouncer for those packets coming in.
The really important takeaway from this slide is that there was no performance trade off here.
It's negligible: 63 milliseconds before and 66 after, well within the margin of error.
And so we are now sitting in front, or we were at this time, sitting in front of Wikipedia's network, protecting them without actually slowing down their site.
So this is something we're really proud of. So what is coming up, or what are the properties of Magic Transit that we're really excited about with DDoS mitigation?
So as Rustam mentioned, this is built in-house using our own software. We've met with customers here in New York that have said, hey, I had a problem with my appliance, and I placed a call to my vendor, and they said, great, we'll work on a firmware update and get back to you in a few weeks.
That's not how we like to operate.
When we find a bug, we like to fix it and we can push it out. And we've spent a lot of time reducing that release cycle.
Usman and his teams are very focused on that.
Today you can use BGP to bring your own IPs to us, your own IP prefixes.
You can actually mix and match within the prefixes, layer three, layer four, layer seven.
So you can say, I want to protect this at layer three, but I actually want to run this through layer seven on the WAF.
I want to take advantage of your application filtering.
We can get you onboarded in hours. We actually did an onboarding here last night from the hotel.
And we can return traffic to you over GRE. PNI, as I mentioned, is coming soon.
This uses, like any Cloudflare product, every machine in every data center.
So today it's 194 cities. If you're using this as our network grows, our ability to withstand attacks against you grows as well.
And in a lot of cases today, even before we've added Argo Smart Routing, customers coming from providers with those centralized security choke points are actually seeing performance boosts.
And we expect to see a lot more of this once we add Argo to the process.
As I mentioned, layer three, four, and seven together. Cool. So I want to talk to you about something that's coming in the future.
Everything you've heard today is available.
We're onboarding customers. We're getting great feedback. We're adding features to the product.
I want to talk to you about Cloud Firewall and Cloudflare Access.
So Access exists, of course, and customers are using it to replace VPNs.
But the other part of a firewall is actually dictating who can come in and out of your network at the network layer.
And so we're building controls, and I'll show you in a second what that looks like, to let you specify this with the same rule-based engine you can do for layer seven.
So here's the diagram.
This is a mock-up. This will look slightly different as it starts to roll out, but it should look quite familiar.
This is the interface you saw earlier that I took you through in the recorded demo.
And so what I want to point out here is that we've selected on the top in the selector, we've selected a specific data center.
And so if you think back to that network diagram, this could be your core data center.
And so today, you likely have Palo Alto or Cisco or FortiGate or whatever your expensive firewall is having these rules configured on them.
And so what we're doing is we're first adding this traffic direction selector.
So you can say, I want to apply the rule if stuff comes into my network or stuff goes out of my network.
And then we're giving you more controls in that same expression builder library to define things on TCP ports, UDP ports, and actually really complex stuff.
So if you've used tcpdump, for example, you've written these tcpdump/libpcap-style capture expressions.
And so we're actually going to let you write that on our firewall.
So you'll actually be able to write very complex rules that maybe are matching TCP headers or byte segments within the packets.
And we're going to put this control in your hands.
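As a toy model of that rule engine, here's direction plus protocol and port matching in the spirit of those filters. The rule format is invented for illustration; the actual product exposes this through its expression builder:

```python
def matches(rule, packet):
    """True if the packet satisfies every field the rule specifies."""
    return all(packet.get(field) == value for field, value in rule["match"].items())

def evaluate(rules, packet):
    """First matching rule wins; default is to allow."""
    for rule in rules:
        if matches(rule, packet):
            return rule["action"]
    return "allow"

rules = [
    # Hypothetical policy: drop inbound UDP to port 53, allow inbound HTTPS.
    {"match": {"direction": "ingress", "proto": "udp", "dst_port": 53}, "action": "drop"},
    {"match": {"direction": "ingress", "proto": "tcp", "dst_port": 443}, "action": "allow"},
]

pkt = {"direction": "ingress", "proto": "udp", "dst_port": 53}
assert evaluate(rules, pkt) == "drop"
```

The real engine additionally supports matching on raw header bytes, which is what the libpcap-style expressions buy you over simple field equality like this.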
Here are some charts that we're going to actually show you for that.
So anytime we build a product at Cloudflare, we're very focused on giving you the detailed analytics for that product.
And so this is a view similar to what you saw before when we're blocking application layer attacks.
This is network layer attacks.
And so we're going to start showing you things like how many bits per second is just passing through your network to help you manage your spend, your network utilization, your capacity.
But also from an attack perspective, I hear a lot from customers wanting to say, you've done a great job blocking the attack for me.
Can you give me some more detail about it? And so this is our way to automatically give you that detail.
And then anything that we don't have in the dashboard, the SOC will be able to augment.
And then we'll continue to work it into the application.
So we're really excited to get this into your hands, and we'll start talking about it a bit more publicly on our blog soon.
This is a top-N view.
And so this uses the same underlying engine to show you when you've drilled into an attack, where did that attack come from?
Obviously, a lot of IPs can be spoofed.
And so we can tell you the spoofed IPs and where they appear from. But there's also different details about the source, ASNs, autonomous system numbers, as well as ports, to give you the ability to then feed that back into writing rules.
So I think, just to kind of wrap up here, we are very focused, Rustam and I and our teams, on working to bring security and performance to your enterprise network.
And so you shouldn't have that trade off.
And we want to do it in a sane way. We want to give you the controls that you can deploy your security posture to the edge and make it easy to manage.
So we're excited to talk more about it and show you more about that soon.
So thank you. Thanks for having me.
Cloudflare Access allows you to securely expose your internal applications and services, enforce user access policies, and log per-application activity, all without a VPN.
This video will show you how to enable Cloudflare Access, configure an identity provider, build access policies, and enable Access App Launch.
Before enabling Access, you need to create an account and add a domain to Cloudflare.
If you have a Cloudflare account, sign in, navigate to the Access app, and then click Enable Access.
For this demo, Cloudflare Access is already enabled, so let's move on to the next step, configuring an identity provider.
Depending on your subscription plan, Access supports integration with all major identity providers, or IDPs, that support OIDC or SAML.
To configure an IDP, click the Add button in the login methods card, then select an identity provider.
For the purposes of this demo, we're going to choose Azure AD.
Follow the provider-specific setup instructions to retrieve the application ID and application secret, along with the directory ID.
Toggle Support Groups on if you want to give Cloudflare Access the ability to read specific SAML attributes about the users in your Azure AD tenant.
Enter the required fields, then click Save. If you'd like to test the configuration after saving, click the Test button.
Cloudflare Access policies allow you to protect an entire website or resource by defining specific users or groups to deny, allow, or ignore.
For the purposes of this demo, we're going to create a policy to protect a generic internal resource, resourceonintra.net.
To set up your policy, click Create Access Policy.
Let's call this application Internal Wiki.
As you can see here, policies can apply to an entire site, a specific path, apex domain, subdomain, or all subdomains using a wildcard policy.
Session duration determines the length of time an authenticated user can access your application without having to log in again.
This can range from 30 minutes to one month.
Let's choose 24 hours. For the purposes of this demo, let's call the policy Just Me.
You can choose to allow, deny, bypass, or choose non-identity. Non-identity policies enforce authentication flows that don't require an identity provider (IDP) login, such as service tokens.
You can choose to include users by an email address; emails ending in a certain domain; Access groups, which are policies defined within the Access app in the Cloudflare dashboard; IP ranges, so you can lock down a resource to a specific location or whitelist a location; or your existing Azure groups.
Large businesses with complex Azure groupings tend to choose this option.
For this demo, let's use an email address. After finalizing the policy parameters, click Save.
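For reference, a policy like the one built in this demo can also be expressed against Cloudflare's Access API. The payload below is a rough sketch with abbreviated fields and a placeholder email address; check the current API documentation for the exact schema before relying on field names:

```json
{
  "name": "Internal Wiki",
  "domain": "resourceonintra.net",
  "session_duration": "24h",
  "policies": [
    {
      "name": "Just Me",
      "decision": "allow",
      "include": [{ "email": { "email": "user@example.com" } }]
    }
  ]
}
```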
To test this policy, let's open an incognito window and navigate to the resource, resourceonintra.net.
Cloudflare has inserted a login screen that forces me to authenticate.
Let's choose Azure AD, log in with the Microsoft username and password, and click Sign In.
After a successful authentication, I'm directed to the resource.
This process works well for an individual resource or application, but what if you have a large number of resources or applications?
That's where Access App Launch comes in handy.
Access App Launch serves as a single dashboard for your users to view and launch their allowed applications.
Our test domain already has Access App Launch enabled, but to enable this feature, click the Create App Launch Portal button, which usually shows here.
In the Edit Access App Launch dialogue that appears, select a rule type from the Include drop-down list.
You have the option to include the same types of users or groups that you do when creating policies.
You also have the option to exclude or require certain users or groups by clicking these buttons.
After configuring your rule, click Save.
After saving the policy, users can access the App Launch portal at the URL listed on the Access App Launch card.
If you or your users navigate to that portal and authenticate, you'll see every application that you or your user is allowed to view based on the Cloudflare Access policies you've configured.
Now you're ready to get started with Cloudflare Access.
In this demo, you've seen how to configure an identity provider, build access policies, and enable Access App Launch.
To learn more about how Cloudflare can help you protect your users and network, visit teams.cloudflare.com.
Optimizely is the world's leading experimentation platform.
Our customers come to Optimizely, quite frankly, to grow their business.
They are able to test all of their assumptions and make more decisions based on insights and data.
We serve some of the largest enterprises in the world, and those enterprises have quite high standards for the scalability and performance of the products that Optimizely is bringing into their organization.
We have a JavaScript snippet that goes on customers' websites that executes all the experiments that they have configured, all the changes that they configured for any of the experiments.
That JavaScript takes time to download, to parse, and also to execute, and so customers have become increasingly performance conscious.
The reason we partnered with Cloudflare is to improve the performance aspects of some of our core experimentation products.
We needed a way to push this type of decision making and computation out to the edge, and Workers ultimately surfaced as the no-brainer tool of choice there.
Once we started using workers, it was really fast to get up to speed.
It was like, oh, I can just go into this playground and write JavaScript, which I totally know how to do, and then it just works.
So that was pretty cool.
Our customers will be able to run 10x, 100x the number of experiments, and from our perspective, that ultimately means they'll get more value out of it, and the business impact for our bottom line and our top line will also start to mirror that as well.
Workers has allowed us to accelerate our product velocity around performance innovation, which I'm very excited about, but that's just the beginning.
There's a lot that Cloudflare is doing from a technology perspective that we're really excited to partner on so that we can bring our innovation to market faster.
There's too much that goes into creating high-quality video today that's just simply still too hard for many of our customers.
Most cloud providers don't actually provide a turnkey solution for video.
They provide bits and pieces of the equation, but there's no provider that provides an end-to-end solution from rendering to streaming.
They'll provide bits and pieces that now you have to kind of cobble together to build an amazing product.
Our focus now is how do we simplify and streamline that by providing a deeply integrated, simple, and easy-to-use solution.
A big part of what we do at Cloudflare, as we focus on helping build a better Internet, is take complicated things and make them simple: enabling customers to literally go to Cloudflare, log in, point their video asset at Cloudflare, and then on the other end pull a player out of Cloudflare and place it wherever they need it to deliver the video, and that's it.
There's a classic trilemma where you can do something well, fast, or cheaply, and we're striving for all three because we really need all three.
We need it to be really good because otherwise why would anyone use the service?
You got an entire Internet out there, use something else.
We need it to be fast because people have no patience, and we need it to be cheap enough that we can stream to millions of users without it becoming uneconomical.
So you have to get all three, and Cloudflare is a really important part of offering all three.
If you want to deliver a video to anybody on the globe, there really is no better network to put it on than Cloudflare because we can guarantee the highest quality experience to somebody who is in New York City, and someone who's in Djibouti, and someone who's in Sydney.
A botnet is a network of devices that are infected by malicious software programs called bots.
A botnet is controlled by an attacker known as a bot herder. Botnets are made up of thousands or millions of infected devices.
These bots send spam, steal data, fraudulently click on ads, and carry out ransomware and DDoS attacks.
There are three primary ways to take down a botnet: disabling its control centers, running antivirus software, or flashing firmware on individual devices.
Users can protect devices from becoming part of a botnet by creating secure passwords, periodically wiping and restoring systems, and establishing good ingress and egress filtering practices.