A Stronger Bridge to Zero Trust
Presented by: Annika Garbers
Originally aired on June 23, 2022 @ 2:00 PM - 2:30 PM EDT
Join our product and engineering teams as they discuss what products have shipped today during Cloudflare One Week!
This segment features Cloudflare Product Manager Annika Garbers discussing how new enhancements to the Cloudflare One platform make a transition from a legacy architecture to the Zero Trust network of the future easier than ever.
Visit the Cloudflare One Week Hub for every announcement and CFTV episode — check back all week for more!
Transcript (Beta)
Hello everyone. Welcome to Cloudflare TV. My name is Annika.
I'm on the product team here at Cloudflare and it is Cloudflare One Week, which has got me so excited.
Cloudflare One Week is one of our signature innovation weeks where we announce lots of new products, partnerships and initiatives around a specific theme and all focused on how we are helping build a better internet.
This week the focus is on how we are bringing networking and security together with Cloudflare One, which is our unified SASE, secure access service edge, platform that helps customers get to a Zero Trust architecture regardless of where you are starting or what your network looks like today.
So we have a full week of announcements, new resources that you can use to help plan out this journey with your team and get from where you are now to this promise of Zero Trust and this vision of secure access service edge in the future.
And one of the blogs that we published today talks about a handful of enhancements to the Cloudflare One platform that make this transition easier, especially for companies who are starting from a relatively legacy network architecture and trying to figure out: where do you start?
What's the first step that you can take?
And so today what I'll walk through is what this transition can look like and how Cloudflare One can help you achieve it.
But before we jump into Zero Trust and talk about the future and what we're moving toward, let's take a step back and review where we've been.
And I want to do this by whiteboarding out a couple of different network architectures that we have heard about from our customers, that we've talked to them about what this transformation has looked like over time.
So I want to start here with this network architecture that's referred to as Castle-and-moat, or you might have heard of this as the perimeter security model.
This is really classically how people thought about building corporate enterprise network architecture.
And what it would look like is, we'd start with... I'm going to put in the middle here one or two.
Let's do two in this case.
We'll talk about a larger organization with central data centers, where most of the applications live that employees would need to access, and maybe also external users, if you're a company with Internet-facing resources that your customers need to get to.
So we're going to have Data Center 1.
And here, right next to it, I'll put Data Center 2.
Some companies maybe would only need one of these data centers, but in this case, I'll put in this backup one as well.
So we'll say that this is sort of the primary that this organization uses to serve their applications.
Then they have this backup one that they can fail over to in case there's an issue.
And in the case of this organization, again, this is a made-up example, but it's really built from anonymized reference architectures that we've seen from lots and lots of customers who've described how they're going through this network transformation journey.
So let's say that they have these data centers on the same physical campus as their headquarter locations.
So if I put...
we've got Data Center 1 and 2. Down here, we'll put headquarters. Yeah, let's say HQ1 and HQ2.
And the really cool and helpful thing about having the data centers and the headquarters on the same corporate LAN, the same corporate local area network, and I'll draw a box around these to kind of signify that back here, is that the traffic from all the users that are sitting in that headquarters can go directly to the data center on that local area network without having to leave the building.
Really. Or maybe it leaves one building and goes to another building on the same campus, but it's really close by so we can consider these entities connected on the organization's local area network.
We'll put LAN for Local Area Network here to kind of signify that.
LAN and a LAN. Ok, so this is great news.
We have people sitting in the headquarters that can access their applications in each of these places.
What happens if we need to do that sort of failover operation that I was talking about, where these datacenters need to connect to each other?
That's where forms of private connectivity would come in.
So whoever is setting up this network architecture would call up their telecom provider and say, Hey, can I get some MPLS?
An MPLS line, multiprotocol label switching, which is a form of private connectivity, or maybe dark fiber or a point-to-point line.
There's a bunch of different ways that you can procure this private connectivity to get these locations connected to each other.
But now this is great.
We have people sitting in this headquarters. They can get to these applications.
If they need to fail over, they have a way to access the applications hosted in the other data center as well, so everybody can talk to each other.
And then maybe there's additional locations that this organization has that are kind of outside of this local area network of just these two headquarters here.
Maybe we have...I'll represent these maybe as circles because I don't want to label each one of them individually, but maybe we've got branch offices.
I'll just draw a couple of them.
We can use these as reference, these little branches.
And then maybe in this case we were talking about a retail company or a manufacturing company.
And so they have either some manufacturing spaces or some retail stores, you can use your imagination or map this to what your organization's network looks like, what are those other locations.
So, if each of these more remote locations needs to get access to resources that are hosted in the data center, again, what are you going to do?
You're going to call up your telecom provider and buy some MPLS.
So you've got MPLS lines.
Now they're connecting each of these guys.
Do a little...
Some private lines...
Great. And then so now we have sort of our corporate LAN established here.
And then the last two things that I want to draw on this kind of traditional castle-and-moat architecture are the reason for the moat existing.
So, if we've got this moat, what's it around?
In this case, we've got pretty much a group of trusted people: users or device traffic from our branch offices and our retail locations, or from our headquarters.
They've got those applications in the data center.
Everything right now is within the sort of castle of this architecture.
It's all trusted, it's all protected.
And so the place where security teams would start to think about this or get involved is what's the gap in the moat?
Where is the sort of drawbridge that reaches out to something or allows traffic to go in and out of the corporate network?
That's the Internet.
We're going to draw the Internet up here.
And in this traditional architecture, the way that access to the Internet works is there's basically a breakout from each of these data centers, where there's a stack of hardware devices, things like firewalls, intrusion detection systems and other security appliances that are filtering this, enabling this access.
So this is really secure.
Security teams are happy about this architecture because you can have...
I'll put some, we'll do...
This is going to be a firewall.
You've got like a set of firewall appliances here, and maybe here, that can filter all of this traffic.
And so there's no way for bad guys from the Internet to get to any of these other locations, or for any of these other locations to get direct access out to the Internet and maybe reach bad things out there, like malware or other security threats that could compromise the devices that you have at these remote locations.
Basically everything flows through this center place where you are managing all of this connectivity and the security from the stack in your moat, or at your drawbridge.
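To make that trust assumption concrete, here is a tiny illustrative Python sketch, not any real product's logic: inside the perimeter everything is implicitly trusted, and the only enforcement point is the firewall stack at the Internet breakout.

```python
# Toy model of castle-and-moat trust (illustrative only, not any real product's logic).
# The only enforcement point is the firewall stack at the Internet breakout;
# anything already "on the LAN" is implicitly trusted.

CORPORATE_LAN = {"hq1", "hq2", "datacenter1", "datacenter2", "branch", "store"}

def central_firewall_permits(source: str, destination: str) -> bool:
    # Stand-in for the hardware firewall / IDS stack at the drawbridge.
    blocked_destinations = {"known-malware-site.example"}
    return destination not in blocked_destinations

def is_allowed(source: str, destination: str) -> bool:
    # Traffic between two on-LAN locations is never inspected at all.
    if source in CORPORATE_LAN and destination in CORPORATE_LAN:
        return True
    # Anything crossing the perimeter goes through the central stack.
    return central_firewall_permits(source, destination)

print(is_allowed("branch", "datacenter1"))                  # True: trusted, uninspected
print(is_allowed("branch", "known-malware-site.example"))   # False: filtered at the breakout
```

That implicit trust inside the box is exactly what the rest of this walkthrough is going to pick apart.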
So now we've got this architecture and this worked for a while.
Many corporate LANs still look very similar to this: access to the Internet is broken out through one central place, and branch office and retail locations are connected via this private connectivity.
But a lot of things have changed since this architecture emerged and people started designing their networks this way.
So let's talk about that.
What's different now? Well, the first thing that happened is that this Internet bubble's got a lot bigger.
The percentage of applications... we'll call this sort of Generation 2. The number of things that people need to access that are on the Internet is now huge.
A bunch of these applications that used to be in these data centers have probably moved to the cloud.
And so the balance of traffic flows that need to go to self-hosted applications versus Internet-hosted applications, things like SaaS, has really changed and really increased.
And so applications have kind of left the data center, and now there are many more of these flows out here, which turns these connections here, this breakout to the Internet, into a bottleneck.
Because we are talking about...
I'll make this bigger. This is a bottleneck now because there's more and more of these flows and so what used to be just maybe a small amount of traffic exchanging here is now a lot.
And then also the other thing that happened: applications left our castle environment, and users also left our castle environment.
And that was really accelerated with the global pandemic that started over two years ago now.
We really realized people can, in many industries, work from anywhere, whether that's at their home office or coffee shop or somewhere that's not on their corporate LAN.
So now we've got a whole bunch of users.
We're running out of icons here. What are we going to use? I'll do them as triangles.
Now we've got all these remote users, okay, that are all hanging out, that still need access to applications in the data centers, but also need secure access out to the Internet.
The way that most companies have solved this problem, or at least traditionally, is by providing a VPN.
So these remote users will install a VPN client on their machines, whether that's a laptop or a phone.
And then they take a path over this virtual private network to a data center.
And then from there their traffic can access these applications, but it's also filtered out to the Internet.
But remember this bottleneck that we talked about, this isn't just impacting traffic from these headquarters and these other locations that need to get out to the Internet.
It also now is impacting all of these remote users. And so people working from home can have potentially a really miserable experience if they're far away from wherever that VPN concentrator is.
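As a rough back-of-the-envelope illustration (all the round-trip times below are invented for the example, not measurements), here's why backhauling through a distant VPN concentrator hurts remote users:

```python
# Back-of-the-envelope comparison of backhauling through a VPN concentrator
# versus breaking out near the user. All round-trip times are invented.

rtt_user_to_vpn_concentrator_ms = 80   # remote user -> data center hosting the VPN
rtt_vpn_concentrator_to_saas_ms = 60   # data center -> SaaS application
rtt_user_direct_to_saas_ms = 30        # user -> same SaaS via a nearby breakout

backhauled_ms = rtt_user_to_vpn_concentrator_ms + rtt_vpn_concentrator_to_saas_ms
direct_ms = rtt_user_direct_to_saas_ms

print(f"Via the VPN concentrator: ~{backhauled_ms} ms per round trip")
print(f"Via a nearby breakout:    ~{direct_ms} ms per round trip")
```

Every extra round trip of a chatty application pays that penalty again, which is why the experience degrades so noticeably.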
And so we don't have enough space even on this, on this small whiteboard to kind of draw out a realistic explanation of how fragmented all of these traffic flows have gotten.
But you get the general idea, right? We used to have everything in this one castle and now traffic can come from anywhere, go to anywhere out on the Internet.
Everything is really fragmented.
And so this model of having all of your security enforced in one place, on your corporate LAN, doesn't make sense anymore.
The other thing that's emerged here is just more sophistication and different kinds of attacks and reconnaissance and things like that that bad actors are able to do.
And because all of these traffic flows are so fragmented, and you've got a whole host of applications you need to figure out how to protect, if someone compromises one of those applications, if they figure out one way to get into your network, then in this traditional model, this VPN model, once you're on the network, you can get to anything inside of here.
This causes a big problem because if this attacker compromises one part of your network, then they can move laterally and get to other resources they really should not have access to and maybe do a lot more damage than you would expect.
So we have this kind of castle-and-moat where we started. We have this sort of smorgasbord of fragmented connectivity flows, and there have been solutions that have emerged to kind of patch these problems.
These are really point security solutions that virtualize functions that used to live in these hardware boxes, so maybe you have something like a virtualized firewall, where you're taking that component and hosting it in the cloud somewhere.
But the issue with this approach is that you now still need to figure out how to get the traffic flows from all of these different locations through that piece of virtualized hardware.
It's the same software that used to run on your data center, running on somebody else's box.
And so now you've got to get your traffic flows from here through here, and then maybe not everything's moved out here.
So you need to do kind of a loopback or sometimes we call these like hairpins or trombones in the networking world, basically traffic bouncing around a whole bunch to be able to be filtered in all of these ways.
Or maybe your organization can't take that performance hit, and you've just said, okay, we're going to make a trade-off with the security of these traffic flows, because we need people to get reliable, consistent, performant access to the things they need to do their work, which is not a spot that you want to be in as a security team.
So a few years ago, this framework of Zero Trust emerged to solve that problem that we were talking about earlier, of lateral movement on a network.
The idea is that instead of letting someone access everything once they're on your network, you make sure that every request into and out of your network is authenticated and allowed access to only the specific resources that it needs, not the entire network in general.
So that's sort of a framework of a security approach that we want to take to solve that problem.
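To contrast with the perimeter sketch earlier: a Zero Trust check is evaluated per request against one specific resource, and being on the network grants nothing by itself. Here is a minimal illustrative sketch; the users, policy, and device-posture field are all hypothetical.

```python
# Toy Zero Trust check (illustrative only): every request is authenticated and
# authorized against the specific resource it targets. Users, apps, and the
# device-posture field are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_posture_ok: bool
    resource: str

# Hypothetical per-application policy: who may reach what.
POLICY = {
    "payroll-app": {"alice"},
    "wiki": {"alice", "bob"},
}

def is_authorized(req: Request) -> bool:
    # Identity AND device posture must check out for this one resource;
    # being allowed into "wiki" says nothing about "payroll-app".
    return req.user in POLICY.get(req.resource, set()) and req.device_posture_ok

print(is_authorized(Request("bob", True, "wiki")))         # True
print(is_authorized(Request("bob", True, "payroll-app")))  # False: no lateral reach
```

Notice there is no "on the network" shortcut anywhere in that check, which is the whole point.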
And then more recently, Gartner coined the idea of secure access service edge or SASE, which is delivering that framework, helping you achieve Zero Trust by taking these functions that used to live in a stack of hardware boxes in a central datacenter and actually delivering them in the cloud on the Internet as a really cloud-native service as opposed to a virtualized service.
So the idea is to put those security functions at a global edge, locations all over the world that are super close to your users and your applications, and then control them still from one control plane.
So you don't have to log into a lot of different virtual boxes to make configuration changes.
So in contrast to what those previous architectures looked like, this new approach would be something more like this.
I'm going to draw this global edge, just sort of like a little, kind of like an abstract cloud here, because the edge is actually sort of really hard to visualize.
But you can think of this as just a whole bunch of different locations that are all interconnected together, right?
So maybe I have one point of presence or colo or data center or whatever you want to call it over here in Dallas, which is where I am right now.
Maybe you have another one based in Atlanta.
We'll do some non-U.S. cities. Maybe you've got one in London.
Maybe over here there's one in Singapore.
We'll do Paris, etc.
So you can imagine all of these locations that kind of make up this service edge are sort of spread around the world, close to your users, wherever they are, and then also close to your applications, wherever they are.
Because it's not just delivering the connectivity close to your laptop, it's also being able to optimize the middle mile path so you can get the traffic from where it lands to where it needs to go really efficiently.
So you've got this service edge now, and it can actually deliver all of those functions that used to exist in your data center.
So I represented this again really abstractly with this kind of like one firewall box.
In reality, for a given data center in a corporate network architecture, this would probably be a really big stack of things.
You've got a whole rack of different functions...
WAN optimizers, firewalls, intrusion detection systems, VPN concentrators, all of this.
Instead of those boxes living on your physical network and hardware, they're all delivered from this service edge.
Network functions.
And critically, with this approach, any of these nodes that make up this network edge, Dallas, Atlanta, London, everywhere that we've drawn out here, is able to deliver this full stack of network functions.
So it's not that you go to London if you need a firewall and you go to Atlanta if you need a VPN.
Wherever your users connect to, they're going to be able to have their traffic filtered through that entire stack right there.
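As a schematic, the flow at any one of those locations looks the same: wherever the traffic lands, the whole ordered stack of functions runs right there, and only then is anything forwarded on. This Python sketch is illustrative only; the function names and rules are made up, not a real product API.

```python
# Schematic of processing at the service edge (illustrative, not a real API):
# whichever location the traffic lands at, the same ordered stack of
# functions runs right there before anything is forwarded.

def firewall(packet):
    # L3/L4 filtering; e.g. drop telnet.
    return packet if packet.get("port") != 23 else None

def secure_web_gateway(packet):
    # Filter Internet-bound requests against threat categories.
    return packet if packet.get("host") != "known-malware-site.example" else None

def access_check(packet):
    # Per-request Zero Trust authorization.
    return packet if packet.get("user_authenticated") else None

SECURITY_STACK = [firewall, secure_web_gateway, access_check]

def process_at_edge(packet):
    for network_function in SECURITY_STACK:
        packet = network_function(packet)
        if packet is None:
            return "dropped"
    return f"forwarded to {packet['destination']}"

print(process_at_edge({"port": 443, "host": "wiki.internal.example",
                       "user_authenticated": True, "destination": "datacenter"}))
```

The key design point is that the stack itself is identical everywhere; only the location where it runs changes.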
And so now you've got this service edge.
It's delivering all of your network functions.
Now what you need to figure out is how to get your traffic to it.
So let's redraw those locations that we started out with, your data center. We'll put the data center down here.
Data center. We've got headquarters, HQ...
Now maybe we have some cloud properties.
Like maybe you have put some of those applications that used to live in the data center...
you got AWS, got some GCP, you got Azure. More and more organizations that we talk to are really moving forward with implementing a multi-cloud or poly-cloud strategy so that you can get the best of breed features from each of those individual cloud providers.
And then we've got our friends over here.
There's these retail locations, we've got our branch offices, and we've got our remote users.
All of these different entities here...
I'll just put those names on here again. We can kind of use those interchangeably, right?
It might be a little bit of a different network stack at a store versus an office.
And then you got users.
All of these different entities then connect to the service edge at the closest location to them.
And ideally, there's no configuration required for you to actually figure out which of those locations is the closest.
The ideal version of this service edge, this network that's delivering these services, is that it's anycast.
So that decision actually happens automatically.
So if you've got a data center that is in Paris, you'll connect to the Paris location automatically.
Well, just thinking about this service edge as kind of one network, it's the same deal with the headquarters, same deal with your cloud properties.
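With anycast, every location advertises the same IP address, so each connection is steered to the nearest point of presence with no per-site configuration. In reality that decision is made by BGP routing, hop by hop; this simplified sketch just models the outcome, and the distances are invented.

```python
# Simplified picture of anycast: every location advertises the same address,
# and each source lands at whichever location is "closest". In reality BGP
# makes this decision hop by hop; the distances here are invented.

POPS = ["Dallas", "Atlanta", "London", "Paris", "Singapore"]

DISTANCE = {  # made-up "network distance" from example sources to each PoP
    "paris-datacenter":  {"Dallas": 110, "Atlanta": 100, "London": 10, "Paris": 2, "Singapore": 150},
    "texas-remote-user": {"Dallas": 5, "Atlanta": 25, "London": 110, "Paris": 115, "Singapore": 180},
}

def nearest_pop(source: str) -> str:
    # Same destination address everywhere; the "choice" is just proximity.
    return min(POPS, key=lambda pop: DISTANCE[source][pop])

print(nearest_pop("paris-datacenter"))   # Paris
print(nearest_pop("texas-remote-user"))  # Dallas
```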
So there are lots of different ways to get all of the different entities that you have, both on your original corporate network and the new things that you're adding, like cloud properties and remote users, connected to this edge network. Most traditionally, that could look like a direct connection.
For example, if your data center is co-located or located close by to this sort of service edge or network that's delivering these functions, you can just interconnect or drag a cable directly between your networks to get reliable, dedicated access.
It could also look like some kind of tunnel mechanism over the Internet.
So maybe a GRE or IPsec tunnel that you could set up between your location and this network edge.
It could look like an application-level tunnel. So maybe you want to do sort of the most Zero Trust thing here and access only specific applications.
And in order to do that, you could install a piece of software on your application server or on a jump host so that you can access different parts of your network from right there.
And then for remote users, that could look like a device client.
So a little piece of software that you would install on the client, I have one running on my laptop right here, or that could be a phone as well, that acts essentially as a forward proxy to get traffic from that location, wherever your user is sitting, to the service edge.
And again, the key detail here is anycast, right? You should never have to think about where these locations are or which one you have to connect to.
You just connect to the network that's delivering these functions in this cloud- native way and then the network takes it from there, both by applying those security policies really close to wherever you connect and also sending the traffic to its destination.
And that could be out on the Internet, or it could be somewhere else on your corporate WAN, because once you have one of these locations connected, any new location that you add has the ability, or can be granted the ability, subject to your security policies that make sure it's allowed to do that, to communicate with anyone else that's on your network here.
And so really conceptually, this looks a lot like this sort of hub architecture that we started with.
Another sort of way to think about this is hub and spoke, where you have a central location and then traffic kind of funneled through it to get out to the Internet or other places.
It looks really similar, except that the hub is this distributed global network.
The hub is everywhere and those security functions are applied everywhere but controlled from just one place, which is really, really powerful and exciting.
So we've gone from this traditional architecture to this sort of intermediary step with some of these problems that are kind of getting patched with solutions like virtualized functions to this really fundamentally new and exciting secure access service edge architecture that delivers these functions in a different way.
And by doing this, your organization can really, truly achieve Zero Trust, because any request from any of these connected networks can go through that full stack of security policies and get authenticated and authorized based on just that single request, not the entire network access model of traditional VPNs.
And so if this is where we've been going and where we're going to, how do you get from A to B?
How is it practical to actually make that step? Well, it's not going to happen overnight.
It's not realistic to sort of snap your fingers and get from picture one to picture two immediately, but what you can do is start with a single pain point and then build from there.
So get a quick win. Start building momentum in your organization around this transition to Zero Trust and this new networking model, and then kind of knock down additional wins from there.
And so for some organizations, what that looks like is starting with Zero Trust access to applications.
That could be maybe contractor access to apps, so that you don't need to have third-party entities install your VPN client on their machines, or maybe they can't, and so this is an alternative.
And then from there, you can expand into remote user access, and then from there, office user access, and then eventually no one is on the VPN anymore.
This is a path to replacing that.
And you can either start with one application and give it access through a Zero Trust approach rather than the VPN and sort of do it one application at a time and eventually get everything off of your old VPN.
Or you can start with a setup that looks kind of similar to the VPN that you have today, where you install a device client, you've got an IPsec or GRE tunnel to connect you to the network, and then you have the security policies in between, but you get the performance improvements and other things that that service edge affords you, and then again transition those applications to the Zero Trust model.
So that's one way to start. For other organizations, that looks like starting with focusing on filtering traffic that's outbound to the Internet.
So that means deprecating hardware firewalls that maybe perform this function at your branches or other locations today, and moving to this cloud-delivered Secure Web Gateway approach instead, where you can apply Internet-bound traffic filtering policies for user devices or any of those locations that we talked about from one place, with the same policy control regardless of where that traffic comes from.
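The point of the cloud-delivered gateway is that one outbound policy gets evaluated identically no matter where the request came from. Here is a minimal illustrative sketch; the categories, hostnames, and rule are hypothetical.

```python
# Toy Secure Web Gateway rule set (illustrative): one outbound policy,
# evaluated identically whether the request came from a branch, a store,
# an office, or a roaming laptop. Categories and hostnames are hypothetical.

BLOCKED_CATEGORIES = {"malware", "phishing"}

SITE_CATEGORY = {  # a real gateway would resolve this from threat intelligence
    "known-malware-site.example": "malware",
    "docs.example.com": "productivity",
}

def gateway_allows(destination_host: str, source_location: str) -> bool:
    category = SITE_CATEGORY.get(destination_host, "uncategorized")
    allowed = category not in BLOCKED_CATEGORIES
    print(f"{source_location} -> {destination_host}: "
          f"{'allow' if allowed else 'block'} ({category})")
    return allowed

gateway_allows("docs.example.com", "branch-office")
gateway_allows("known-malware-site.example", "remote-laptop")  # same rule, same verdict
```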
And then a third approach that many organizations start with as well is with the WAN.
If this initiative is coming from the network side of your team, you can transition to WAN-as-a-service, using commodity Internet circuits to move away from MPLS without sacrificing the performance or security that your organization needs.
We have a step-by-step guide that is available as of this week, and we're super excited about it: zerotrustroadmap.org. If your team is looking around for a good place to start, this can be a really helpful tool to figure that out.
You can go through here and see the major components involved, broken down and mapped to what that looks like for your team, along with a reference architecture for what that will look like.
There's ideas in here around the level of effort required for each one of these individual steps so you can get a sense of where it makes sense to start for your team.
We really break it down step by step and then also give you an example of an implementation timeline and some relevant products that can be used to address each one of these requirements.
And this isn't just Cloudflare.
We've taken a vendor-agnostic approach to this so that you can get a sense of what tools are available for you to kind of prove out and validate and see what would fit in your environment or what you might already have today that could help you solve some of these problems without having to introduce new tools.
So we've talked about so far these architecture models, sort of the gen one, two, three.
We talked a little bit about how you would make that transition from one to the next.
And finally, I kind of want to wrap up with a plug for why we are excited specifically about how Cloudflare can help.
I'm super jazzed to be in the product organization at Cloudflare because I think we're really well positioned, actually the best positioned, to help organizations make this transition for real.
We have the most comprehensive SASE platform by putting all of these pieces together in software that we've actually built from the ground up.
One of our favorite words to use to describe this concept is composable, which is the idea that everything works together.
You can connect any of those locations that we talked about to our network using really similar methods.
Once it's on our network, your traffic can run through the same stack of security policies regardless of how it got there.
And then you can start with one piece or solving one pain point and then add on to it over time.
It doesn't have to be an overnight transition. And so unlike our competitors, who have either acquired or partnered, or, if they built software, built it on totally separate networks, so they have some locations that run one piece of this and some locations that run the other, we have every service that we offer, delivered in software that we wrote, running on every server throughout our entire network, which now spans over 275 cities across the globe.
And so what this means is that you can start with technologies that look similar to what you have today, kind of that more VPN-looking approach, but get those wins of additional security layered on top and better performance for that traffic, and then you can transition to a really, truly Zero Trust architecture with minimal work.
It's not undoing everything that you have already done and redoing new stuff.
It's incremental steps that you can really build on top of to move your organization toward this new vision with kind of a clear path to do that in the process.
A specific example of this that I'm excited about, which we actually announced today, is interoperability between Magic WAN, which is our network-level connectivity suite, and Cloudflare Tunnel, which allows you to get that application-level Zero Trust connectivity by installing a lightweight daemon to establish a tunnel to Cloudflare's network.
And these are now fully interoperable.
There's a matrix in the blog post that we published today that breaks down all of these ways to get traffic to and from Cloudflare's network and what policies and things like that can apply on top.
And so what you can do is start by connecting an entire network with an IPsec tunnel, maybe similar to what you're doing today with a legacy VPN, and then over time, add access to individual applications using Cloudflare Tunnel, and the routes and everything that's required to get your traffic from point A to point B actually work together totally seamlessly.
So you have a place to start, you have steps to get to Zero Trust, and then maybe one day you deprecate that IPsec tunnel or don't need that network-level connectivity anymore, because you're controlling everything at the granular application level, which is the ideal, best-practice end state that we want to help you get to.
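One way to picture that interoperability, purely as a sketch and not a statement of how routing actually resolves in the product: the network-level tunnel gives you a broad route to the whole private network, and application-level tunnels add narrower, more specific routes on top that you migrate to over time. The prefixes and names below are invented for the example.

```python
# Simplified picture of mixing network-level and application-level on-ramps.
# Illustrative only: the prefixes and names are invented, and real route
# resolution in the product may work differently.
import ipaddress

ROUTES = [
    # Broad route to the whole data center network over a network-level (IPsec-style) tunnel.
    (ipaddress.ip_network("10.0.0.0/16"), "network-level tunnel"),
    # Narrower route for one application migrated to an application-level tunnel.
    (ipaddress.ip_network("10.0.5.0/24"), "application-level tunnel"),
]

def pick_on_ramp(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [(net, ramp) for net, ramp in ROUTES if addr in net]
    # Most specific (longest) prefix wins, so migrated apps peel off gradually.
    _, ramp = max(matches, key=lambda item: item[0].prefixlen)
    return ramp

print(pick_on_ramp("10.0.9.20"))  # still reached over the network-level tunnel
print(pick_on_ramp("10.0.5.10"))  # migrated app reached over the application-level tunnel
```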
So we're super excited about the Cloudflare One platform. Super excited about Cloudflare One Week.
Please stay tuned.
We're not anywhere close to done. More announcements, more exciting things coming out on the blog all through the rest of this week and beyond.
And please get in touch with us if you are curious about how we can help your organization specifically go through this transformation to Zero Trust.
Helping customers solve these problems is my favorite part of my job and would love to chat with you more about how we can make that happen for you.
Have a great rest of your day.
Thanks for watching Cloudflare TV.