Originally aired on March 3 @ 12:30 PM - 1:00 PM EST
Cloudflare's SASE platform can replace your traditional, expensive VPN appliances, which deliver poor performance for users and create more security risks than they solve. Cloudflare's Zero Trust Network Access (ZTNA) service is a more secure, highly scalable cloud solution, and in this video we look at how easily you can deploy Cloudflare to secure access to internal resources.
Chapters:
00:00 Introduction to Corporate Network Security and Access Challenges
01:15 Cloudflare’s SASE Approach to Securing Internal Applications
01:54 Connecting Internal Applications to Cloudflare with Secure Tunnels
02:42 Implementing Identity-Based, Clientless Access Control
05:36 Leveraging Anycast Networking for Faster and More Secure Application Access
06:59 Enhancing Security with Micro-Segmentation and Cloudflare’s Global Network
Watch the rest of the videos in our series to learn more about Cloudflare's SASE platform.
And if you want one of our experts to do a deep dive workshop into how you can integrate Cloudflare into your existing environment, contact us: https://www.cloudflare.com/zero-trust/
English
Cloudflare
anycast
cybersecurity
identitymanagement
microsegmentation
networksecurity
sase
secureaccess
zerotrust
ztna
Transcript (Beta)
Corporate networks are often used to allow employees to access sensitive information in private self-hosted applications, such as an internal wiki, an HR system, or a source code repository.
While some applications have migrated into the cloud as SaaS apps, there are still applications that are run and maintained by IT.
These days, most of these self-hosted applications run on a web server and are deployed either in a private data center or in a public cloud such as AWS, Azure, or Google.
Access to these applications is usually limited to internal employees, but it's common to allow some form of restricted access to partners or contractors.
The old way of doing things was to have users either come into a physical office or connect remotely via VPN, giving them access to that corporate network so they could access the application.
But these VPN solutions use on-premises hardware appliances through which every user request passes, creating a bottleneck and a security risk.
In fact, many on-premises VPN vendors such as Cisco, Check Point, and Fortinet have recently reported a wide range of vulnerabilities, forcing IT and security teams to scramble to update their systems.
But there's another way to do this.
Did you know that Cloudflare can be used to easily create secure access to these self-hosted applications using our SASE platform that's part of our connectivity cloud?
It works in a similar way to a legacy VPN, but with a much, much more modern cloud approach. Let's take a look at how we improve on the old way of doing things and create greater security for application access.
The first objective is to create connectivity between the user's browser and the application, right?
So there are two parts to this, the connection from Cloudflare to the app and the connection between the user and Cloudflare.
Cloudflare is going to sit in the middle and apply security policies and use its vast network to protect the application and improve response times.
For the first part, to create connectivity from Cloudflare to the app, we use tunnels.
There are a variety of different methods you can use.
You can connect on-premises networks to Cloudflare via IPsec or GRE tunnels, typically using your existing network hardware.
Or if your applications are running at a data center where Cloudflare already has its servers, we can connect directly from your servers to our servers inside that data center.
But for this example, we're going to talk about using a software agent.
It's just a small daemon that is installed either directly on the application server or runs on a dedicated server on the same local network.
The software then creates a secure tunnel back to Cloudflare. This tunnel maintains a constant connection to two Cloudflare data centers, so it's always available, and traffic can now flow between Cloudflare's network and the application.
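As a rough sketch of what deploying that agent can look like, here is a minimal example that drives the cloudflared CLI from Python. It assumes cloudflared is already installed and logged in to the account; the tunnel name and local service URL are placeholders, and the exact flags should be checked against Cloudflare's current documentation.

```python
# Minimal sketch: create and run a Cloudflare Tunnel by driving the
# cloudflared CLI from Python. Assumes cloudflared is installed and
# `cloudflared tunnel login` has already been completed for the account.
import subprocess

TUNNEL_NAME = "internal-wiki"            # hypothetical tunnel name
LOCAL_SERVICE = "http://localhost:8080"  # where the internal wiki listens locally

# Create a named tunnel; cloudflared writes a credentials file for it.
subprocess.run(["cloudflared", "tunnel", "create", TUNNEL_NAME], check=True)

# Run the connector, proxying requests that arrive over the tunnel to the
# local web server. In production this runs as a daemon or system service.
subprocess.run(
    ["cloudflared", "tunnel", "run", "--url", LOCAL_SERVICE, TUNNEL_NAME],
    check=True,
)
```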
Now for the second part, we need to connect your users to Cloudflare, which we're going to do in this example using public DNS.
We'll associate a public hostname with the application.
Requests to this hostname will resolve to Cloudflare, which in turn proxies and routes traffic down the tunnel to the application.
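As a hedged sketch of that step, the hostname can be tied to the tunnel with a DNS route, and you can then verify that the name resolves to Cloudflare's edge rather than to the origin server. The hostname and tunnel name below are placeholders.

```python
# Sketch: associate a public hostname with the tunnel, then check that the
# name resolves to Cloudflare's edge (not the origin). Hostname and tunnel
# name are placeholders; verify the cloudflared flags against current docs.
import socket
import subprocess

TUNNEL_NAME = "internal-wiki"
PUBLIC_HOSTNAME = "wiki.example.com"   # hypothetical public hostname

# Create a proxied DNS record that points the hostname at the tunnel.
subprocess.run(
    ["cloudflared", "tunnel", "route", "dns", TUNNEL_NAME, PUBLIC_HOSTNAME],
    check=True,
)

# The hostname should now resolve to Cloudflare anycast addresses; the
# origin's real IP address never appears in public DNS.
addresses = {info[4][0] for info in socket.getaddrinfo(PUBLIC_HOSTNAME, 443)}
print(addresses)
```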
But hold on a second, a public DNS record and a tunnel directly to the server?
If we didn't take this any further, we could now access this internal application from anywhere just by heading to the new public hostname.
What we need to do is add authentication and authorization into the mix.
Because Cloudflare now sits in front of the application, we can integrate with your existing company identity providers.
So anyone attempting access is first redirected to your identity provider to authenticate.
Now this is where it gets interesting because you can add multiple identity providers in front of the same application.
So for example, you might use your main company directory where all the employee accounts reside, but you might also integrate a separate identity service just for contractors, partners, and other third party users.
We also support consumer identity providers such as Facebook, Google, or GitHub.
In fact, any SAML or OAuth identity service can be used.
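For reference, configured identity providers can also be inspected programmatically. The sketch below calls Cloudflare's REST API; the endpoint path and response fields are assumptions based on the public API documentation and should be verified against it, and the account ID and token are placeholders.

```python
# Hedged sketch: list the identity providers configured for Access via
# Cloudflare's API. Endpoint path and response fields are assumptions to
# verify against the current API docs. ACCOUNT_ID and API_TOKEN are placeholders.
import json
import urllib.request

ACCOUNT_ID = "your-account-id"   # placeholder
API_TOKEN = "your-api-token"     # placeholder token with Access read permission

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/access/identity_providers"
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Each entry is one login method users can pick from (a corporate directory,
# a partner-only service, GitHub, a generic SAML/OIDC provider, and so on).
for idp in body.get("result", []):
    print(idp.get("type"), "-", idp.get("name"))
```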
Now, these identity integrations don't just provide authentication, it's possible to import user and group information, which brings us onto the next step.
Now that we've ensured a user has authenticated, we can start to leverage their identity information and other data to create an access policy that defines who should and should not get access to the application.
Let's build a policy here in real time. A range of different attributes can be used to define who is allowed or denied access.
We started by adding an identity provider, so users first need to authenticate.
Let's take it a little further and leverage group information from the same identity service.
We can say that only users in the full-time employees group have access to the internal wiki.
The identity service can also tell us how they authenticated, so let's add to our policy the requirement that they must have authenticated using MFA, or multi-factor authentication.
In fact, let's say that they have to specifically use a hard token, such as a FIDO-certified key.
Finally, we only want users working from Canada, the US, or Germany to access the application.
So let's add to the policy that only traffic coming from IP addresses geolocated in those countries is allowed.
Now, full-time employees working from one of those countries who have authenticated using a strong set of credentials will be able to access the company wiki.
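To make the logic of that policy concrete, here is a small conceptual sketch in Python. It is not Cloudflare's implementation or API, just the same allow/deny rules written out in code; the group name, MFA labels, and field names are illustrative.

```python
# Conceptual sketch only: the Access policy built above, expressed as plain
# Python so the logic is concrete. Cloudflare evaluates the real policy at
# its edge; every name here is illustrative, not an API.
from dataclasses import dataclass

ALLOWED_COUNTRIES = {"CA", "US", "DE"}
REQUIRED_GROUP = "full-time-employees"
HARD_TOKEN_METHODS = {"webauthn", "fido2"}   # hypothetical labels for hard-key MFA

@dataclass
class Request:
    groups: set              # groups imported from the identity provider
    mfa_method: str | None   # how the user completed MFA, if at all
    country: str             # country the request's IP address geolocates to

def is_allowed(req: Request) -> bool:
    """Allow only authenticated full-time employees using a hard token,
    connecting from Canada, the US, or Germany."""
    return (
        REQUIRED_GROUP in req.groups
        and req.mfa_method in HARD_TOKEN_METHODS
        and req.country in ALLOWED_COUNTRIES
    )

# A full-time employee in Canada who used a FIDO key is allowed.
print(is_allowed(Request({"full-time-employees"}, "webauthn", "CA")))  # True
# A contractor in Germany without a hard token is denied.
print(is_allowed(Request({"contractors"}, "totp", "DE")))              # False
```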
Let's take a look at this in action. The user just needs to navigate to the public hostname, authenticate, and bingo, they have access to our privately hosted application from anywhere in the world with only a browser.
Simple. What a difference from the old way of doing things. It's also important to note that all traffic from browser to application is secured using standard TLS encryption, keeping the application data safe.
Let's turn our attention a little bit to performance.
We already mentioned that Cloudflare is a lot more efficient than a traditional VPN.
Let's think about somebody in Germany trying to access the wiki.
Cloudflare uses something called anycast networking, which means that a request to the hostname resolves to the nearest Cloudflare data center, and we have data centers in over 300 cities.
We also have 12,000 network peering relationships, allowing us to ensure fast connectivity from user to application.
So our user in Germany might on-ramp to Cloudflare at a data center in Berlin, whereas our Canadian user might on-ramp in Vancouver.
And their requests are authenticated and the policy evaluated all close to the end user.
And if authorized, the request is then routed via the most efficient network path to the Cloudflare data center nearest to the application.
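One easy way to see which data center you on-ramp at is Cloudflare's trace endpoint, which is answered directly by the edge. A small sketch follows; the colo codes in the comments are only examples of what you might see.

```python
# Quick check of which Cloudflare data center (colo) answers your requests.
# /cdn-cgi/trace is served directly by Cloudflare's edge; the "colo" field
# is the IATA-style code of the data center that handled the request.
import urllib.request

with urllib.request.urlopen("https://www.cloudflare.com/cdn-cgi/trace") as resp:
    trace = dict(
        line.split("=", 1)
        for line in resp.read().decode().strip().splitlines()
        if "=" in line
    )

print(trace.get("colo"))  # e.g. "TXL" near Berlin, "YVR" near Vancouver
print(trace.get("loc"))   # country the request geolocated to
```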
To further improve performance, there are many more things we can do.
Any of Cloudflare's existing performance services and network benefits apply to your application traffic.
So for example, we can leverage Cloudflare's caching technologies so that any static data from the wiki, such as images, files, and videos, is cached locally at the data center the user is accessing, something your old VPN could never do.
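As a quick way to observe caching in practice, responses proxied by Cloudflare carry a CF-Cache-Status header. The sketch below uses a placeholder URL and assumes you are already authenticated or the asset is publicly cacheable; an unauthenticated request to an Access-protected app would be redirected to the login flow instead.

```python
# Sketch: check whether a static asset from the wiki was served from
# Cloudflare's local cache. Proxied responses include a CF-Cache-Status
# header ("HIT", "MISS", and so on). The URL is a placeholder.
import urllib.request

ASSET_URL = "https://wiki.example.com/static/logo.png"  # hypothetical asset

req = urllib.request.Request(ASSET_URL, method="GET")
with urllib.request.urlopen(req) as resp:
    print(resp.headers.get("CF-Cache-Status"))  # "HIT" once the edge has cached it
    print(resp.headers.get("CF-Ray"))           # request ID ending in the colo code
```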
Setting up access like this can typically be done in less than an hour, and it doesn't take long to migrate an entire company's internal application infrastructure.
Unlike with a VPN, access to each application exposes only that specific service.
You don't need to worry about firewalling off SSH and RDP because Cloudflare is only allowing access to the specific application over HTTPS.
This is called network micro-segmentation, and it really reduces the risk of access being gained through lateral movement.
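A simple way to illustrate micro-segmentation: from the outside, the public hostname answers only for the proxied web application. The sketch below uses a placeholder hostname and probes a few ports; only HTTPS should respond, and even a completed handshake at Cloudflare's edge never reaches the origin's other services.

```python
# Illustration of micro-segmentation: the public hostname exposes only the
# web application over HTTPS through Cloudflare. SSH and RDP on the origin
# are never published, so connecting to them via the public hostname should
# fail. Hostname and expected results are illustrative.
import socket

HOSTNAME = "wiki.example.com"  # hypothetical public hostname

for port, label in [(443, "HTTPS (proxied by Cloudflare)"),
                    (22, "SSH (not exposed)"),
                    (3389, "RDP (not exposed)")]:
    try:
        with socket.create_connection((HOSTNAME, port), timeout=3):
            print(f"{port} {label}: reachable")
    except OSError:
        print(f"{port} {label}: not reachable")
```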
Changes to authentication policies can easily be made in our dashboard and in just a matter of seconds the entire global network is updated.
Well, thanks for watching. This video is part of a series which explains how to build your new corporate network using Cloudflare's SASE platform.
Watch the other videos in this series to learn more.
Hi, I'm Simon from Cloudflare. Congrats on finding this video.
We also cover a wide variety of topics including application security, corporate networking, and all the developer content the Internet can hold.
Follow us online and thanks for watching.