1️⃣ Fireside Chat with Andrew Meinert
In this Cloudflare TV Cloudflare One Week segment, Cloudflare Product Manager Kenny Johnson hosts a fireside chat with Andrew Meinert, System Operations Manager at HubSpot.
Visit the Cloudflare One Week Hub for every announcement and CFTV episode — check back all week for more!
All right. Good morning, good afternoon, good evening, depending on where in the world you're joining from.
Welcome to Cloudflare TV and, more importantly, welcome to Cloudflare One Week.
I am joined by Andrew Meinert from HubSpot today and we're really excited to talk about how HubSpot leverages one of our key tools within Cloudflare One, Cloudflare Access.
So Andrew, if you want to introduce yourself quickly and what you do at HubSpot.
Sure. I'm Andrew Meinert.
I manage the system operations team at HubSpot, which is responsible for a lot of our IT server and application infrastructure, as well as a bunch of critical third-party SaaS services, and some overlap in the security sphere as well.
Well, yeah. And thank you, thank you for joining.
And just generally good to see you. You guys have been a great partner in terms of getting to work together on new features and things to add to Access.
I've really enjoyed getting to work with the HubSpot team.
So, always fun to have you join.
So I wonder, just to kick off, if you could just give a little background on how you guys are using Cloudflare kind of and then how you came into using Access.
So our product team has been using Cloudflare for a while for CDN solutions. And then prior to using Access, we had used another Zero Trust product to kind of sit as a proxy in front of our corporate applications.
And we actually worked with Cloudflare for about a year before we jumped over to the Access product to iterate on some, some features and functionality together.
And yeah, it really, really came together and we made the transition last year.
Yeah, it was definitely a good year-long process, getting to work with you guys while you evaluated the tool and the product definitely got better for it.
I remember some of the specific requests really making it easy from a DevOps and deployment perspective I think were some of the biggest things that we were able to take and learn from your team over the year.
One thing I'd be curious as well is kind of how you guys think more broadly around...
Zero Trust is obviously kind of a buzzwordy thing in security.
But how you guys as a security team and as an application operations team think about security and just more broadly Zero Trust policies and things like that.
So, moving away from the model of trusting a particular set of credentials, or the location where somebody is sitting, like the corporate office, to a model of continuously validating that device in strong ways.
So, thinking about authentication mechanisms and not being phishable or spoofable, moving away from the old-school push notifications or TOTP to more replay-resistant authentication mechanisms like FIDO2, and getting to the concept of knowing whether a device is known to us or is brand new to us, having some concept of device trust, as well as the posture of that device.
Is it healthy or does it have a strong security stack on it?
Doing that kind of continuous validation to make sure that we limit our possible exposure to compromise in the greatest possible way.
I think the device posture piece and just device information more broadly has been a really popular topic over the last year or two years.
It seems like it's been kind of the first wave of everybody going remote and COVID was just how do we enable remote access, right?
How do we just make sure things are secure?
And then there was kind of this secondary wave of like, oh, geez, people could be logging in from their parents' computer, their personal laptops, their personal phones.
It's a lot tougher when you can't look over someone's shoulder and see what they're accessing from, right?
Exactly. Right?
And really building up a practice around the intelligence and the data around all of the logins, whether someone is logging in from a new location, whether that computer has previously been seen or not.
We were in a fortunate position pre-pandemic that we're a global company, but the remote office was our third-largest office globally, so we were used to managing things in a remote environment, but certainly the pandemic flipped things on its head and remote is a huge component.
And being able to get that data from our security stack in a totally distributed world has been an evolving challenge.
And obviously, I know, you mentioned the pandemic and kind of the push to remote work.
Were there other factors, or aha moments, where your team realized, oh, we need to make a change, or we really need to address how we're thinking about application access?
You know, it seems like in the application space, and just the security space in general, with OSs and apps, the rapid-fire nature of security vulnerabilities just keeps coming and coming.
So in addition to having a strong vulnerability management process, we also need to have defense in depth, and Access, by virtue of putting authentication in front of all of our applications before anyone can even communicate directly with our apps, really offers strong protection.
I mean, just the other week there was a critical vulnerability in Confluence.
And once that was disclosed, Cloudflare quickly came up with WAF rules to detect and block it. But as we were doing our due diligence, we saw that we actually had attempted exploits against that vulnerability.
But because that application was behind Access, the bad actor was simply redirected for authentication and never actually got through.
So yeah, thinking about defense in depth is really key.
And I think you're right. Those are so crucial as different supply chain vulnerabilities come out.
I think we saw a similar thing with SolarWinds.
That was almost two years ago, right?
Very similar thing, where if you don't have something in front of just a basic username-and-password authentication, a lot of these self-hosted applications are susceptible to vulnerabilities that you don't necessarily even know exist, and then you're in a race to patch them, wondering whether something occurred before you patched them in time.
It really can be a nightmare scenario, right?
And I mean, even thinking about it from not even just an external perspective, but really taking the Zero Trust model to heart and limiting the access from the internal network directly to the application servers and having that application traffic actually transit through Cloudflare Access.
It gives you that authentication without having to trust every device that's on your network all the time.
Yeah, and that's another really kind of key point, too, right, is like creating individual pipes for each individual application as opposed to kind of the more traditional model of cool, I'm on the VPN and if I know what I'm doing, I can start jumping IP addresses and seeing what's on those IP addresses, right?
Yeah, it's definitely something that we're seeing more and more and it's becoming more the norm.
And it's a little bit of effort to get there, but I think it definitely starts to pay dividends.
One piece I'd be curious of as well, I know you guys had some...
you guys had a pretty, I think, robust approach – obviously, we spent a year talking together – a pretty robust approach in terms of thinking about how to do an implementation right and kind of stage gate the implementation to minimize impact.
I remember you did some things around picking a small user pool that was sensitive to change, I think it was QA users or something like that.
Could you talk a little bit about what the implementation looked like, how you guys thought about it?
So, the fact that we can do this on a one-by-one host basis is really key. And so part of it was figuring out how we were going to set up the infrastructure for it, how we were going to automate this and manage it at scale.
Being a growing company with growing applications and growing user base, being able to automate it was key.
On the corporate side of our environment, we've automated the deployment of Access with Ansible.
And so being able to create these rules, to have this all managed in an inventory system has been great because then we can just rapid-fire deploy new applications as they get spun up as part of other automation processes.
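HubSpot drives this through Ansible, but as a rough, stand-alone illustration of what that kind of per-hostname automation produces, here is a sketch of the payloads that would register one internal hostname as a self-hosted Access application with a group-restricted policy. The hostname, group ID, and policy name below are hypothetical placeholders, and the payload shape is only an approximation of the Cloudflare v4 API; no request is actually sent.

```python
import json

def access_app_payload(hostname: str) -> dict:
    """Build an Access application definition for one internal hostname."""
    return {
        "name": hostname,            # display name (placeholder: reuse the hostname)
        "domain": hostname,          # the protected hostname
        "type": "self_hosted",
        "session_duration": "24h",
    }

def access_policy_payload(group_id: str) -> dict:
    """Allow only members of a specific identity provider group."""
    return {
        "name": "allow-idp-group",   # hypothetical policy name
        "decision": "allow",
        "include": [{"group": {"id": group_id}}],
    }

# Render the payloads an automation run would POST to the Access API.
app = access_app_payload("wiki.corp.example.com")
policy = access_policy_payload("qa-team-group-id")
print(json.dumps({"app": app, "policy": policy}, indent=2))
```

In an inventory-driven setup like the one described, the loop over hostnames and the POSTs would live in the automation tool, so new applications get protected as part of the same process that spins them up.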
But in terms of making the migration, yes, it was simply setting up a couple of our QA applications that were maybe a little bit special-cased and getting those edge cases worked out.
And then we could pretty much just deploy the rest of them and migrate them over one hostname, one CNAME at a time.
Interestingly enough, on the corp side, we actually have our DNS for our corp domains hosted externally, and Cloudflare Access works even in that scenario.
We're able to leverage that in a distributed nature. So it's very flexible.
Yeah, I've seen kind of both approaches, right? It's like, start with the weirdest, most wonderful, most unique applications, or start with simple applications.
So it sounds like the criteria you guys used was: let's start with the most difficult and work our way backwards from there.
Is that kind of the approach you took?
Right, because we wouldn't want to be standing astride two different systems.
And so if we were able to get high assurance that we could get the weird ones to work and the rest of the ones which were the higher traffic and would have been more sensitive to disruption, those we could migrate later and we knew they would work.
Another piece you mentioned is automation and orchestration. So you guys are using Ansible.
We've seen it with Puppet, we've seen it with Terraform.
There are a number of different infrastructure-as-code platforms out there, and that's an emerging trend and definitely something I try to evangelize with customers when I'm talking to them.
I think you guys are definitely ahead of the curve there.
Could you talk a little bit more about what that looks like and kind of how you think about trying to automate that process and at what point it should exist in an application's lifecycle?
You know, for our team, it's really been a matter of scaling and resources and having the ability to dedicate time to the automation lifecycle.
But after moving past the POC phase of anything, we look to automate it, because the demands are around not only being able to scale and be flexible, but also around disaster recovery and security and patching.
Like, all of those things, we want to be able to automate as much as possible from an operational load perspective, but then also from a mitigating risk perspective, because then we can act on it in a much tighter SLA.
Yeah, we want to, as soon as we have new applications that need Access, now that we've done the automation, we just simply tie it in and it gets spun up right away.
Even now, for POC kind of applications that we have on-prem. Yeah, and that's, I think, kind of the beauty of having something in your security stack that doesn't take a lot of hours to get set up.
It's, in my experience, developers tend to like it because it saves them from having to implement login credentials and all those things.
And they can move quickly and it just spins up in front of their app.
It seems like it gets out of their way more than anything. Yeah, right.
I mean, our teams that are deploying apps and leveraging them don't need to know the details of Access topology.
Once we got our heads around how Argo Tunnel is connected back into Access and how the DNS dependencies got worked out, then the rest of it is just a little bit of YAML and you're off to the races.
It's just a few lines in a config file, right?
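As a minimal sketch of what those few lines of YAML can look like for cloudflared (the Argo Tunnel daemon) once the tunnel and DNS are in place, assuming the tunnel ID, credentials path, hostname, and local port are all placeholders:

```yaml
# config.yml for cloudflared — illustrative sketch, all values are placeholders.
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/creds.json

ingress:
  # Route the protected hostname to the app listening on this machine.
  - hostname: internal-app.example.com
    service: http://localhost:8080
  # Catch-all rule (required): anything else returns a 404.
  - service: http_status:404
```

With Access policies in front of the hostname, the app itself never has to be reachable from the network directly.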
And you're good to go from deploying the app. There's kind of a shadow IT prevention use case there too, in the sense that you're able to prevent kind of unknown applications or unprotected applications from getting deployed out in the infrastructure.
I mean, I think of that more in the internal use case, because as we march down the road towards better network segmentation and per-user access control, certainly, as you were saying, if you know what you're doing and you spin something up on an IP, you can hop around and you can get to it.
On the external side, it would have been a harder proposition in the past to get something kind of spun up in shadow IT but certainly for tightening that internal posture.
It can, it definitely plays a role. Yeah, definitely.
Because I mean, I've lived that as a product manager. My engineering team is like, hey, look at this cool little internal tool we've spun up.
And I'm like, Oh, I don't know if that complies with anything. Right?
So it's been nice, since we use Access internally at Cloudflare as well, to have that peace of mind that at least it's getting put behind an Access policy and going through the corporate identity provider, instead of some of the more classic cases of, oh, an app got deployed and it may or may not have security, right?
I think it turns you into playing more offense than defense, right? Yeah, exactly.
I mean, it gives you some peace of mind that you wouldn't have otherwise had before of something just totally hanging out there, ready to be hit by anyone who's nearby.
Another piece I'd be curious, kind of, since you guys are further, further down the adoption lifecycle, are there any kind of unexpected outcomes or things you didn't necessarily expect that you would recommend for somebody kicking off an evaluation of, Oh, we need to go turn off our VPN or something like that.
So, I don't know about unexpected outcomes.
I mean, we weren't thinking about it.
The security benefit of being able to lock things down to just routing through Access rather than being exposed to the entire internal network was something that kind of evolved a little bit over time and wasn't an initial criteria that we had set out with, but a good, a good benefit.
But for someone going down the path of Access and replacing the VPN, I think of the biggest benefit as, first and foremost, protecting any application that wasn't on the VPN in the first place, because sometimes it's something that needs to be accessed by clients that don't support the VPN connection, or whatever else.
And that's the, in my mind, the opportunity for the biggest win.
And then certainly being able to replace the VPN for all the other corporate applications is huge, and I would target the biggest-ticket items first, because there's always going to be something that somebody wants to use the VPN for.
But if you can start cutting off giant swaths of your user population from ever needing to use the VPN, that's like a massive win right there just by itself.
You don't have to get all the way to 100% in order to have success.
Sorry for the mid-dog bark there. We're all going through the days of working from home.
Luckily I'm not having to be on the VPN.
Trying to get these guys to quiet down here. But yeah, 100%, I think that's a really, really good point, because it doesn't have to be all in one go, right?
You don't have to immediately switch something off. You can get your marketing and your sales team and your customer-facing folks off from having to worry about messing with getting VPN configuration set up just to access one or two systems.
Your SRE team is probably going to need VPN access for quite a while, but a lot of the organization doesn't necessarily need it if you deploy more of an application-forward mindset, I think.
You know, all companies are not the same, but I think a lot of companies, a lot of their user base primarily used Web-based applications for a lot of their workflows.
And if you can replace those right off the bat or at least have a quick roadmap to migrating those, then you've made huge strides and then you can go down the longer tail of, All right, well what about this random TCP protocol-based application?
Can we do it with the Access client?
You can still keep making progress without any kind of compromise or, you know, rushed timetable.
I feel like every kind of application architect or security engineer always has that like bank of five or six special applications that run on some obscure protocol on a desktop or something like that.
But I think you're right, the majority of the use cases can really be solved via the browser, so it really makes sense to start there and then worry about the next round after that.
We haven't had a standard browser-based application that we haven't been able to migrate successfully.
So I think someone looking to make that transition could make quick work of that big chunk of the surface area.
One other piece I'd be curious is how you guys thought about segmenting out your access to your applications?
Do you guys use identity provider groups?
How do you guys think about managing that piece?
I know that's a pretty common area of thought.
Yeah, for segmenting out access to the applications, we definitely use our identity provider integrations and the controls we have on that side, like grouping and posture checks and additional roles.
One of the nice benefits of Access that, in terms of flexibility that we encountered recently, was that you can set multiple identity providers.
And so, we're actually still using the same identity provider, but we've been able to set up multiple applications on the identity provider side, classify or group the apps within Access, and provide different levels of security controls on the IdP side.
And so that flexibility has been really nice, rather than it just being an all or nothing.
Here's your SAML integration.
And then on the SAML side, you have no visibility into all the different levels of apps that are trying to be accessed.
And I think that's been kind of an almost an unintentional byproduct of our ability to support multiple identity providers.
I know it's true for Okta, and actually you can do this with Azure AD too, where you have multiple instances of the same app within an Okta instance and then you're able to enforce different levels of security.
It's kind of like we talked about, Andrew and I were on the call before just catching up.
We talked a little bit about getting to use some of the more advanced security features within Okta.
And I think that's a really key use case here: you can set up multiple Okta instances, or multiple integrations with Okta, and do different levels of security, right?
Right, exactly. We can restrict really sensitive operations in particular applications, or even have certain paths off of an application require a different authorization on the IdP side, and we can put controls there to say, oh, well, it can't just be a username and password.
You need to re-authenticate with MFA every time you use it, or you have to be coming from a managed device that has healthy EDR integration, and the flexibility has been really, really useful.
Yeah, that's great.
I think you gave a few examples, but what are some of the things you're doing to further harden things, in terms of either MFA or additional checks?
Yeah, it's really that: classifying our applications by risk, both from the data that exists in there and the sensitivity of it, and the power, or blast radius, that changes within those applications can have, and then balancing that with the friction to our user base.
Where do people need to access these applications from in order to do their job effectively?
And so striking that balance has been an iterative process and something that we've put a lot of time and attention to detail in.
And so, yeah, I mean, those are the primary hinges.
And then I think someone should just look to leverage as many of the features as they can in their third-party integrations, whether it's an IdP, or, as you mentioned earlier, the custom integration for Access that can call out to a third-party service to do whatever validation you want on the login.
So just thinking about every... how you can integrate all of the signals and data you already have into those application-by-application Access events on a continuous basis.
I mean, the nice thing is that Access is in the middle of that application flow.
So it's not just seen once like it is potentially on the IDP side.
It's there all the time.
And you can structure your policies around that.
Right, and it makes it a lot easier to pilot and test out some of the more onerous security controls, too.
There's definitely some merit to doing some dry runs with either hard key-based authentication or certificate-based authentication and things like that in front of...
And also being able to revoke, being able to revoke the session in-flight.
Because otherwise you were dependent on being able to revoke the session at the application layer or at the IdP layer, where, you know, the re-authentication frequency is long. Here you can terminate a session while it's active, in between requests, if the timing calls for it.
And so it's better to have one place to do that and have it take effect immediately.
That's great for those oops moments, right? Like, oh yeah, we either over-provisioned this or this just doesn't look right.
And I think for the folks out there, the way that that works is we issue a cookie out to the browser and we have the ability to immediately revoke that cookie and make it invalid against our checks against that application, which is why you're able to, in any of these fronted applications, just immediately revoke access as it's needed.
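As a rough sketch of what that immediate revocation looks like operationally, here is the shape of the API call that invalidates every issued token for one Access application. The account and application IDs are hypothetical placeholders, the URL shape is an approximation of the Cloudflare v4 API, and no request is actually sent here:

```python
# Sketch only: build (but don't send) the revocation request for one
# Access application, so all outstanding session cookies fail the next check.
API_BASE = "https://api.cloudflare.com/client/v4"

def revoke_tokens_request(account_id: str, app_id: str) -> dict:
    """Describe the POST that revokes all issued tokens for an Access app."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/accounts/{account_id}/access/apps/{app_id}/revoke_tokens",
        "headers": {"Authorization": "Bearer <API_TOKEN>"},  # placeholder token
    }

req = revoke_tokens_request("acct-placeholder", "app-placeholder")
print(req["method"], req["url"])
```

Because Access re-checks the cookie on every request, revoking it takes effect on the very next request rather than waiting for an IdP session to expire.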
Yeah, that's really, really helpful. I think it's been really cool to see your journey in remote access and Zero Trust access for your different applications.
What are some of the things next?
What are you guys thinking about going and doing next?
How are you approaching, evaluating what's next from an application access perspective?
Yeah, the journey this year is really about integrating all of the different signals that we have in order to make better decisions about who and which devices should be able to access our applications.
I think the whole world and ecosystem of security and identity and access management is just kind of catching up to the promised land of knowing details about each device and its posture and being able to perform authentication in unspoofable, non-replayable ways and integrating the signals not only from, oh, is this device trusted and valid, but is it healthy?
Integrating with EDR and being able to get details like: is this device encrypted?
Is the OS configured in a hardened fashion? Is that EDR reporting back to its console and is healthy and happy?
Being able to do all those things greatly narrows the exposure of your systems and infrastructure, from the entire world, anybody anywhere with somebody's compromised username and password, down to the machines that you know about.
And they're also in a really good, strong, secure state and so that makes it less likely that they could be compromised and leveraged by a bad actor.
It's just a huge upgrade. Yeah, it almost makes a breach, like, exponentially improbable.
Like, you get to that level when you get to start to build context from the user, the location, the network and the device, all paired together.
It's almost like you're standing next to the user and checking, like, it's almost that level, which is pretty amazing that we're getting to a point in terms of application security.
Yeah, it's very, very exciting and lots to come.
So I think with that, we can go ahead and wrap up.
I just want to say thank you so much for joining Andrew.
Really appreciated the time, and appreciate you guys being a continued customer. And to the audience, hopefully this was helpful.
And with that, I think we can go ahead and wrap up.
Well, thanks very much. Awesome.
Thank you, Andrew.