Creating Usable Security for Everyone
Originally aired on February 2, 2022 @ 9:30 AM - 10:00 AM EST
Best of: Internet Summit 2018
- Adriana Porter Felt - Engineering Manager, Google Chrome
- Roselle Safran - President, Rosint Labs
- Moderator: John Graham-Cumming - CTO, Cloudflare
English
Internet Summit
Transcript (Beta)
Okay, welcome everybody. There's lots of seats up the front here, if you've got coffee you can come and listen to these two folks.
All right, well I'm very, very happy to have with me two people who know more about cyber than I do.
Far away from me is Roselle Safran, who is president of a company called Rosint Labs, which is a cybersecurity consultancy and she's also an EIR at Lytical Ventures and they specialize in cyber investment, so it's cyber all the way down there.
But she was also cybersecurity operations branch chief at the executive office of the president during the Obama administration, and I think what that means is running and protecting the White House from the actual White House cybersecurity perspective.
We're going to talk about that in a minute.
And then close to me is Adriana Porter Felt, who's an engineering manager at Google on Chrome.
She's on the cross-device team, right, so on all the different Chromes.
And one of the things she specializes in is how to make hard-to-use web technologies easier and more understandable for people.
And so we're going to talk about those two things because I think one of the big challenges is how do we make security usable and how do we get...
because people are the problem, mostly, with cybersecurity. How do we help them?
We're the problem, really. We're the problem, yes. Well, we're people.
I'm people anyway. So I'm going to start with Roselle. So Roselle, that sounds really exciting, right, cybersecurity operations branch chief.
What is it actually like defending the White House?
What does that mean to defend the White House?
Yeah, so it was a really exciting and really intense experience. I mean, every day felt like a week.
And my concern was always that we would end up with a breach that would be front-page news.
And that didn't happen on my watch, but it did happen many months later.
So it was a completely legitimate concern. So the work that I did was a combination of being very tactical and making sure that any attacks that were coming in were being addressed properly and very quickly.
Any vulnerabilities were being addressed properly and very quickly.
But then also trying to divide my time with some more strategic efforts to just improve operations all around.
When you're working as a defender, you know that the adversaries are always upping their game.
And so there's this constant pressure on your end to always be improving.
Then it was a 24 by 7 shop. So there wasn't a night where I wasn't at least on email, if not on the phone, checking in with the team.
And during the government shutdown, I was actually on the night shift. Not a big fan of that shift, but you do what you have to do.
But it was a fantastic experience all around.
Adriana, I wanted to ask you about something. So she's defending the White House.
You're trying to get all of us using Chrome and ultimately the web in general to be more secure.
So what are the challenges of that?
What does that involve? One of the key challenges we have is that security is both very technical and also very personal.
Everyone has different threat models, but honestly, they don't think about their threat models the same way that security experts do.
So one of our challenges is trying to figure out how to build user-facing software that can be understood and used by billions of different people who have very different security needs as well as very different levels of understanding.
It's hard enough to get five people in a room to a consensus; you can't make billions of people all happy at once.
So you have to figure out how to strike the right balance and build a product that's still usable by everyone.
One of my favorite stories is we had made some changes to how you access a certificate viewer in Chrome.
And we saw from our metrics that frankly not very many people use this feature.
However, they all seem to know where I live. I was at the playground with my baby and another parent there started arguing with me about how we'd moved this certificate viewer.
So yes, there are lots of challenges because power users want a lot of richness and functionality and an ability to control their online experience.
But sometimes that can make it harder for other people if there's too much information, too many options, it can become confusing and overwhelming.
So we're trying to strike this balance between giving power users the functionality that they want and that they need, but also being clear and simple and usable for everyone else.
And I'm assuming, Roselle, there are parallels here because you've got people coming into work in the White House who are not technical, with different levels of skills.
So how do you help them secure themselves and secure the White House overall?
Yeah, so you're always, as a security professional, you're trying to strike this balance of what covers your bases for a strong security posture and what still enables the organization or the person to do their job effectively.
And it's a constant give-and-take. To an extent though, there is an element of educating the user on the value of adding in some security controls and how the risk associated with not having it is far greater than the little hiccup it causes in operations.
So this is an issue that I see across the board. If you were to tell someone, hey, you need to make sure that you secure your car, and the person would say, yes, of course.
I lock my doors, I have a car alarm, I have the Club.
And if you had a business owner who said, hey, you need to secure your business, then the business owner would say, yes, of course.
I have locks on the doors.
It's a larger business. They also have security, security guards, surveillance.
But when it comes to an individual, if you say, hey, you need to secure your personal data, then the value of doing that becomes less clear.
And the answer is more of, well, if it's not going to cause me inconvenience.
And for the businesses, it's, well, yes, I'll secure my business to the extent that I don't have to pay too much for it.
Which is really problematic, especially for small and medium-sized businesses, because stats have shown that when small and medium-sized businesses have a security breach, 60% of them end up going out of business within six months because of it.
So there's a real existential risk to not having security controls in place.
But that's often difficult to articulate to some people, whether individuals or businesses.
And I think as security professionals, we do need to do a little bit of a better job of making that clear.
It's not as tangible as losing physical property, so it's a little more difficult to convey that message.
But it's something that has to be made clear. There's a little bit of give and take.
And while the security team can do its best to minimize the amount of disruption and interruption with security controls, there is going to be a little bit of an issue on occasion, depending on what the control is and how it's being implemented.
And Adriana, one of the things that Chrome did is this little nudge for people, which is to change the way in which we think about secure and insecure websites.
Can you talk a little bit about that? Because that's an interesting...
It's a subtle change, but it's trying to change people's behavior through just something in the UI.
For a long time, through the first 15-ish years of the Internet, almost everything was served over HTTP without TLS, meaning that your data in transit between the client and the server was visible to other people in the network or tamperable by other people in the network.
And over the last several years, there's been a big cross-industry push to move the web to HTTPS.
And when we first talked about this a few years ago, we felt people didn't really understand that when they were using an HTTP website, that their information was unprotected in transit.
And what we wanted to do right off the bat was tell people, oh, your connection is not secure, your data is not secure.
It turns out that would have freaked people out if you had shown that on, at the time, something like 70% of all website page loads.
And over time, as the amount of HTTPS traffic has risen, we've been able to flip this paradigm: instead of rewarding websites for using HTTPS by showing a green lock, we try to tell people when a website isn't secure, and assume that a secure HTTPS website is the default.
Because it is the truth that today, HTTPS is the default and people should just expect it.
They shouldn't have to check for a green lock or special wording that says secure; they should just come to expect it as a given that their Internet traffic is going over HTTPS.
So now, what we're doing is we're moving to a world where we tell people more and more aggressively when their information is not secure in the hopes that they'll notice it and consider not entering their information or maybe that means that that website has an HTTPS version that they can switch to.
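The paradigm flip described here comes down to a scheme check. As a rough sketch, assuming nothing about Chrome's actual implementation (the helper name and labels below are made up for illustration), a browser-style labeler might look like:

```python
from urllib.parse import urlparse

def connection_label(url: str) -> str:
    """Label a URL the way a modern address bar might: treat HTTPS as
    the unremarkable default and actively call out plain HTTP."""
    scheme = urlparse(url).scheme.lower()
    if scheme == "https":
        return ""            # secure is the default: show nothing special
    if scheme == "http":
        return "Not secure"  # warn on unencrypted transport
    return "Unknown scheme"

print(connection_label("https://example.com/"))  # no label: the expected case
print(connection_label("http://example.com/"))   # "Not secure"
```

The design point is the asymmetry: the positive indicator (the old green lock) disappears, and only the negative state gets UI weight.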
And are you measuring that? Do you see people saying, hey, what's going on?
I don't understand or how do you know that this makes a difference?
That's a good question. We use several different research methods to understand what happens with people.
We have in-browser telemetry for people who opt in so that we're able to see in aggregate how people are responding to the changes.
We also proactively do user research through studies, one-on-one outreach.
We read the help forums. My team is on a rotation to read what people say in the help forums, which is a very good way to connect with users.
And we do see that people have noticed this change. We have to strike a balance with how severe the warnings are because some people will see the words not secure in red and they really get frightened.
They might even turn off their computer and walk away for a few minutes.
You can really accidentally scare someone too much.
So we have to be careful. We want something that can be noticeable but trying to avoid those negative reactions.
We try to do most of this testing, of course, before we make changes on the stable channel.
So we'll proactively do a lot of this user research before actually launching to everyone.
It's interesting you say that because my mother got a certificate warning on a website on her PC and she shut down the machine, called me, and switched to using her iPad, which does not give you a certificate warning.
And it turned out that the battery was dead in the real-time clock on the PC and it thought it was 1970, and so there was a problem.
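A dead clock breaks TLS because certificate validity is checked against the local system time: every certificate carries a notBefore/notAfter window, and a clock that reads 1970 makes even a perfectly good certificate fail validation. A minimal sketch of just the time check (the function is illustrative, not any browser's real code):

```python
from datetime import datetime, timezone

def cert_time_valid(not_before: datetime, not_after: datetime,
                    now: datetime) -> bool:
    """A certificate is only trusted if the local clock falls inside
    its validity window -- so a wrong clock breaks every site."""
    return not_before <= now <= not_after

# A hypothetical certificate valid for calendar year 2018:
nb = datetime(2018, 1, 1, tzinfo=timezone.utc)
na = datetime(2019, 1, 1, tzinfo=timezone.utc)

print(cert_time_valid(nb, na, datetime(2018, 6, 1, tzinfo=timezone.utc)))  # True
# Dead CMOS battery: the PC boots thinking it is 1970
print(cert_time_valid(nb, na, datetime(1970, 1, 1, tzinfo=timezone.utc)))  # False
```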
But her reaction was, my machine has been hacked. So we have to be careful. So something you mentioned, which was the threat model, and what I'm wondering is now you've gone from the White House now into consultancy.
It feels like the threats must be very different, but what is the overlap between the threats that corporations have and even individuals that also you learned from the experience in government?
Yeah, there's actually not as much difference as one might expect. Often the same types of actors are attacking government agencies and private companies, especially when you look at some of the nation state actors.
So a nation state actor, their motive could be to steal intellectual property.
It could be to have a better understanding of government policy and how it relates to that country.
It could be any type of information that they could use to further their agenda as a nation.
And so certainly government has plenty of that information, but many companies have that as well.
When you look at, say, a defense contractor or a large financial company.
So sometimes you wind up with the same types of threats, and it becomes really critical that threat intelligence is shared amongst the communities as well as possible.
So before I was at the Executive Office of the President, I was at the Department of Homeland Security in a division called US-CERT.
And part of its mission was to improve the security posture of government agencies and critical infrastructure.
So there was this constant flow of information both ways.
Because often you find attackers will use the same techniques, regardless of who they're going after.
They'll use the same malware, they'll use the same attack vectors.
And so if that information is shared across different entities, then it works as sort of a vaccine to immunize the larger community: once one person is hit, that information can spread quickly.
And how should companies or even individuals think about the threat model?
I know for my house, I think about the threat model as burglars, and I know what the solutions are.
I lock the door and I make sure the bathroom window is closed. How can we get people to even think at that level about what they should worry about?
Right, right.
Yeah, so from an individual's perspective, it could be as simple as the type of attacker that's going for financial gain.
So they're going after your bank account or your identity.
And I've had people come to me and say, I'm not worried about that.
I have $1,000 in my bank account, nobody wants that. But the reality is $1,000 in US dollars can go a really long way in many other countries.
So your $1,000 may seem like chump change to you, but it's actually very valuable in some places.
And besides that, there are many attackers that realize you start with a low-hanging fruit and work your way up.
So if you're someone that, say, has a contact list with some people that are CEOs of corporations, for example, you're the easy way in.
You're the stepping stone they need to get to the bigger fish they're trying to reach, and a common tactic for some of the more sophisticated actors is to go after the unsuspecting family members or unsuspecting friends.
And once they gain access to that email account, for example, then they can send an email and it looks like it's coming from a friend or family member.
And the person that's the target is far more likely to click on the link, click on the malware.
So sometimes people don't realize that they could be a target. They think they're just a little guy, no one is concerned about them.
But there are many times where they're a bigger target and more interesting than they think they are.
One thing I was curious about, did you have to explain this to President Obama and did he have a good password?
I don't talk about anything related to the data.
Very good response. I hope that she doesn't know Obama's password. I hope she doesn't, too.
But you were saying that that's actually a real problem for people, is interpreting that string that's in the browser.
I mentioned that for a lot of people who are sitting here in this room, it's second nature for how to read a URL.
You know what a subdomain is, you understand that there's a difference between the path and the subdomain and then the domain itself.
You understand maybe how to modify a query string, right?
If you're on the New York Times, you might know that deleting part of the path or deleting part of the query string on Amazon is gonna get you something different.
And this is a very powerful tool.
URLs are a very powerful tool for people who know how to read them and know how to use them.
They can also help you identify phishing attacks if you know to look at them.
And also you know what to look for when you look at the URL. And we've been trying to think of ways that we can simplify the ways that URLs are displayed while still giving power users all this information and all this control that they want.
But at the same time, also making it easier for other people who are not power users to be able to read and use the URL as a way to prevent phishing or as a tool for understanding what website they're on.
And this is a very tricky space, because it's very hard to make both of those groups happy, to be honest.
But we're trying.
We're putting a lot of effort into talking to people to try to understand how we can build a UI that meets both of these needs.
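The URL anatomy being discussed — scheme, subdomain, registrable domain, path, query string — can be pulled apart with a standard-library parser. This is a simplified sketch: real browsers consult the Public Suffix List to find the registrable domain, and the naive "last two labels" split below is an assumption that breaks on suffixes like .co.uk:

```python
from urllib.parse import urlparse, parse_qs

url = "https://store.example.com/gp/product/B00X?ref=nav&tag=promo"
parts = urlparse(url)

host_labels = parts.hostname.split(".")
registrable = ".".join(host_labels[-2:])  # naive: ignores multi-label suffixes
subdomain = ".".join(host_labels[:-2])

print(parts.scheme)           # https
print(subdomain)              # store
print(registrable)            # example.com <- what anti-phishing checks key on
print(parts.path)             # /gp/product/B00X
print(parse_qs(parts.query))  # {'ref': ['nav'], 'tag': ['promo']}
```

The phishing-relevant piece is the registrable domain: `secure-example.com.evil.example` puts the familiar name in the subdomain, which is exactly the confusion non-power-users fall for.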
One thing that's interesting though in what you're doing is that there seems to be, for me, there's an underlying emotion, which is empathy for the user and trying to help the user.
And I think often in security, empathy is lacking, because it's very much a do this thing, you've got to use a good password, you've got to do this sort of thing.
How can we get more, this is really for both of you, how can we get more empathy into the whole security space so people understand that in order to get people on board, we have to understand their point of view and where they're coming from?
Yeah, so that's one of the big challenges of a chief information security officer today is being able to convey the risk so people understand why we're saying what we're saying and requesting what we're requesting.
But then the flip side of it is that as security professionals, we have to understand our function is to create the optimal environment for getting work done.
And there's a little give and take that's required on both sides.
But I think that security professionals sometimes are seen as the stumbling block, the ones that are always saying no.
And there's more that can be done to be creative about solutions so that at the end of the day, both sides may have to do some compromising, but are happy with the outcome.
And part of that comes with having these conversations. And if you're on a security team, it's easy to just be siloed with your other security team members.
And think the users are just up to craziness again. And having the conversations and understanding what the users are trying to do and why they're trying to use it and why they need something a certain way, it makes a big difference in understanding how you can reach that compromise.
I also think a big part of it is people tend to have two reactions in the security community.
One is saying, well, we just need to teach people to do this.
And if you find yourself thinking, oh, we just need to teach people to do this, think really long and hard about whether that is a problem with them not knowing enough or a problem with your software being too hard to use.
And the second is when you see user pain, like if you're looking at a help forum or you're watching a user video, I think it's also very tempting to think, oh, that user's just dumb.
Other users get it. Don't think that, right?
Your users are not dumb. They're often very smart people, maybe even smarter than you.
But they're finding your software hard to use. So you have to keep both of those instincts in check.
And I think sometimes it can honestly be hard.
But remember that the problem is your own software, and be open to critical feedback that your software is hard to use, rather than jumping to thinking it's a problem with the person using your software.
Okay, great.
So as we near the end of this, I'm gonna give the audience a chance to ask some questions.
I wanted to ask you from, you know, you both had a lot of experience in real security situations.
What are the common mistakes that people make when they think about security that, you know, we can learn from and hopefully not make those mistakes?
There's a long list.
So plugging in thumb drives that you don't know about. Yeah, it's a big thing.
Should we have a show of hands? Yeah. Who's done that? And I know that sometimes they're given out in gift bags at conferences and whatnot.
But there are some threat actors that will just drop thumb drives in parking lots because they know someone's gonna plug it into their machine and then you have an infected network right there.
So that's a common one. Of course, the main attack vectors are often email and websites.
So a person receives an email, looks pretty legit, they click on a link or click on an attachment.
Now, from my perspective, I want the user to have to do as little scrutiny as possible, so getting into what you're saying about whether they need to parse out the URL or not.
But the reality is that you can have many lines of defense.
You can have a very robust email gateway.
There will potentially be something that still slips through the cracks and it makes it to the end user.
So the end user does have to have the ability to discern what looks suspicious or phishy or not.
And there are now plenty of companies out there that are focusing on this security awareness for the end users.
So it's about just having an eye for what you're clicking on or what you're acting on. Business email compromises are becoming really popular, where someone in the finance department receives an email that looks like it's coming from the CEO saying, hey, you've got to wire $50,000 immediately.
And not having a process for dealing with that outside of just saying, okay, where do I wire it?
That's another issue that is a common type of attack vector.
And, yeah, installing antivirus. A lot of people think, oh, it's just going to slow down my computer.
But it's helpful.
It's really helpful. Even on Macs. I know people think they're immune to viruses, but they're not.
And so, yeah, so those are just a couple of the common attack vectors and the way that people can take some action and improve their security posture in pretty low-lift ways.
Adriana, are there things you've seen people do with Chrome where you do actually pull your hair out and say, why did they do that?
Or is it all empathy with the user? I'll make fun of myself here since I'm also a user.
For the longest time, I was too prideful to use a password manager.
I said, I don't want a single point of failure. Instead, I am going to create this scheme for how I will come up with a unique password for every website and remember them.
Anyway, then I got locked out of basically every account. And I discovered I actually did already have a single point of failure because the way I reset all of them was with my Google account, my email.
So now I use a password manager.
In fact, I use Chrome's password manager, but you don't have to use Chrome's password manager.
You can use another password manager. And I found that it's much easier to actually have many, many different unique passwords that actually get remembered and I don't get locked out of my accounts.
And I've been trying to encourage other people to do the same. Particularly for people who are more power users like myself, it can maybe be a little weird to feel like you're giving up control to a password manager.
But in practice, I think, given the realistic threat models, it's one of the easiest ways to keep your accounts secure.
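The advantage of a manager over a memorable per-site scheme is that each stored password is an independent random secret. A minimal sketch of the manager-style approach (the vault here is just an in-memory dict for illustration — real managers encrypt their storage):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 20) -> str:
    """Generate an independent random password, manager-style:
    nothing to memorize, and no scheme to reverse-engineer."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unrelated secret per site; compromising one reveals nothing
# about the others, unlike a human-memorable per-site "scheme".
vault = {site: new_password() for site in ("bank.example", "mail.example")}
print(len(vault["bank.example"]))  # 20
```

Note the use of `secrets` rather than `random`: password generation needs a cryptographically strong source, not a reproducible PRNG.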
No, it's funny you say that because I went through exactly the same thing.
I had a scheme, I had a whole thing, and now I use a password manager.
All right, let's see if we can get a couple of questions from the audience.
There's one right down here in the front.
Yeah, who's going to run this one?
Oh, we're going to take this question here, Dawn, we'll do this one, and then we'll do this one over there back.
Hi. Thank you. So, one, when I'm on a desktop, I can hover my mouse over what might be a URL or might not be because it's actually a GIF that looks like a message, and I can see what the URL I'm looking at and say, well, that's not what I want.
It's .cn and not what I want.
How do I do the equivalent on a portable device like an iPad or an iPhone or a Google, whatever it's called?
I can't see that. There's no place to hover my finger.
So, I can't figure out what URL I'm going to look at except by clicking on it, which may be too late.
So, I suppose the concern is that you're worried that you might be hitting a phishing site or a site with malware.
So, our general assumption is that people will look at the URL after they have actually opened the website.
Yeah, too late. I mean, I'll be honest.
Realistically speaking, I don't think very many people are going to check where a link leads to before they actually click on it.
So, that's good feedback.
Thank you. I think there's a question. So, I wind up just holding off on looking at those emails until I'm back at a computer.
Interesting.
All right. I think there's a question hiding behind the pillar. Is that right?
Yeah. Sorry, I'm behind the pillar. Hello. I thought it was really interesting.
You talked about the importance of not freaking people out, not kind of sending them running away from their laptop when they see a big red not secure sign.
I've heard it suggested that younger people, so maybe who are not that technically savvy but are very used to the Internet, actually kind of go too far the other way.
So, they're a little bit naive about their security. They kind of think, oh, it's fine.
You know, it's not going to be that much of an issue. Do you think, for both of you, that the way your companies, or security in general, approach this will change over time based on users' different levels of experience with the Internet, and that the level of alarm you want to impart to people will change?
So, I personally thought that by now there would be more of a general understanding that cybersecurity is a concern and does need some attention.
And part of my thinking was as ransomware becomes more prevalent, then the threat becomes more tangible in a way.
People can see that their machine is bricked, and now they can suddenly make that connection of something that they do in the cyber realm affecting them personally.
So, I thought that there would be more of an uptick by now in this acceptance of the fact that cybersecurity needs to be considered.
But, of course, I'm a little bit biased in all of this.
And so, I think there's definitely still a way to go. Part of my concern with the mentality of, well, if the default is that it's good, it's not a problem unless I hear otherwise, is that that can work for the Googles and the Apples of the world that are really paying close attention to cybersecurity.
But when you look at, say, for example, some of these IoT devices, they're not considering cybersecurity at all.
They're getting a product out to market. And cybersecurity is maybe tacked on after, maybe not even included in the technology roadmap.
And so, having this view that the default is it's not a problem then becomes even more of an issue, because it really is a problem.
So, one thing that some people find surprising sometimes is that most people actually do heed warnings.
I think there's a common perception that people actually, oh, everyone sees a warning, you're going to click right through it.
And in fact, we see most people are actually conservative. I think less than 20% of people click through certificate warnings and well below 10% of people click through phishing and malware warnings.
I'm not aware of any generational effects there.
I don't believe that younger people are more or less likely to proceed through warnings, but it's an interesting question.
All right.
We are 10 seconds away from the end of this session. So, Roselle, Adriana, that was fantastic.
Thank you very much for joining us. Thank you. Thank you for having us.
Thank you.
Thank you.