Hacker Time
Presented by: Evan Johnson
Originally aired on September 14, 2021 @ 5:00 AM - 5:30 AM EDT
Join Evan Johnson as he speaks with security professionals about recent security news!
Original Airdate: July 17, 2020
English
Security
News
Transcript (Beta)
Welcome to my esteemed guest, David Haines, and this is Hacker Time. I'm your host, Evan Johnson, and it's a show where every week at 8:30 a.m. Pacific, 11:30 a.m. Eastern, we get together with a different guest and talk about different things that happened in the security world, about the guest's background, and anything else that's interesting or on security people's minds.
So, David, you're our esteemed guest this week. Do you wanna give an introduction about yourself and how you got into security?
Yeah, so like Evan said, my name's David.
I'm a security engineer at Cloudflare. The best way I would describe myself is I'm a builder at heart.
I love working on systems where we want to advance this idea of proactively discovering security issues and building systems to prevent them from happening in the future.
I got interested in security quite a while ago.
I knew for a long time that I wanted to do something with computers.
I remember in elementary school, we would cycle through these different centers, and my favorite center was the computer center in the library where I got to play Freddi Fish and just mess around on the computers there.
And I was fortunate in high school to take part in this program called CyberPatriot, an afterschool competition where you get to play around with VMs and look for vulnerabilities in Windows and Linux machines.
And then in college, I was able to take an undergrad program in security and do some internships.
And from there, I ended up here in my career. Unofficially, I'd like to state that my interest in security started in 2014 with Heartbleed.
I thought that was a really fascinating story, or just a point in time, for security, because there was this recognition that we have this long-running software that we rely on and need to trust, but it can still have bugs that result in catastrophic failures.
And I thought that was really fascinating.
And from there, I kind of started like, following all the blogs, reading all the things, messing around.
And that was kind of the first moment I can think of where I was involved in security.
I remember Heartbleed too. I wasn't working at Cloudflare at the time, but I was hitting refresh on Cloudflare's blog, reading all about it.
And that was a big point for me in learning about security, and where I kind of had this recognition that all of the tools of old are flawed and not made for the future.
They're not gonna last long. And so I think we've seen a lot of that where a few years later, OpenSSL got replaced with BoringSSL in a lot of places and people are moving away from all the other SSL libraries.
And then you see the same thing with OpenVPN where it's kind of getting replaced by WireGuard and there's probably more on that list.
I'm sure somebody will remake SSH one of these days because it's so complicated.
Yeah. You mentioned CyberPatriot. I didn't know, or at least I didn't remember you were in CyberPatriot.
I got involved in CyberPatriot as a coach when I was in school and I thought it was really interesting.
What'd you think about CyberPatriot though? Did you think it really prepared you or do you think it was a good enough representation of what security is actually like?
So I was one of the first people in my high school to kind of like start our CyberPatriot program.
I was the president of the org. And what I really enjoyed about the program is that it was very hands-on.
I think with security, especially when we're aiming content at high schoolers or people starting in higher education and undergrad, it's very slideshow-heavy.
We're gonna teach you all the terms.
We're gonna teach you what a worm or a Trojan is, or whatever.
You get those same slides over and over. And what I like about CyberPatriot is that, yes, at least at the time, I don't know if it's changed since then.
It's purely like, okay, you have a Windows VM, like let's lock it down.
Let's find all the bad stuff. Let's run all the tools. Same thing with Linux.
So it's like, oh, you know, is anything looking at /etc/passwd, you know, or whatever, right?
But what I enjoyed was that I was able to collaborate with other people and like debate on like, oh, I think this is a thing.
We should probably look into this.
And then it's like, no, we probably shouldn't or whatever.
I think a lot of that sort of area of security, which we kind of take for granted, is not addressed in most other classroom settings, or even in your solo lab work type deals like Cybrary.
I think for what it is and what it's going for, it was pretty effective, at least for me.
Yeah, I thought it was really cool in the sense that I was a master's student at the time and I got to kind of be in a room with a lot of people who were from like kind of rural Virginia and kind of motivate them that this is something that they can do and show them that they could do this thing.
They could be in cybersecurity.
They could have a future career in cybersecurity and kind of show them that it's not so hard.
And I thought that was really interesting. And besides that, I don't remember much.
I remember it took place Saturday mornings and it was brutal for me as a college student, but it was pretty cool the whole experience of being a coach there.
And then I also did the collegiate cyber defense competition, CCDC, and that was pretty good as well as a college student, but I think they should have something like that for high schoolers and even younger.
It's tough though.
I don't know how to best educate the youths of the nation in cybersecurity when- I don't think we give the youths enough credit, honestly.
I think it's easy to sit and assume that like, oh, you know, like they'll never be able to figure anything out, like blah, blah, blah.
And like, I think that at least my generation onward, you're gonna have like these kids that like just completely dump on the prior generations in terms of like what they're able to do with technology just because it's always existed.
Or like, I have lots of friends who had parents who would put in place these arduous rules about what you were and were not allowed to do with their computers or their Internet.
So they would just like get around it, right?
I think we're just gonna continue seeing that to the point where these like programs aim towards like high school and even younger.
I think, well, like there won't be as much of a barrier to entry as we think there is.
Yeah, especially you can just go online and read and start hacking and there's nothing stopping you.
You don't need a concerted program. But I do think that there's a lot that's missing in the school curriculums around computer science and computer literacy, just like there is financial literacy.
Like you get a job when you graduate, or at some point in your life, and you don't know a lot about money because they don't really teach it in schools, and computers are important too.
I think they should start teaching security in schools.
That's my hot take. So you're here at Cloudflare working on our product security team.
And we've heard about how you got into security but how'd you end up at Cloudflare?
So, I was a big Cloudflare nerd reading the blog for a bunch of years, very classic sort of thing.
I remember at my first internship, that was a sweet position.
And during lunches, I would read a bunch of like forums and whatnot, just like catching up on what people are doing.
And I thought that Cloudflare was operating at this really interesting intersection of security work, but also the size of the network and the products that they were able to launch and maintain on it.
I thought that, in terms of companies that have that sort of presence on the Internet, we're pretty much limited to the big five that are maintaining a lot of things.
And I thought that Cloudflare was operating in this really interesting middle ground where they had a lot of opportunities to do interesting security work, which is why I was driven to pursue full-time work there.
Nice. Yeah, I'm kind of in the same boat. I was working at LastPass at the time and I was reading the blog and was always really excited about the things that Cloudflare was doing.
I remember reading about Universal SSL after spending an hour, three hours, trying to get an SSL certificate through Gandi, and I was just going crazy.
Like, why is this so hard?
Let's Encrypt wasn't a thing yet. And then they launched Universal SSL and it was huge.
And I really wanted to go build security tools and products for the masses as well.
And then they tricked me and I joined, not knowing I was joining the security team instead of the engineering team.
And it's been great ever since.
Cool. Well, you're doing a lot of engineering work and a lot of just general security work, but what's next for you?
What are you looking to do later on? Yeah, so like I said, I'm a builder at heart.
That's something that I want to continue. Specifically, I feel like we can full-stop eliminate some of these recurring problems that we continue to see with the software that we're developing.
I honestly think we can knock out like 80% of the OWASP Top 10 by building out just a couple of things.
For example, we see a lot of these broken access control problems with people mishandling JWTs, not validating them correctly, or accepting the "none" algorithm.
I feel like VS Code should be able to tell me, you're not using this JWT correctly, I'm not going to let you commit.
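A minimal sketch of the kind of check being described, assuming a Node/TypeScript service and the widely used jsonwebtoken package (the function name, secret handling, and claim values are illustrative):

```typescript
// Illustrative sketch: pin the accepted algorithms so tokens signed with
// alg: "none" (or any unexpected algorithm) are rejected outright.
import * as jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? "dev-only-secret"; // placeholder secret source

export function verifyRequestToken(token: string): jwt.JwtPayload {
  const payload = jwt.verify(token, SECRET, {
    algorithms: ["HS256"],        // the key line: "none" is never acceptable
    audience: "internal-api",     // hypothetical audience claim
    issuer: "auth.example.com",   // hypothetical issuer
  });

  if (typeof payload === "string") {
    throw new Error("Unexpected unstructured JWT payload");
  }
  return payload;
}
```

A linter or pre-commit hook could then flag any verify call that omits the algorithms option, which is roughly the "I'm not going to let you commit" behavior described here.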
Or, these problems where we're continuing to scale our use of HTTP APIs, especially when we're talking about microservices and Kubernetes.
And we can have a service mesh at the network layer so that we can guarantee like who we're talking to is who they say they are.
But we still have this problem where I can input whatever I want into whatever fields and I'm hoping that things go through correctly.
I think there's a lot of exploratory work that could be done around fuzzing at the HTTP API layer. I know Yelp is doing a bunch of work around fuzzing with Swagger, which I think is really fascinating.
And I think there are a lot of avenues there that could be explored.
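As a rough illustration of fuzzing at the HTTP API layer, a hand-rolled sketch (not Yelp's tooling; the spec shape, payload list, and fuzz function are simplified assumptions) might walk a Swagger/OpenAPI path list and flag anything that returns a 5xx:

```typescript
// Walk the paths of a Swagger/OpenAPI-style spec and flag anything that
// returns a 5xx for junk input. Assumes Node 18+ for the global fetch.
type MinimalSpec = {
  paths: Record<string, Record<string, unknown>>; // e.g. { "/users": { post: {...} } }
};

const junkPayloads = [
  "", "0", "-1", "null", "{}", "[]",
  JSON.stringify({ id: "9".repeat(100_000) }),            // oversized field
  JSON.stringify({ name: "<script>alert(1)</script>" }),  // markup in a string field
  JSON.stringify({ role: { $ne: null } }),                // operator-injection shape
];

export async function fuzz(baseUrl: string, spec: MinimalSpec): Promise<void> {
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const method of Object.keys(methods)) {
      for (const body of junkPayloads) {
        const res = await fetch(baseUrl + path, {
          method: method.toUpperCase(),
          headers: { "Content-Type": "application/json" },
          body: method === "get" ? undefined : body,
        });
        // 5xx usually means unhandled input, which is worth triaging.
        if (res.status >= 500) {
          console.log(`possible bug: ${method.toUpperCase()} ${path} -> ${res.status}`);
        }
      }
    }
  }
}
```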
And then finally, I continue to have conversations with developers about CORS, and that continues to be a problem.
I would really love to see some tooling to help people wrap their heads around CORS and do it properly.
I know this is something you're also passionate about. Yeah, I hear CORS and my ears perk up.
I have lobbied several times on Twitter, tweeting at the one and only Mike West, that they should just rewrite the whole CORS spec.
I think they started to think about rewriting it and there's some draft, maybe, on the webappsec list.
But I believe that, yeah, CORS will never go away because so many people rely on it.
My hot take is it's not a problem with the spec.
I think it's the wording of the Access-Control headers, of all the headers. Like, I can't even name the one. Access-Control-Origin-Allow?
I think it is ACAO, Access-Control-Allow-Origin.
Yeah, I think if the wording changed on the headers alone to make it a bit more clear, CORS wouldn't be a problem.
Yeah, I guess if you changed star to have a more descriptive meaning, like allow anybody without credentials, and also just got rid of ACAC, Access-Control-Allow-Credentials.
Like, okay, I want to rewrite it.
I just want to redo it. I think it's a bigger job than that.
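For reference, the headers under discussion look something like this in a minimal Node server sketch; the key point is that the wildcard really does mean "any origin, but never with credentials," since browsers reject a wildcard ACAO combined with ACAC: true (the allowed-origin list here is hypothetical):

```typescript
import * as http from "node:http";

// Hypothetical list of origins trusted with credentialed requests.
const allowedOrigins = new Set(["https://app.example.com"]);

const server = http.createServer((req, res) => {
  const origin = req.headers.origin;

  if (origin && allowedOrigins.has(origin)) {
    // Echo back the specific origin; only then is allowing credentials safe.
    res.setHeader("Access-Control-Allow-Origin", origin);      // "ACAO"
    res.setHeader("Access-Control-Allow-Credentials", "true"); // "ACAC"
    res.setHeader("Vary", "Origin");
  } else {
    // "*" = any origin may read the response, but never with cookies attached.
    res.setHeader("Access-Control-Allow-Origin", "*");
  }

  if (req.method === "OPTIONS") {
    // Preflight response for non-simple requests.
    res.setHeader("Access-Control-Allow-Methods", "GET, POST");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type");
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(8080);
```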
That's my focus for the future. Nice. Getting rid of the OWASP Top 10 though, how do you do that? The technology landscape is just constantly changing.
And I feel like whenever there is progress fixing the OWASP Top 10, something new comes along and remakes the OWASP Top 10.
So like GraphQL came out and suddenly a lot of people aren't enforcing authentication and authorization on the specific mutations in their GraphQL APIs.
And it's just a constant rehash of all the same problems of old.
How do you, any ideas on how to fix the OWASP Top 10 holistically? Well, when I think about breaking the OWASP Top 10, I think of some of the work that's going into trying to eliminate concerns about memory unsafety with the Rust compiler, right?
Like, we have this idea where we can trust the system there to say that, okay, I can prove that just by writing safe Rust code, you're not gonna have a memory safety problem, right?
Memory unsafety is still a major concern in a whole bunch of other areas, but that's something where a system was built and we can use it as a tool to eliminate that concern.
So I think it's more in the sense of trying to address some of the fundamental reasons why these issues come up, with particular systems that can be built and utilized.
But yeah, you have to utilize it to get that benefit.
And I think the GraphQL example is perfect, because we have this next step, this evolution, this, okay, we've advanced past REST APIs, but then we regress in areas where it's just like, oh, we assume it's fine.
So I do think it is a little bit of a push and pull sort of deal, but- One strategy that I've thought about is, and a technology, maybe not recent anymore, that I think did a really good job is React as a framework.
Really, the only way you can mess up is with dangerouslySetInnerHTML.
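A small illustration of how opinionated that default is: React escapes anything interpolated into JSX, and the single escape hatch announces itself by name (the two components here are hypothetical):

```tsx
import React from "react";

// Safe by default: any markup inside `body` is rendered as inert text.
export function Comment({ body }: { body: string }) {
  return <p>{body}</p>;
}

// The only way to inject raw HTML is to literally type "dangerously",
// which is the prompt to ask where trustedHtml came from and whether
// it was sanitized first.
export function UserBio({ trustedHtml }: { trustedHtml: string }) {
  return <div dangerouslySetInnerHTML={{ __html: trustedHtml }} />;
}
```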
And I think about really opinionated tools, but I oftentimes will talk to people and say, you should build really opinionated tools or you should be very opinionated about security when you're building the software that you own.
And people will kind of say, I want it to be super flexible.
I want my, they have a different mindset that their tools should be flexible.
But I think being really opinionated does go a long way when it comes to making sure that the security is built correctly.
So JWTs could be a lot better if there were just strong opinions, that "none" is not okay in any library, rather than conforming to the whole spec. The spec is too big.
And so you end up with all of this cruft that's just impossible to handle.
Yeah. I think the React example is a great one.
It's just, you have to literally type "dangerously." And then it's also just like, yeah, you're going out of your way to handle user data and put it directly into the DOM there.
Like, otherwise, like it's completely opinionated.
I think that's a really good one. Yeah. I did have a conversation with somebody recently and they were like, wait, dangerously is bad?
Like that means that that's a bad function.
And I was like, yeah, that's what the name is supposed to mean.
The dictionary definition of dangerously. Yeah. They just thought it was a cool name for a function.
Well, that was a great introduction, David. And I think the next kind of order of business to talk about is the security news of the week.
And there's one story that dominates the headlines. Which one is it? I don't know.
So much. Could it be that weird Cisco CVE? I saw all these memes about how everybody was paying attention to the Twitter hack, which is what we're talking about, and not the kind of big Cisco CVEs that got released.
But it's definitely the Twitter hack.
And I thought that was a fascinating thing. And I kind of cleared my schedule and just watched the world burn for a while and participated in unverified Twitter, where the unverified rise up.
Yeah. The proletariat rose up to take down the verified Twitter.
And I think that's a huge part of what was so fascinating about this particular hack, right?
Normally when we talk about these big hacks or incidents, it's happened in the past.
It's like, oh, we found out about Equifax after the fact.
And then it was like this whole big thing. With Twitter, it was literally like, I'm watching in real time as like, account has popped, account has popped, account has popped.
Then it's like all verified people can't tweet.
And you're seeing the full thing in real time, like, it happening. And it almost felt like there was an essence of danger.
Like, oh my God, they could come after my account.
Like me with like a hundred followers. I thought that was a really interesting aspect of the experience.
Lock your doors, they're coming for your Twitter account.
Yeah. It was really interesting. It was also like, everybody was in this 300 million person war room, incident response room, watching this happen because Elon Musk would tweet and then it would get deleted and it would come back.
And so you're like, they have persistence. And it was just really interesting to watch.
I thought it was really interesting as well because of the internal tooling aspect. There were a lot of armchair analysts watching, me being one of them, and thinking like, well, what's actually happening here?
And one of the big theories, which was correct, was that it was an internal dashboard that got popped.
And there was some evidence of that from the outside that somebody had access to something that was internal to Twitter that they were using to do this.
And I thought that was really interesting because we've seen stories about these famed internal dashboards at different tech companies. Uber had their internal dashboard, the New York Times article, and now Twitter has their internal dashboard.
And I guess, what do you think about the responsibility of these tech companies and their internal tools, and how they balance the need to operate and enable the people working there to do their jobs, and also balance security with that, since obviously you could make $120,000 in an afternoon if you compromised this Twitter one?
Yeah, it's so hard because these tools exist for a reason, right?
That dashboard was designed in a very specific way so that people could get their jobs done, right?
And the reason why I believe it was purely, you get your credentials, you get in, you do your job, is because, yeah, this is how we typically think about these sorts of internal systems.
The insider threat, like threat modeling for the insider threat, it's not sexy.
I'm not sitting down for all day thinking about, okay, if I get credentials, I can pivot to 10 different areas.
Like, of course I can. It's almost like you can go down a rabbit hole of trying to figure out new ways of, okay, how do I grant people access but then also not trust them at the same time?
I've seen a lot of discourse, people talking about like, oh, we need to involve more people.
You know, we have to have this like, it's like third factor authentication where we have a third human come in and say, you're allowed to like change Elon Musk's email or whatever.
I personally have a problem with that idea because in my experience, many of those like processes that involve another human to tell you whether or not you can do something, they're gonna say yes.
You know, you're gonna be getting these requests often, especially, you know, people needing a 2FA reset or whatever and it's like, yes, yes, yes, like whatever.
Like, give me a reason, I'll look at it for two seconds and yes it, you know?
So I think that, you know- That's like half of my job.
In a sense. So trying to find a balance between access and security is a challenge, because people are crazy.
I think we should start thinking about applying the Zero Trust model to accessing applications.
And one example that I think of when I think of Zero Trust for specific applications, I really like what Netflix does with Aardvark and Repokid, where you basically say, okay (and this is for IAM permissions, but just think about it in the sense of role-based access control):
we have this hierarchy of permissions or roles that one can assume, where at the top is you can change Elon Musk's email and at the bottom is read-only, right?
So why should we state that if you have credentials and you can log in, you get everything no matter what?
You know, if we say role-based access control alone, we say, okay, if you're in the security org, you can do some things.
If you're in the support role, you can do other things, but then that's it.
There's no constant re-evaluation.
I think we should think about it like Aardvark and Repokid, where we say, use it or lose it, right?
You know, if you as an individual need to constantly be changing Elon Musk's password, you have the ability to do so, but most people should not have that ability off the bat.
And how I think that addresses this specific incident, with the insider threat, is that the malicious entity here only needed one set of credentials from one Twitter employee to be able to get everything, right?
If there was a use it or lose it sort of role-based access control system, they would have to hunt down the like one or two people who probably have the ability to change emails on verified users, for example.
So that's my take. I think we should start thinking about building like cover your ass systems versus trying to involve more humans.
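A sketch of that "use it or lose it" idea, assuming you can query an access log per user and permission; the 90-day window, data shapes, and function name are illustrative stand-ins for whatever a real system would expose:

```typescript
type AccessEvent = { user: string; permission: string; usedAt: Date };

const WINDOW_DAYS = 90; // illustrative review window

// Returns grants that were never exercised inside the window, i.e. the ones
// a "use it or lose it" system would flag for revocation. High-privilege
// actions like changing a verified user's email naturally shrink back to the
// few people who actually perform them.
export function staleGrants(
  grants: Map<string, Set<string>>, // user -> currently granted permissions
  log: AccessEvent[],
  now: Date = new Date(),
): Array<{ user: string; permission: string }> {
  const cutoff = now.getTime() - WINDOW_DAYS * 24 * 60 * 60 * 1000;
  const recentlyUsed = new Set(
    log
      .filter((e) => e.usedAt.getTime() >= cutoff)
      .map((e) => `${e.user}:${e.permission}`),
  );

  const toRevoke: Array<{ user: string; permission: string }> = [];
  for (const [user, permissions] of grants) {
    for (const permission of permissions) {
      if (!recentlyUsed.has(`${user}:${permission}`)) {
        toRevoke.push({ user, permission });
      }
    }
  }
  return toRevoke;
}
```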
Yeah, with more humans it just ends up being a big bureaucracy, and there's no reason for it, just having more bodies.
It's like, what's the point of adding more cogs to the wheel if there's no purpose for that cog besides saying yes or no?
The interesting thing is, I like that you mentioned Repokid. Shout out to Travis McPeak.
That's an awesome tool. And I hope he's watching, because I would love to see a Repokid-like system for internal systems that aren't IAM-based, something like Okta or your identity provider, where you can see all of these logs. The way Repokid works is it looks at all the logs coming across the wire, that CloudWatch data, and kind of builds an idea about all the resources you've touched, and then it creates a policy for you.
And that's actually way harder in IAM than it is in just looking at like applications somebody used in the last three months.
Yeah, actually, you're right there. Yeah, like how do you figure out which part of an S3 bucket somebody needs?
So I definitely agree.
I think that's a good way to address a lot of these problems. And I also thought it was interesting that POTUS's account, the president's account, was not tweeted from, and I opined that they probably had some special control in place on just that account, since people were also saying, oh, I wouldn't have made $120,000, I would have started World War III.
I think the POTUS not tweeting is a bigger story than we're thinking about it, right?
I remember very vividly in 2017 when a Twitter employee was being off-boarded and they went in and disabled his account for like 15 minutes, right?
And so like we can only speculate, but I imagine the idea, the concept of an account being locked to prevent someone from doing anything may or may not have originated from that incident.
But what I also find fascinating is that it's only applied for special case scenarios.
Like I would imagine Jack probably has the lock on his account as well, if we're going with that theory.
I find it fascinating that we only choose to use a system like that, one, in reaction to a high-profile incident, and then two, only on high-profile incidents, right?
From what it seems like, and why there was a gap in accounts being popped, it looks like it was an email rotation, disable 2FA, and then log in and tweet.
I want to thank you for joining me. I appreciate you joining me on short notice.
And the invitation is always open for you to come on and talk shop.
And I kind of want to do like demos or something at some point or show off security tools or something.
And I think that would be cool too. Not sure. Yeah, totally.
That was great. Yeah, thanks.