Originally aired on September 2, 2023 @ 4:00 AM - 4:30 AM EDT
This week is Cloudflare's Founder Spotlight on Cloudflare TV, featuring dozens of entrepreneurs from across the tech industry and beyond!
David Brumley is the CEO and co-founder of ForAllSecure, and is also a full professor at Carnegie Mellon University. Brumley's ambition in life, both in academia and at ForAllSecure, is to help solve the cybersecurity workforce shortage by making appsec autonomous. Brumley has received numerous awards for his work, including a United States Presidential PECASE award from President Obama, a Sloan award, a DEFCON black badge, and an exhibit at the US Smithsonian museum for his autonomous appsec engine Mayhem. Brumley also co-founded the philanthropic picoctf.com high school hacking competition, which is used by tens of thousands of high school students annually. Brumley stays active in the hacking community as an original member and advisor for PPP, a world-ranked competitive CTF hacking team.
Visit the Founder Spotlight hub to find the rest of these featured conversations — and tune in all week for more!
English
Founder Spotlight
Transcript (Beta)
Alright, welcome. We are live. I am Evan Johnson, Director of Security Engineering for Cloudflare's product security team.
With me is the esteemed Dr. David Brumley, the CEO of For All Secure.
And David, would you like to give an intro about yourself and about For All Secure?
Yeah, I'd love to. So I'm David Brumley. I have a PhD in computer science from Carnegie Mellon.
I was a tenured professor, and we've been working on new tech for over a decade on how to automatically find exploitable bugs in software.
And we've taken a really interesting arc with our company, trying to take that research through what's called the valley of death, where good ideas just die, and build a company around it.
And we've been pretty successful.
We have a number of customers today. So today, what we really focus on is, how do we go about solving some of the workforce problem where we can automatically go find vulnerabilities, not just find them, but prove them by creating a proof of concept.
And one of the cool things about it is it also grows your test suite, which sounds kind of boring to a security person.
But as you know, you never field untested code, even if it's a patch for a major security vulnerability.
And so that actually helps out developers as well.
Nice. So this is a part of Cloudflare's Founder Spotlight Week.
And I've talked to a number of founders this week.
We had Calvin French-Owen, who is the co-founder of Segment.
We had Jason Laster from Replay, who I need to introduce you to. You all are doing some really interesting things that are kind of close together.
Replay is building this time-traveling debugger. It's very interesting.
And then we had Josh Curl, who is the CTO and co-founder of Hightouch Data.
And you are the only founder that I've interviewed who's specifically in the security space.
And so I think it's really interesting how hard it is and how few innovative companies are in the security space.
And so I'm really happy to have you on.
And you just mentioned this valley of death in academia research. What is... I've never heard of this.
Is that what it is? Oh, we call it the valley of death. So what will happen in research is you'll get a grant, a big one.
You'll have graduate students work years and years on how to advance the art of something.
And you'll write papers and show how great it is.
But then when you're in a PhD program, your job is to write that paper and then move on.
And so no one ever goes that extra mile to bring it to practice.
The naive idea when you're in academia is, if I have an earth-shattering world problem-solving idea and I write about it, someone else is going to go through all the work to commercialize it and make that happen.
And that's really the valley of death: going from this great idea that moves the needle, one you can even measure and get a peer-reviewed publication for, to all the hard work that goes into building a product around something.
And until you build the product, you really haven't solved the problem, in my opinion.
And that's the valley of death going from A to B.
Yeah, that's so interesting. And I know that other areas have this valley metaphor.
I know in marketing, there's that famous book, Crossing the Chasm, about adoption of a product.
I haven't heard that one, though. That's really interesting.
And I actually heard this. I was having a conversation with somebody yesterday who said, almost you want to go read these papers in academia and in computer science research and machine learning and security, and then do the opposite of what they said.
Because in academia, you make a machine learning model that's 99% accurate with 1% false positives.
And then you roll that out in production.
And 1% false positives, you're looking at billions of requests or billions of whatever you're doing with the size of data sets today.
And you might be proud in the academic context and then completely crushed in the professional context by that kind of data.
I think that's one of them. That's the base rate fallacy.
If you have a 99% accurate test, one in every 100 results can still be a false positive or false negative.
And someone says you have the disease, what's the actual chance?
You have to look at the base rate of the disease. So this is from radar theory, but we see it in intrusion detection.
We see it in machine learning.
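As a quick worked example of the base rate fallacy being described here (the numbers below are made up for illustration, not taken from the conversation):

```python
def posterior(sensitivity, false_positive_rate, base_rate):
    """P(condition | positive test) via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = false_positive_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A "99% accurate" detector (99% sensitivity, 1% false positives)
# applied to events with a 0.1% base rate:
p = posterior(0.99, 0.01, 0.001)
print(f"{p:.0%}")  # roughly 9% -- most alerts are still false alarms
```

With a 0.1% base rate, even a "99% accurate" detector produces alerts that are wrong about 90% of the time, which is why false-positive rates matter so much at the scale of billions of requests.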
So totally grok that. I think that's part of the problem. The other part of it is even if you don't have that particular issue, there's a big difference between the closed world of academia and what you have to handle in real life.
And in computer security, one of the papers I always enjoyed, not even related to my company, was by the Coverity co-founders.
So Coverity started at Stanford, and they had this great paper where they found all these bugs in the Linux kernel.
And so they went and commercialized Coverity.
And the paper was like, here's all the problems commercializing that we never even thought about.
So in academia, you do all your research and you compile everything with GCC.
And they're like, we went out, and on your first big deal, they don't use GCC, they use the Borland compiler.
And there's all these different versions that you have to deal with.
They were just like the stupid, simple problem of how do I parse the code?
You never even think about it.
How do I build the code? Until you solve that, you have no product. And so it really stuck with me, this problem of until you go out into industry, you don't actually know what are the barriers.
And sometimes they're technical, but a lot of times behind those technical barriers are just a whole bunch of, I guess I would call them, even though it's compilers, non-technical barriers.
It's an engineering problem, but it's an important one.
Yeah. I guess I would describe it maybe as a product problem: how do you get the meat of what you're doing in front of the customers, and in a nice way.
That's really interesting. I didn't know about this paper.
I'm going to go back and read it because that's something that I think about regularly about just security companies in general and how hard it is for them.
So you kind of touched on this. I'd love to hear about the arc of For All Secure, especially the origin story or how it got started and back in academia.
What did this look like when you were doing research? Yeah. We were just chatting about this. For me, I was the computer security officer for Stanford from 1998 to 2003.
So just to date it, this is when it was called google.stanford.edu, to everyone out there.
And at that time is actually when I decided to go back to academia because I got a boss I didn't like, and I said, hey, I want to be in full control of my destiny.
So I went back, got a PhD. The other thing that I realized I could do is become an entrepreneur.
So it's kind of funny that I'm doing both, but what we were doing research in was really the problem I ran into when I was a CISO, which is an attacker would go find a brand new vulnerability, come up with an exploit and break into all the machines.
And so what I wanted to do is find exploitable bugs before attackers.
And this was really the way I see it as actually somewhat precisely crafted.
I didn't want to just find vulnerabilities.
I actually wanted to find which ones an attacker could exploit. And the second part of it in academia is we wanted to be able to do this with compiled software.
And this is a little bit radical. And originally we were called a little bit unethical.
I wanted to take Microsoft code, even though I'm not Microsoft, and I wanted to be able to find zero days.
I wanted to be able to take vendor code from anyone and be able to go find zero days because after all, when I was a Stanford computer security officer, I'm sure you can appreciate this, you're the one who suffers the issue, not the open source author or the commercial vendor.
So we did research in that, and that was kind of phase one of the company. And we got best paper awards and it was great funding and so on.
And what happened is very luckily, DARPA had something called the DARPA Cyber Grand Challenge.
And their challenge was, can we make cyber fully autonomous?
They had this for self-driving cars, they had it for robots.
And so here we are with research and I can take a compiled binary and I can prove what's exploitable and actually create a POC.
And so we entered that contest. And so phase one of the company was moving from academia to win that.
And so in academia, what we cared about was how do we write a paper?
How do we do just enough to show state-of-the-art while not letting it become an engineering problem, right?
Like any skilled practitioner could do it as well.
And I think that was a really good move that we did because we were really focused and we had a scoreboard, right?
You either win or you lose in these things.
And it's something that I think, when you read business books, and I read a lot of business books, having a scoreboard is so important.
And so the cool thing about that is what we started to worry about are things like reliability.
How can we make sure our service is never down?
Finding exploitable bugs, but also patching them. And this is when we started doing work in, well, even if I find the exploitable bug, the reason I don't field the patch is because I have a bad test suite.
And in this contest, you're given a binary, no human intervention, no test suites, you have to create it on the fly.
So we use the same sort of techniques to find the exploitable bug and build the test suite.
When we had a patch, we would auto-patch it, run against the test suite, and make sure that there was no performance loss or functionality loss, and that the security issue was fixed.
And what's kind of cool about that area of the company is we're really focused on how do we win these games kind of still in the abstract, but it was a real system.
The third phase was after we won that, we were again naive.
This is the second naivety. The first one is you write a paper and you think everyone's going to do it.
The second one is you win this contest, everyone's going to do it.
We decided to make a company and a real product around it.
And that was when I began to appreciate probably more fully what it takes to really build a product.
So we went and we raised money from New Enterprise Associates and we started to build that product.
And what was really interesting is just as in the company, the type of people you hire, the type of company you are, the culture changes.
In research, it's very academic oriented.
When you're doing cyber grand challenge, again, very research oriented.
You wanted to prove that you had the best algorithm. In your product company, a lot of those problems go away and it's actually a culture shift.
And one of the hard parts is if you have people who only care about technically the best results, you have to start augmenting them with people who care about, does it work for the customer?
And that's just a different mindset. So that's been kind of the arc is we went to NEA, we got it, we built the product, and then we've been commercializing it and selling it for the last two years.
And Cloudflare has been a great adopter of it, but we also sell to companies like Roblox.
And kind of interestingly, the Department of Defense uses us to check a lot of things.
So we've been used from everything from video games to literally missiles.
Love it.
I think, well, there's so many things that I want to dig into. That's a fascinating arc that I didn't actually know all the details of all of that.
One technical question here, digging in before I get to the whole arc and questions about that.
So patching the vulnerabilities live in this challenge, but you're patching a binary.
Are you just adding instructions to skip the exploitable section of the binary or?
Yeah. Well, okay. So first, what they call the patch is not like a human patch, right?
It's so you just don't get owned. That was the definition. And so what was really interesting about it is you have a binary and you have to rewrite the binary, which is considered a hard problem, statically rewriting it.
So we would do that.
And so we actually had three or four different methods. Part of it is we would go in and try to add new detections.
And if we thought there was an exploit coming in, we would safely exit and restart.
We could try to patch the actual vulnerability.
And so what was really interesting about this entire gameplay thing was after we found the exploitable bug, the patching was the easier part, but we wouldn't just patch it using one technique.
We would actually create six or seven different candidate patches.
So we would add in stack canaries, ASLR, we would try to detect the exploit.
We'd try to patch it, and we'd replay the test suite and measure which had the best performance and functionality.
And then we'd field it. So there was never just like one patch. We would actually create a bunch of them and take that test suite to say which one is going to achieve the highest score.
In business, we would say maximize the business advantage.
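As a sketch of that candidate-patch selection idea, here is a toy model. To be clear, this is not Mayhem's actual implementation: the "patches" here are stand-in Python functions, the "exploit" is just a crashing input, and performance is scored crudely with wall-clock time over the test suite.

```python
import time

def pick_patch(candidates, tests, exploit_input):
    """Return the fastest candidate patch that stops the exploit
    and passes the whole test suite (no functionality regression)."""
    best, best_time = None, float("inf")
    for patched in candidates:
        try:
            patched(exploit_input)            # the exploit must no longer crash it
        except Exception:
            continue
        if any(patched(x) != want for x, want in tests):
            continue                          # functionality regression
        start = time.perf_counter()
        for x, _ in tests:                    # crude performance score
            patched(x)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = patched, elapsed
    return best

# Stand-ins: a divide "service" that originally crashed on input 0.
def patch_guard(x):   # candidate 1: rejects the bad input
    return 100 // x if x != 0 else 0

def patch_stub(x):    # candidate 2: never crashes, but breaks functionality
    return 0

tests = [(1, 100), (4, 25)]
best = pick_patch([patch_guard, patch_stub], tests, exploit_input=0)
print(best.__name__)  # patch_guard
```

The key point from the gameplay description survives even in this toy: a patch only "wins" if it both stops the exploit and keeps the replayed test suite green.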
Nice. That's fascinating. And actually, the way I met you about two years ago is kind of a funny story in that arc. You mentioned NEA was an investor.
And I think somebody at NEA had emailed our CEO who was like, hey, what do you think of this company?
Is this something that would help you all? And then I think that our CEO, Matthew, forwarded it to my boss who forwarded it to me.
I just got the trickle-down effect of "not my problem, somebody else look at this."
And I saw it and saw that we were working on some fuzzing things at the same time.
And it was very fortunate timing. And I said, definitely going to reach out and learn all about this.
And I think that was about two years ago or two and a half years ago or so.
And then on that arc, I guess, what are some of the challenges that you've had productizing this and getting it to be something that helps an industry?
And I guess, how has the product changed over that time? Yeah. Well, I can tell you one of the challenges that we face is really just telling the story.
When people really understand what we're doing, I think that they get it. And fuzzing is actually just the tech.
We're not a fuzzing company. What we were doing was we were making an autonomous security solution.
And to do that, you had to have zero false positives.
If you think about autonomous driving, you don't want something that thinks there might be a rock, but 25% of the time, it's really not a rock and it swerves and hits the other car.
And so we started to build this up. So telling that story, if I can tell it personally, people get it.
But scaling that up has been difficult because security is a very noisy industry.
And so when we go and we tell people we have zero false positives, first, they're either disbelieving or they've heard this from other vendors and it's not true.
And it's like, look, I'm not just claiming this.
What we do is we actually give you an input that triggers the bug.
The other part of it is I have to tell people something that they probably don't want to hear, which is we won't find all bugs.
And I think that's a unique message.
A lot of people in security are, I guess, conditioned through history of saying, I'm going to run a product.
I'm going to get a list of all the problems that could ever occur.
And if I could only go through that list, I'm secure.
And industry veterans know a little bit better than this. They understand it's moving faster, not this checklist.
But I'm very upfront about that message.
And so really getting that effectively across has been one of our challenges.
Yeah. You can't buy security and just be perfectly secure. It's about getting tools that help you cover your greatest risks.
So I guess that is an interesting message because I'll get a phone call from a cold reach out from some security vendor and it'll be like, we can solve all your problems.
And it's just click.
There's so many companies in security just selling their product a little too high or way too high.
They are. And I mean, that's kind of business as usual.
It's a really interesting market force. So the way you typically sell even today to a company is you call up someone like yourself and then they say, okay, go find a vulnerability in some of our software.
And it almost is a bad market force because that means given any piece of software, you want to point out something.
You want to find something wrong. And I think if I was ever in your shoes, what I would do is I'd come up with some NSA proven crypto algorithm and ask them to find a vulnerability in the crypto where it's impossible because you know they're going to point out something.
Yeah. You give them some formally verified AES implementation and say, break it.
And if you're so good, find an issue. And most probably would at least spend an hour trying to find it before they realize that they've been duped.
Yeah. So I think what we've seen in industry, and I think what I loved about the CGC and Log4j actually is a good example of this.
Security in my mind is about speed.
Once you find out about an issue, as soon as you have certainty that there's a real issue and then you can get the wheels in motion and get a fix out, that's really how you become secure.
And so that's what a lot of our tech is focused on.
I think the other tough part is just saying no. Probably half of the time people are asking me about something, I have to say no, which makes me feel bad.
I think to give you a perspective, I would never do something I did in academia that I did in the company.
Someone came to me and said, can you make Mayhem work on an embedded OS and I'll give you $2 million.
And I had to say no. And I said no, because I didn't have a market to sell past that person to at the time.
They wanted to run Mayhem on their own special device. Is that... They wanted to run it on an embedded system.
So the hard part is we're actually taking the software and you have a technical crowd, so I'll go into it.
We're taking a Docker container and we're running it a thousand times a second, really fast.
We're feeding it an input.
We're doing amazing analysis to try to learn how that application is processing that and learning from that.
And then coming up with a new input to trigger more code, to get more code coverage.
Because in some sense, all an exploit is, is an input for some previously untested code.
Like if you had a test for it, you shouldn't really call it an unknown vulnerability.
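The loop described here, run an input, observe what code it reaches, derive a new input that covers untested code, can be sketched as a toy coverage-guided fuzzer. This is a deliberately simplified illustration, not how Mayhem works (a real engine uses random mutation, symbolic execution, and binary instrumentation, none of which appear here): the `target` is a hypothetical program that cooperatively reports its own branch coverage and hides a crash behind the input `b"FUZ"`.

```python
def target(data: bytes):
    """Hypothetical program under test: returns the set of branches
    it executed, and crashes on one specific untested input."""
    cov = set()
    if data[:1] == b"F":
        cov.add("F")
        if data[1:2] == b"U":
            cov.add("F->U")
            if data[2:3] == b"Z":
                raise ValueError("memory corruption!")  # the hidden bug
    return cov

def fuzz(target, max_len=8):
    """Greedy coverage-guided search: keep any input that reaches
    new code, and report the first input that crashes the target."""
    corpus = [b""]
    seen = set()
    for inp in corpus:                 # corpus grows as we find new coverage
        if len(inp) >= max_len:
            continue
        for b in range(256):
            candidate = inp + bytes([b])
            try:
                cov = target(candidate)
            except Exception:
                return candidate       # crashing input = the proof of concept
            if cov - seen:             # new coverage: worth mutating further
                seen |= cov
                corpus.append(candidate)
    return None

print(fuzz(target))  # b'FUZ'
```

Real fuzzers mutate randomly and measure coverage through instrumentation rather than cooperative targets, but the feedback loop, where new coverage earns an input a place in the corpus, is the same, and the crashing input doubles as both the proof of concept and a new test case.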
And there's all those components there that would have to run on a...
I see the complexity there. Yeah.
And so we go to people and we're like, you're giving us an Arduino, essentially.
We're not magic.
As soon as you figure out how to virtualize that so I can run a thousand copies of it in the cloud, I can help you.
But I'm not going to... It's a research problem to do that.
Right. Awesome. I'm curious, you kind of mentioned the narrative.
Actually, I was talking Monday with Calvin and I said, what's your message to founders and people who want to be great entrepreneurs?
And he said that when he was just starting Segment, he didn't really appreciate.
And he said that the other founders didn't really appreciate how important it was to be able to tell the story of your company, how it fits into the broader landscape, why it's impactful, all of those great things.
And I think at Cloudflare, we do a very good job of this, where we're always writing blog posts, sharing our vision.
And what is that for ForAllSecure?
What is the narrative that you... Getting past the "it looks for vulnerabilities" and the basics of what the product does, what's the narrative of the bigger picture?
I think for founders, I can tell you, we've been around long enough that some years are better than others.
And when things seem to be going right, it's when everyone in the company is using the product.
And so I think that's a big thing.
And there are some products that you're going to bring to market, you're just naturally using them all the time, like Slack.
If I was creating Slack, I'm also using it in the company.
But security products, there's a lot of scenarios where it's not applicable inside, but you still are providing that feature to the customer.
So you've got to figure out a way to have it so everyone is using it and figuring out that scoreboard.
So I think that would be one just culture thing that I found, is being able to dogfood is so important.
Yeah, for sure. We're big dogfooders at Cloudflare.
We use all of our own products. Cloudflare Access is a big one and Cloudflare for Teams.
That's a big one that, as we brought it to market years ago, wasn't great.
And it was only through using it, and forcing ourselves to use it as part of our security model, that it got really, really secure and good, with the rough edges ironed out.
So it's a huge benefit, dogfooding. And I think it's hard with security because you look at some of the security product companies, like Slack, you use it every day.
It's in the hot path of every communication between you and your peers.
But Mayhem, for example, I feel like using your product or using some of Cloudflare's products, it takes effort to have to go use it.
It does. And it's the sort of thing that you...
I guess another message, like things like culture, you always have to have a culture, but it's going to change.
We had the research culture, now we have product.
But every week we actually do a leadership roundtable, and every two weeks, engineering needs to come to me with five new open source projects that they've run through Mayhem.
And what's kind of interesting is we can use that for things like, hey, we found a new zero day and so on, but actually just the fact that they're using it is the most important thing to me.
You could take that result and throw it away and it would still be beneficial because if there's some feature on Mayhem that's annoying, they'll fix it.
Wow. So you have a group of people at ForAllSecure who's constantly just going, picking open source projects off of GitHub and just fuzzing them and making it rain shells.
For the past 51 weeks in engineering, we've added five new ones per week.
So we have this as an internal dataset. What's funny is we're not even doing a good job commercializing this.
We have all these vulnerabilities and we've never reported them because we don't have bandwidth.
Because when I set this up, I wanted to set it up as us actually using it.
That was my primary principle.
And every time I ask someone to go file a CVE, there's three weeks of working through the CVE process, which is annoying.
So we'll do it at some point, we'll scale to that.
We have this huge internal database. That is awesome. I'm sure that you have found really, really interesting things.
I know that, I mean, just saying it, I think one of the issues that really fascinated me years ago was the file issue, where file contained an entire ELF parser in it.
And with AFL, I think it was the author of AFL actually found this and reported it.
And I've just felt like the only barrier between all of these really interesting bugs and finding them and exploiting them was just a little bit of effort, like fuzzing and spending time actually looking at these things.
Yeah. We call it the ingestion problem.
So we're not a network tool where you just pointed at a URL.
You have to get the software, get it in a Docker container and then give it to me.
But once you do that, it's automatic. But like I gave a talk to the European Broadcast Union earlier this year and the night before I just went to their GitHub repository, found a software project, found a new zero day.
And I was very vague about it.
Like I didn't say, since it was recorded, exactly here was the bug, but we're getting to that point actually with Mayhem.
And it also, I think, speaks a little bit to a different mindset I have, at least in the company. There's a little bit of a law of large numbers in security, where when you talk to a vendor, they're like, here's one project, go find a new zero day.
That's not actually representative of the attacker. It's like, let me go to your entire GitHub repo or let me look at all the software.
I just have to find one.
And if I can find it with high confidence, that's probably better than giving you a million warnings on that one piece you gave me.
So we're kind of going after the volume, the law of large numbers here.
Definitely. Oh, I just had something and I forgot it about some of these.
Oh, CISOs, I have a note here about CISOs.
And I'm curious, like, not exactly what we were talking about before we started on air, but I'm curious, like, how do you tell, I think every CISO, when you sit down and tell the narrative of For All Secure and tell them about the value, they get it.
But like, how do you get fuzzing and this type of software assurance and this, these kinds of extra analysis of software to be one of CISO's like top three items that they want to make sure that they're doing?
Because on every CISO's list, it's like identity and access management and like asset management or endpoint.
Like there's a top five that anybody could kind of say from memory once you've been on a security team, or in the industry, for a few years, but what's going to get fuzzing to that level?
I mean, that's part of the puzzle of our company.
So I think it's something that everyone should be doing.
And I think it's really on us to make it as easy as possible. When I'm talking to CISOs, I think there are some CISOs that we're just not going to help.
Right.
So the part of the qualification actually is to be able to say no to people.
So where we find it's on the top list, or something they should add, is when availability, if you think of the CIA triangle of confidentiality, integrity, and availability, is super important to that organization.
So all of our big customers that are public, whether it's the Missile Defense Agency or Roblox or Cloudflare, if you think about these companies, the reason they're running Mayhem is because downtime is a big deal to them.
If all you're worried about is some hacker coming in and stealing encrypted usernames and passwords, that's a big deal.
You should solve that problem, but that's more of an IT hygiene issue.
And so where we have the best traction, at least, is companies when they're actually doing development and that development is core to the business.
And what we see is more and more companies are becoming this, right? Like who would have thought Caterpillar who makes farming equipment or John Deere, but they really are software companies now.
Yeah. So we're riding that way, but I think that's how I go in and I talk to CISO's.
It's like, we're not going to go in and help you protect credit card numbers, but you know, if you're going to get called into a room, when your service goes down, we're going to help you with that.
I think that's a really astute observation. Like, yeah, that is the type of person who cares: people who care about availability are right there on software.
And yeah, that makes sense. I've always thought about, well, anyone writing code without memory safety, but that's also not quite the same as people who care about availability. So, like, anybody writing C++ out there, and C.
Well, I mean, Go and Rust actually have fuzzing built in, and they're type safe.
So it's not just for unsafe languages. The other place that we found a lot of traction is, I think because there's so much snake oil, security has often burned their bridges with dev teams.
Yep. And so we constantly hear this of, I rolled out SAST, my developers hated it.
They are, you know, it's kind of like when you ground your kids and you find out they're in the room on their phone watching YouTube when they're not supposed to, right.
You haven't really done it.
Passive resistance, I guess, is what you'd call it. Passive, you know, resistance.
I don't know. People actually want to build bridges with dev.
I think this also helps. 'Cause, well, we're almost out of time. We have about 10 seconds left, but I really want to thank you.
I thought this was an awesome overview about the company and about security in general.
So thank you, David.
I really appreciate it. Thanks.