Signals from the front lines: Moving from recovery to autonomic resilience
Speakers: Khalid Kark, Jerry Perullo, Sounil Yu, Mario Duarte
In this episode, host and Cloudflare Field CIO Khalid Kark sits down with a powerhouse panel of CISOs and founders - Sounil Yu, Jerry Perullo, and Mario Duarte - to discuss a paradigm shift in cybersecurity: the move from traditional "backup-and-recovery" resilience to autonomic resilience.
As AI-enabled threats become more relentless, the panel argues that human-centric security operations must evolve. They explore the critical fault lines facing the C-suite today, from the challenges of autonomous decision-making and AI governance to the complexity of managing "nth-party" risk in a hyper-connected ecosystem. Whether discussing the OODA loop of AI agents or the reality of shadow AI in the enterprise, this conversation provides a blueprint for how leaders can stay ahead of the curve.
Khalid Kark is a globally recognized technology strategist and Field CIO at Cloudflare, where he works closely with C-suite leaders and board members to shape secure, scalable, and resilient digital strategies. With over two decades of experience at the forefront of technology leadership, Khalid helps organizations navigate the complex intersection of business innovation, cybersecurity, and enterprise transformation. Previously, Khalid led Forrester’s Security & Risk and Technology Leadership practices, served as Global Managing Director of Deloitte’s Technology Leadership Program, and chaired Deloitte’s Tech Eminence Council to elevate thought leadership in AI, cybersecurity, and digital innovation.
Mario Duarte has 20+ years of experience as a security professional working in the tech, retail, health care, and financial sectors. He has built and managed security teams and developed and implemented security programs for private and public organizations. He serves as an advisory board member at several cybersecurity companies as well as an investor for early stage startups in the cybersecurity space.
Sounil Yu is the author and creator of the Cyber Defense Matrix and the DIE Triad. He is currently the cofounder of Knostic, an AI safety startup. Previously, he was the CISO at JupiterOne, CISO-in-Residence at YL Ventures, and Chief Security Scientist at Bank of America. He's a Board Member of the CMMC Cyber Accreditation Body; co-chair of Art into Science: A Conference on Defense; fellow at GMU Scalia Law School's National Security Institute; guest lecturer at Carnegie Mellon; and advisor to many startups.
Jerry Perullo, Founder of Adversarial Risk Management, is a cybersecurity expert and former CISO of the NYSE and Intercontinental Exchange (NYSE: ICE). He led ICE's cybersecurity program for 20 years, overseeing significant growth and M&A integration. Perullo is NACD Directorship Certified® and teaches Enterprise Cybersecurity Management at Georgia Tech.
Transcript (Beta)
I would say: what keeps you up at night? To the CISO, maybe really direct. I wouldn't do it in front of the rest of the board.
I wouldn't do it in front of the board. I'd take you out for drinks and then we'd talk about it.
Well, welcome to the Signals podcast.
Today we have a really special treat: the three members of the Adversarial podcast. They're all, of course, CISOs and founders of companies, and it's a pleasure to host all three of you.
Welcome to the Signals podcast.
Thanks for having us. Thank you. This podcast is based on a report that we just launched.
It's called, fittingly, the Signals Report. And the goal for us is, every year, there are probably 50 reports that get launched at RSA.
Everyone and their brother has a report.
And all of those reports are typically directed to CISOs in terms of what they need to do, how they need to do differently, a lot of the threat reports, a lot of the cybersecurity reports.
The way we differentiate this Signals Report is that it's meant to be a conversation across the C-suite.
So a C-suite leader, a CFO, a CEO could pick up the report and talk to a CISO about it or vice versa.
A CISO could actually use a lot of the material from the report to have a C-suite conversation around topics that matter to companies and to businesses.
So we'll go through some of what we call the fault lines and would love to get all of you to weigh in and provide your experience in terms of what those fault lines mean as you're interacting with cybersecurity professionals as well as industry out there.
We'll start with the first one, AI governance.
It's an interesting topic that I think we all are grappling with in many ways.
I feel that AI has just sped up a lot of what we did previously in the last decade in terms of cybersecurity.
So the first question is, as organizations strive for autonomous security, because again, with AI, threats are increasing, how do you think about really autonomous security and what does that mean to organizations today?
I'm a definitions person, so when you say autonomous, there's a couple different levels of autonomy.
You're also the founder of an AI security company. That too, yes.
But when we talk about autonomy, one of the things we're really talking about is autonomous decision-making.
We're granting a system the ability to make decisions, to come up with new and clever playbooks, and those playbooks may not necessarily be aligned with what we intend in a certain action.
If a machine takes an action on something, it may diverge from our original intent.
So I think we in security, and the business in general, need to get comfortable with an AI system making such decisions with potentially negative consequences.
And that's not just for security. That's just for anything in the business itself.
And one of the biggest challenges I think we'll see is accountability: if a system makes a wrong decision, who's going to be held accountable?
The system owner, the person that allowed this to proceed?
Who knows? And I think many organizations are wrestling with that right now.
And, I mean, to your point, we just saw this week OpenClaw apparently deleting a bunch of emails, right?
Was that it? That's all? I think one of the challenges is that we're just drowning in data.
We've had this challenge since probably the mid-teens of the last decade.
This is the former Snowflake guy.
Yeah, former CSO of Snowflake, that's right. So we are drowning in data, and we've hired a lot of folks who were employed to go look at this data and make sense of it.
And now we're going to rely on AI agents to do that on our behalf because, one, the human beings can't keep up with it.
The problem is just trusting these LLMs blindly and faithfully.
And when I think of autonomous, I think of trusting them blindly, just looking at their results and taking them at their word.
I think that's a problem. That's a big problem. Yeah, I think about this all the time.
And anything that you automate pre-AI is just repeating the same process.
So if you have a bad process, you're just going to have a really bad process.
And one thing that we've identified and observed over and over again is that if you take an LLM, they're really good at language and at putting together articulate sentences, but their viewpoint on cyber risk management has been trained on 25 years of vendor FUD.
And all it does is make a panic out of everything really fast and at high volume.
And you actually have to carve that part out to make it more deterministic and to actually have some kind of rubric of how we're going to do risk management.
You've got to define that in-house first.
And then AI can come along and feed from it, but do not stick it on just the public Internet.
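To make that concrete, here's a minimal sketch of what a deterministic, in-house risk rubric could look like, with entirely hypothetical factor names and weights. The point is only that severity comes from a table you defined, not from the tone of a model's training data.

```python
# Hypothetical in-house risk rubric: fixed factors, fixed weights.
# An LLM can summarize or triage findings, but severity is computed
# deterministically from this table, not from vendor-FUD phrasing.
RUBRIC = {
    "internet_exposed": 3,
    "handles_customer_data": 2,
    "public_exploit_available": 4,
}

def score(finding: dict) -> int:
    """Sum the weights of the rubric factors present in a finding."""
    return sum(weight for factor, weight in RUBRIC.items() if finding.get(factor))

finding = {"internet_exposed": True, "public_exploit_available": True}
print(score(finding))  # 7
```

Anything the AI flags then gets ranked by this score, rather than by how alarming its summary sounds.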
My favorite exercise to do with an AI agent is to say, are you sure about this answer?
And they're like, oh, actually, you're right. Let me think about that again.
Now you know where you're at, right? One other thing, going back to definitions: there is autonomous, which has some aspect of decision-making, but there's another word that may be more appropriate, which is autonomic.
So if you think about how we breathe, most of our breathing is autonomic.
It just happens naturally, and we don't have to even put conscious thought into deciding whether I want to breathe or not.
And I think when it comes to certain things in security, we want more things to actually be autonomic, requiring no real decision-making whatsoever, because it is just a thing to do.
So I can promise I did not pay Sounil to say that, because the report subtitle is Autonomic Resilience.
Oh, there you go. So absolutely, that is the real intent. I think going forward, we are hoping and expecting that it becomes natural in terms of the reaction and the autonomy that we build.
Now the question is, this whole notion of human in the loop: what does that really mean to all of you as you think about how the human gets in the loop?
I know we all mentioned that we can't rely on LLMs alone to make those decisions; sometimes the training data may be weird, and sometimes the decisions may be iffy.
So how does the human become part of the loop, and what does that really mean?
I think any company that is doing any AI decision-making on behalf of a human needs to demonstrate how it arrived at that answer.
So tell me exactly: show me what you did, tell me how you came to this. And the human in the loop can then look at it and say, wow, you really went and grabbed information that wasn't even available to you.
How come? Well, so let's look at a mental model.
The mental model is the OODA loop, and the OODA loop stands for Observe, Orient, Decide, Act.
When we think about the human in the loop, the human can be in between each of those four steps, or the human can be among the different loops.
So you have a system go through all four stages, and then before it takes the next loop, from the action to the next observation, you have the opportunity to intervene.
And so the question of human in the loop here is, where does it make the most sense for a human to be in the loop with respect to autonomous systems and especially agentic AI?
Right now, it's too onerous for the human to be in between parts of the OODA loop, and it's even becoming onerous for the human to be in the loop among OODA loops. A great example is Claude Code or any of the coding agents.
It's really painful to keep saying, approve, approve, approve, approve.
At some point, you just say, just do it, and dangerously skip permissions, and you're essentially in YOLO mode, you-only-live-once mode, where you've chosen to take yourself out of the loop.
But the consequence model that we use in security doesn't apply to these agents.
They just don't understand consequences, and that's why one deletes all your email, that's why one deletes your hard drive.
And so with this understanding of consequence, our understanding of consequence, we need to find a way to say: okay, when a decision or action is consequential, by my definition of consequential, then I actually want the machine to stop and give me a chance to intervene.
And that's really where I think we're starting to move into where the human's going to be in the loop, because right now, to be in the loop with every loop is just too much noise for the human.
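The consequence gate described above can be sketched in a few lines. This is a toy illustration, not any real agent framework; the names (Action, run_ooda, the 0.7 threshold) are invented for the example. The loop runs autonomously and only pauses for a human when an action's consequence crosses the caller's threshold.

```python
# Minimal sketch of a consequence-gated agent loop (hypothetical names).
# The agent runs its OODA loop on its own, but stops for a human only
# when an action crosses a caller-defined consequence threshold.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    consequence: float  # 0.0 = trivial, 1.0 = irreversible

def run_ooda(observe: Callable[[], str],
             decide: Callable[[str], Action],
             act: Callable[[Action], None],
             approve: Callable[[Action], bool],
             threshold: float = 0.7,
             loops: int = 3) -> list:
    log = []
    for _ in range(loops):
        observation = observe()          # Observe / Orient
        action = decide(observation)     # Decide
        if action.consequence >= threshold:
            # Consequential: pause and give the human a chance to intervene.
            if not approve(action):
                log.append(f"blocked: {action.description}")
                continue
        act(action)                      # Act
        log.append(f"done: {action.description}")
    return log

# Usage: deleting a mailbox is consequential and gets blocked; triage is not.
actions = iter([Action("triage alert", 0.1),
                Action("delete mailbox", 0.95),
                Action("tag phishing email", 0.2)])
log = run_ooda(observe=lambda: "inbox scan",
               decide=lambda obs: next(actions),
               act=lambda a: None,
               approve=lambda a: False)  # the human says no
print(log)
```

Everything below the threshold is effectively autonomic; only the consequential action waits for approval.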
Yeah, it is very loopy. So I know, Jerry, you've been a CISO, and you've been part of reporting to the board and working with the boards and so on.
What do you tell the board in terms of governance? I mean, of course, they're responsible for ultimately governance, but how do you articulate this complexity to the board?
The question writ large is about cyber risk management, and I think this is a slightly different answer than the AI questions.
We just came out of the AI topics, and I'll tell you, right now, the board is walking into the room asking everyone how they're leveraging AI, how much they're using AI.
And then at one point when they come up for air and someone says, oh, by the way, this is Mario, our CISO.
We wanted you to meet. Oh, okay. Mario, how are you using AI? And people are leaving the boardroom thinking, man, we need to show that we're using AI first.
And then somewhere on the sidewalk outside of the nice hotel, the CISO's saying, you know, it's kind of risky.
And then the C-suite's saying, oh, yeah, yeah, yeah.
By the way, make sure you secure it. It is a deep, far afterthought. And this is no surprise to most of the listeners:
companies are generating performance metrics, tying bonuses and retention to who's using AI the most.
And this isn't just... It's real. I mean, it's happening. Or you're fired. You get the message.
I'll give you another analogy, though, that is perhaps more unnerving, or should be more unnerving, to the board.
Imagine the outsourcing scenario from many years ago.
We have found a way to get cheaper labor. And our employees right now are too expensive.
So I need for you to train up the outsourced labor.
And you know you're training up your replacement. And so what sort of dynamic does that create?
I don't know if many organizations really did that well in the early days, because I think people wised up and said, you're asking me to train my replacement.
Okay, now every organization's saying, go use AI, but I want you to use the corporate AI tools, because what we need you to do is tell our AI systems how you're doing your job, so that I don't need you anymore.
And I think people are starting to wise up to that notion.
There is going to be a motivation for people to use their own personal AI systems and just produce the outputs that the company's looking for.
But I keep my own workflows and my own ways of doing things.
Boards really need to understand that when we talk about AI governance, it is not too different from the outsourcing model we had in the past, just at a much greater scale, where every job is now outsourceable.
And you're now asking your employees to train up some much cheaper offshore labor.
And that is not going to bode well once people realize that essentially the game's up: people are going to resist using the tools, or hide their use of them, and it becomes a whole different type of shadow AI.
I was talking to a CISO who said that in the last six months, at least three times, he's received a text message from the CEO saying, how can we take that security layer out, because it's hindering a business area's productivity through AI, right?
And so as a CISO, how do you respond to that? What do you do? I mean, but that's natural.
I mean, that's happened to us every decade; every year we have the same problem.
It just happens to be exposed to a greater audience now, right?
Like you can see this naturally happening with SaaS applications, with cloud infrastructures.
When AWS first came out back in the 2000s, not everybody was embracing it, right?
And people pushed back on it. But once the floodgates opened, you started noticing people not understanding how to use cloud infrastructure or SaaS applications. There was always this misconception, a misunderstanding, that just because it ran on-prem you were protected, that your mistakes were okay because they weren't exposed to the public Internet.
With AI, it's even worse now, right? Like there's nothing.
It's been a facade all along. I think that at the end of the day, it is a decision. We have to articulate the risks and say, you know what?
At the end, it's a business decision.
You want to do business in China? Of course I think it's risky.
Yeah, but I'll say that. I mean, if you wind way back to the Internet, right, there were a lot of CISOs saying we're not going to be connected to the Internet.
And then you fast forward to 2007 and they were the ones that weren't on AWS and all that, and they're all gone.
I mean, the successful CISOs were always the ones who were saying, hey, have you heard about this Internet?
It's awesome. Not just tech forward, innovation forward.
Cloud, we should do that. Here's why. Oh, security?
Yeah, I've been thinking about that too. They're business leaders. They're risk managers at the end of the day.
And that's what's going to have to happen on this little AI generation we're dealing with.
So are you all suggesting that CISOs should be front and center of innovation and figure out the security on the back end?
Or again, should they be pushing back and pushing harder to say, let's build in this governance as part of whatever we're doing?
It may be slower, but it's the right way to approach it.
I think the path to yes is we actually have to be tinkering with the technology itself.
It's like a vehicle, right? A car. If every department in a company is a wheel, and every wheel except security is round, you're the square wheel.
The CEO, who's the driver, is going to pull over and replace that square wheel with one that's actually round.
Sure, sure. That's fair.
That's fair. And if you train AI security on the old LLMs, then you'll fire the CISO of no and replace him with some automated no.
But what's really going to happen is you're going to take the star examples, the yes CISOs, and that's what's going to end up being automated.
I think this will get behind us. Let's move on to a slightly different topic.
So Sounil mentioned this notion of autonomic, and this report is called Autonomic Resilience for a reason: ultimately, what we're saying is that resilience in the traditional sense, what we believed resilience to be for a long time, was the ability to get back up, and to get back up quickly.
And we measured that as a way to be resilient. To me, I think now you've got to almost assume that you're going to have some kind of a disruption, some kind of a breach, and not just the fact that you're going to get up, but you're going to get stronger as a result of it.
You're going to build in some of the learnings from that to be a lot stronger.
And so that's kind of that notion.
It needs to be built automatically into the system. So where do you think CISOs are right now in terms of their thinking on resilience, and their ability to build the systems that allow them this new definition of resilience, which is really important given where we are today?
We had this conversation over lunch, Jerry and I, and it really depends on when in the company's life a CISO comes in.
Is it already brownfield, or greenfield? Because I would argue, if it's greenfield, in some situations like my case, right?
In a new startup, we're doing it from day one.
And the idea is it doesn't matter if there's a bad library that's infected.
I'll kill it immediately. I know where it's running from and what systems are running, and I'll just replicate the environment really quickly and come back within minutes.
The entire AWS deployment or Azure deployment or GCP deployment, including all the libraries.
But if you're in brownfield, which is most of the case, that's difficult.
That's where you might want to partner up with somebody who can do that.
Not to plug Cloudflare, but Cloudflare has been a leader in this space, quite honestly.
I introduced a mental model called the D.I.E. triad. Of course, we know the CIA triad. The D.I.E. triad stands for Distributed, Immutable, and Ephemeral.
But D.I.E. also means something as well. And the principles of the D.I.E. triad were really a way to think about resilience in the sense of how we repave, how we treat things as cattle and not as pets.
Pets are brownfield, and once you have a pet, it will always be a pet, and it will always remain a pet until it dies its own natural death.
But to your point earlier, the D.I.E. philosophy doesn't actually move the needle in terms of becoming better.
So what you're talking about is another principle called anti-fragile.
And anti-fragility, a term originally coined by Nassim Taleb, I think pushes us to ask: how do we make our systems better through things like chaos engineering?
And so I think what Netflix pioneered with chaos engineering is really about introducing mechanisms to find those things that you thought were cattle but are actually pets, and to identify them early, so that you have a chance to decommission a pet before it becomes a legacy asset that you can't do anything about.
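In the spirit of that cattle-versus-pets test, here is a toy audit sketch, with hypothetical inventory fields, that flags the instances a chaos-engineering random kill would expose: anything not repavable from a template, or referenced by its individual name, is a pet.

```python
# Toy "cattle vs. pets" audit (hypothetical inventory fields).
# Cattle can be repaved from a template at any time; pets are
# hand-configured or pointed at by name, so a random kill hurts.
inventory = [
    {"name": "web-auto-001", "from_template": True,  "referenced_by_name": False},
    {"name": "web-auto-002", "from_template": True,  "referenced_by_name": False},
    {"name": "legacy-db-01", "from_template": False, "referenced_by_name": True},
]

def find_pets(instances):
    """Flag instances that would not survive a random-kill experiment."""
    return [i["name"] for i in instances
            if not i["from_template"] or i["referenced_by_name"]]

print(find_pets(inventory))  # ['legacy-db-01']
```

The flagged instance is the one to decommission or re-template before it becomes the legacy asset you can't touch.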
Sounil always will infect your mind. I was telling this to my employees last week:
is it cattle or is it a pet? Pets, what are we talking about here?
I love that analogy, though. I think we should have this conversation with many of the CIOs and CISOs: is it cattle, is it a pet, right?
I mean, I think that's a great analogy.
Or pest, yes. Yeah, you know, I'll say, when I was a CISO, I was the chairman of FS-ISAC, right, the top cyber group for financial-sector CISOs, so I spoke to a lot of them. And there were these waves where you'd hear a buzzword come along, and I wasn't a Gartner subscriber or a Forrester subscriber, so I was in the dark all the time.
I had no idea what people were talking about all the time.
And resilience was the hot one about 12 years ago.
Out of the blue, it was all anybody walking into any meeting of CISOs would talk about.
Right before that, all we talked about was how security wasn't an IT problem.
Security shouldn't be under the CIO. That's something different.
And then all of a sudden, resilience came to the forefront because of ransomware, and it meant rebuilding Windows Active Directory when it got wiped out.
And CISOs wouldn't be caught dead anywhere around Active Directory or systems engineering or any of that.
So all of a sudden, we were the champions of rebuilding AD when we refused to even acknowledge the presence of AD.
And I think to this point, the majority of rebuilding from destructive attacks is in the domain of SREs, of engineers.
Security should step back and stop trying to take the glory there because security is not going to rebuild anything.
But where we can benefit from the whole idea is through red teaming.
And not about how we rebuild everything, but how we find the little gaps: the "I thought we blocked any lookups out of production, and we didn't allow C2, and we didn't allow DNS tunneling."
That's the stuff you find through red teaming. And it doesn't mean resiliency in the classic sense of rebuilding the building, but it does mean, you know, improving continuously on your controls.
One of the things that I feel is important, conceptually, for a lot of CISOs to understand, especially since we recently had a few disruptions from one, or several, of the hyperscalers, is the notion of decoupling.
We're so dependent on all of these things that are interconnected with each other. From an architecture perspective, just looking at your architecture and trying to figure out how to decouple some of the risk: is that something you're seeing companies do, CISOs do?
Is that something that's important in the notion of resilience to kind of consider?
Yeah, so the challenge of decoupling, of course, is the interfaces between the things that need to be coupled back together.
And what I found, there was a story that I saw recently that I thought was fascinating.
Someone was running some service on Vercel, and they said, you know what, and this is really a commercial for Cloudflare, okay, so full disclosure:
we're big fans of Cloudflare. They said, you know, moving to Cloudflare is the logical choice, but the interface is too difficult for us to navigate.
So they pointed Claude Code at it and said, hey, Claude Code, here's our Vercel app, move it over to Cloudflare, and voila, somehow they made it work, with some small tweaking.
So in other words, the point is that whatever service they had on Vercel, they were able to decouple it and recouple it to anything else, okay?
It doesn't have to be Cloudflare.
So I think the challenge we've had in the past is building our own resiliency. Resiliency is not the job of Cloudflare.
If I'm an app owner, it's my job to make sure that regardless of whether Cloudflare or us-east-1 or whatever goes down, my application is still up and serving, right?
But the recoupling is the hard part.
Enter Claude Code and these other tools, and all of a sudden, what was hard is much easier again.
The new middleware. Well, you know, I'll give an anecdote from financial services.
We did a lot of systemic risk work. You were involved in that in your Bank of America days as well, Sounil.
And we talked about contagion risk.
That came out of a non-technologist cabal of legislators. And we ended up in these conversations of like, well, we're not going to catch malware from clearing trades through the clearing house.
What are you talking about?
You don't get it here. And it was a lot of diplomatic work to fight that wave because it would have really stymied growth.
So things can get out of hand really quick.
And ultimately, I remember writing this whole coordinated paper with all the clearing house and exchange CISOs about unassailable protocols and why some things were okay and we didn't have to worry about them.
And we had to do that and stay ahead of regulation really becoming stifling.
But when it comes to risks that are fungible services, really wide open, and CDNs are certainly in there, and cloud service providers, and Cloudflare is all of the above right now, I can tell you anecdotally that when we as a startup went through our third-party risk management and identified our vendors, a bunch of them were the big names.
And on those, the mitigation strategy was shared fate. And the idea was, if Cloudflare's down, no one's really coming for an interview for us right now.
And that sounds kind of silly, but there's something to that. That said, I think it's really interesting philosophically for a company like Cloudflare where as long as there's some kind of parity from your competitors, then the big enterprises will say, cool, we're going to distribute.
We're going to have resiliency because we're going to use Lambda and we're going to use Workers, and so we can fail between them.
But the minute you guys innovate too much, like that's great, but I don't want to quite adopt that cool new feature because I can't fail out of it.
And that's going to be an interesting backroom conversation. Well, I mean, I think the other thing we're finding is that outside of financial services and maybe a couple of other industries, a lot of CISOs don't want to add the complexity of decoupling. They would rather take the risk and accept the downtime if they have to, because the added complexity means additional resources, additional skills.
Again, the coupling and the decoupling and the recoupling, all of that is something that's hard for a lot of average cybersecurity professionals.
You couldn't pay me enough to be doing BCP or DR. Honestly, in any job interview: no, I'm not doing that.
Just forget it.
I'll throw in there, I think we're all risk managers when we're in a boardroom, right?
Everybody's a risk manager. And I think good risk managers are known for the risks that they mitigate or avoid.
And great risk managers are known for the ones that they take.
And risk balancing is an equation and you just kind of dance around that.
Maybe sometimes the math comes out and says, let's just roll the dice here.
Let's go. And if you portray it like that, people will think you're nuts.
But if you show the math and you say, yeah, it's worth being monolithic.
It's worth being a single source. When Cloudflare is down, I always used to say there will be blood in the streets.
And everybody's impacted. People got it.
That's your point, right? Everybody else is impacted as well. As long as you're not the one left standing there, you're the only one, you're fine.
All right, there we go.
Okay. You mentioned third parties, Jerry, and I think that's an aspect that a lot of companies are aware of and think about.
But there's not just third parties.
There's fourth. There's fifth. There's sixth parties or whatever else.
And it becomes really complex really quickly. How do you advise CISOs and cybersecurity professionals to think about this nth-party risk, as a way to articulate it and, of course, mitigate it in the complex ecosystem that we live in today?
Well, I mean, I think entities, and that's writ large from Java libraries to suppliers, aren't bad for who they are.
They're bad for what they do, right?
And we just try to paint everything with a broad brush a lot of times.
But we need to analyze the relationship of a vendor. So it's not, oh, I'm using this company and I pay them $2 million a year, and so therefore that's the risk.
It needs to be: I'm using them and I'm giving them my data, or I'm inviting them to come in, or their employees are being seconded to my shop, or I'm relying on them and the minute they go down.
Those are all different risks that have different threats behind them and they should have different questions that come out of that.
So third party risk management should be about, well, I've analyzed the relationship, the statement of work in many cases.
And based on that, here's what I'm worried about happening.
So here's some tactical things. Maybe there's only 10 of them instead of the 300 questions we hit everybody with.
And time and time again when I talk with people about that, they say, yeah, exactly.
It's just theater right now.
It's a mess. So, you know, you rely on a third party, and the fourth and fifth parties are under them.
It's all about the scenario you're worried about.
I'm worried about: you're going to have my data, so where's it going to go? I'm worried about your data leakage, and then it gets into their background checks and their TPRM for data, and so on.
And we generally take a view or I take a view at least that automating third party risk management with AI is just automating a bad process.
And I think ultimately, I don't know if we will solve this problem through the SaaSpocalypse or something like it, but it does beg the question.
Eventually we want to get to the point where we could say, what third party?
It's all first party.
We now have the opportunity to make as much of it as possible first party.
Threat intelligence. One of the things that we talk about in the report is that it's been reactive for the most part.
I mean, the way we do threat intelligence is we get all these feeds and then we check the box to say, okay, where are we with respect to this threat, right?
And so, of course, we need to move away from that.
Again, reports suggest, and our data also suggests, that exploits are following closer and closer behind the vulnerability being disclosed.
There's the 22-minute figure, and a lot of these numbers being thrown around.
The question is, how do you reimagine threat intelligence?
What does that mean in the context of AI, where exploitation takes minutes, while on average a company takes five weeks to mitigate a vulnerability?
So how do you manage threat intelligence? So let's consider the first thing.
There's going to be a lot more threat intelligence than we've ever seen before.
So the better approach is really something that we've struggled with for a while, which is let's start with a threat model.
If I live in Atlanta or the East Coast, my threat model includes hurricanes.
However, if I live in Alaska, I don't think so.
So if I see in threat intelligence about a hurricane and I live in Alaska, why do I care?
And so I think in the context of having a threat model, which has up until recently been difficult, the threat model is really the first filter.
And now that we do have AI systems that can help us generate a threat model, one that at least is decently good enough, that actually should serve as an initial filter for all the threat intelligence that we see.
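The threat-model-as-first-filter idea can be sketched very simply. The following is a hypothetical illustration, not any speaker's actual tooling; the profile fields and the feed-item keys (`target_industries`, `affected_tech`, `objectives`) are assumptions made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatProfile:
    # What plausibly applies to *this* organization (all fields illustrative).
    industries: set = field(default_factory=set)
    technologies: set = field(default_factory=set)          # software we actually run
    adversary_objectives: set = field(default_factory=set)  # e.g. "payment-fraud"

def relevant(profile: ThreatProfile, intel_item: dict) -> bool:
    """Keep an intel item only if it intersects our threat profile."""
    return bool(
        profile.industries & set(intel_item.get("target_industries", []))
        or profile.technologies & set(intel_item.get("affected_tech", []))
        or profile.adversary_objectives & set(intel_item.get("objectives", []))
    )

profile = ThreatProfile(
    industries={"retail"},
    technologies={"postgres", "salesforce"},
    adversary_objectives={"payment-fraud"},
)

feed = [
    {"id": "T1", "target_industries": ["airlines"], "affected_tech": ["sabre"]},
    {"id": "T2", "target_industries": ["retail"], "objectives": ["payment-fraud"]},
]
# Only items that intersect the profile survive the first filter.
filtered = [item for item in feed if relevant(profile, item)]
```

The point is the shape, not the code: the profile acts as a cheap first-pass filter so that only intelligence an adversary could plausibly aim at you reaches a human or an agent.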
And why does an attacker want to attack you specifically or your industry, right?
Is there money involved in it or is it state espionage?
Whatever that is, you've got to figure out what game you're playing, which is basically what Sounil is talking about.
In our case, one of the things we looked at in threat intelligence was what people are paying for secrets, username and password combinations, on the dark web. How much are they paying for ours, right?
And so you start figuring out and you start playing a market game here.
If the price goes up for those usernames and passwords for your particular application or company, then you know somebody's actually interested in the information you have, and you're like, holy moly, now I'm getting more and more targeted.
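The market signal Mario describes could be tracked with something as simple as a trailing-average jump detector. This is an illustrative sketch only; the prices, the `jump_ratio` threshold, and the idea of weekly sampling are all assumptions, not anything from the conversation.

```python
def price_trend_alert(weekly_prices: list[float], jump_ratio: float = 1.5) -> bool:
    """Flag when the latest observed price for our credentials jumps sharply
    versus the trailing average: a hint that demand, and therefore targeting
    interest, is rising."""
    if len(weekly_prices) < 2:
        return False
    baseline = sum(weekly_prices[:-1]) / (len(weekly_prices) - 1)
    return weekly_prices[-1] >= baseline * jump_ratio

# $ per valid username/password pair observed in marketplace listings
spike = price_trend_alert([4.0, 4.5, 4.2, 9.0])   # latest price well above trend
quiet = price_trend_alert([4.0, 4.5, 4.2, 4.4])   # no meaningful jump
```

A real pipeline would feed this from dark-web monitoring data; the useful idea is treating price movement, not the absolute price, as the targeting signal.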
Yes. Yeah. I think on all this, I mean one cool little optimistic anecdote here.
So the time from not just a vulnerability being known about, but even just a patch being issued, to it getting reverse engineered and then exploited has, as you said, compressed.
But what's really cool is that at about the same time when the adversaries have weaponized something, so have the bug bounty researchers, which is really important because the way that it works right now is a tweet comes out, you're at some big Fortune 50 company and you're just asking questions of subsidiaries in countries you've never been to.
Hey, do we use, you know, mario.jar or whatever, and you just start there.
And if you continue on that trajectory, you'll figure it all out in about six years, right?
But when the bug bounty comes in and says, here are these 14 things I've compromised right now, they do all your work for you.
And the adversaries do it kind of quick too except that they're picking 14 targets out of the whole world.
So your statistical likelihood of actually finding it before it's exploited is pretty good as long as you're paying these bug bounty researchers.
But back on the broader topic, right, you have strategic, operational, and tactical intel and the feeds being tactical.
I think everybody's kind of done with that. It was kind of a fad.
And what we need to be good at is really automating the strategic intel processing.
I think that's what Sounil described. We call it a threat profile, because "threat model" got waylaid into AppSec so much, with the data flows and all that.
But a threat profile of a company is a collection priority in the old CTI parlance, right?
It's a collection filter to say not just where I am because people could do that or am I worried about this nation state or another?
Those are all the wrong way to do it.
It's more about the objective of the adversary. What do they want out of it?
And everybody thinks that their IP is super valuable. You need someone to say, not going on the record, but it's not. Grandma and grandpa's information may not be that valuable either.
Well, I mean, here's what we're seeing. We're seeing industries being targeted specifically based on the systems and the vulnerabilities those systems have.
So, for example, airlines are being targeted right now specifically based off of the systems that they have.
And it's not just one airline.
If you do it for one, you can actually replicate it pretty easily for others and so on.
And so I think that that threat modeling has an industry component to it.
And of course, there's an individual company component to it as well.
In addition to it, that's going to kind of further enhance. Okay, let's move gears.
One last question before we move on to the rapid-fire questions. Based on all of your individual expertise, imagine yourself as a board member.
What is one question you want to ask of your CISO that would give you a fairly decent understanding of where the organization is in terms of resilience?
For me, it would be what are we worried about, which is kind of what we've talked about repeatedly here.
And what I'd be listening for is are they going to say, well, there's a hack every five seconds in the world and everyone wants all of our data and people are trying to steal the identities and say, okay, this person's been trained on the LLM.
What I want to hear is people really worry about intellectual property.
Honestly, you're the boss and you know better, but I think adversaries aren't really trying to get to our algorithms here.
But what I am worried about is a bit of fraud. We lost $50,000 in a wire last quarter and we could have that happen 10 times really fast.
I'm worried about transactional fraud affecting our customers because it'll drive them away.
So I want to hear that business echo, but the question is, what are we worried about?
I would say what keeps you up at night for a CISO. Maybe really direct.
I wouldn't do it in front of the rest of the board. I'd take you out for drinks and then we'd talk about it. So what's the good answer to that?
What's the bad answer? Well, I mean, you'd want to know how they're dealing with it and whether they're properly funded.
So this is your opportunity, CISOs, to actually say, well, you know, this is what I'm worried about and it's what we need.
So I created the measurable baseline.
It's essentially what I call my pets to cattle ratio. And it is something that is actually measurable across every organization.
And there's a certain value, mathematical value, that tells you how resilient you actually are.
And so it requires a little math, but if I were a board member, and I am actually for a couple of organizations, I don't think they would necessarily understand how to do it.
But that said, the ability to calculate this is actually a measurable quantity if you can determine how many pets do you have, how many cattle do you have, and what's the ratio.
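As a rough illustration of the ratio Sounil describes, one could compute it over an asset inventory that records whether each system can be rebuilt from automation (cattle) or must be nursed back to health by hand (a pet). The `rebuildable_from_code` field is a made-up stand-in for whatever signal a real inventory provides; this is a sketch of the idea, not his actual formula.

```python
def pets_to_cattle_ratio(inventory: list[dict]) -> float:
    """Lower is better: fewer irreplaceable, hand-built systems relative to
    systems that can be destroyed and rebuilt from automation."""
    pets = sum(1 for h in inventory if not h["rebuildable_from_code"])
    cattle = sum(1 for h in inventory if h["rebuildable_from_code"])
    return pets / cattle if cattle else float("inf")

inventory = [
    {"host": "legacy-erp", "rebuildable_from_code": False},  # a pet
    {"host": "web-01",     "rebuildable_from_code": True},   # cattle
    {"host": "web-02",     "rebuildable_from_code": True},
    {"host": "ci-runner",  "rebuildable_from_code": True},
]
ratio = pets_to_cattle_ratio(inventory)  # 1 pet over 3 cattle
```

Because the inputs are just inventory counts, the same calculation can be repeated quarter over quarter and compared across organizations, which is the property the panel values in it.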
One thing board members tell us when we talk to them is that they're sick of cybersecurity leaders coming up with a different metric to measure effectiveness every time, right?
And so it's like every quarter I've got a different thing to understand and adjust and so on.
So if you have a measure like that, that quarter over quarter, you can measure and say, okay, here's where we are on that journey.
Here's what we're doing to get to the next level, etc.
That would definitely help. It's a measurement that can be baseline or benchmarkable across industries as well.
I would be concerned if a CISO presents a traffic-light risk scenario: green, yellow, red.
That would freak me out if I'm a board member.
All right, let's flip this around now. If you're a CISO, how would you articulate what you're doing and move away from the old man yelling at the clouds kind of a notion to saying, how would you articulate effectively the risk posture of your organization?
So I have a little proprietary, you know, dreamy metric too, Sounil.
I call it remediation agility, but binding together all the topics.
The idea is that the reason why CISOs bring in a different metric is because they have vulnerability management teams, which are insane to me.
It's the stupidest thing I've ever heard of. Because if you talk to a board director about vulnerabilities and you give them stats on patching and, ooh, you're just out of time, next quarter you come back, they think you were talking about risk.
But now you're talking about AppSec. Wait, wait, wait, I thought we were talking about risk.
No, no, that was vulnerabilities. Vulnerabilities are risk as far as they're concerned, right?
It's bizarre. So we need to be agnostic to not only tools, but also to classes like that.
Configuration errors, bad procedures, human error, vulnerabilities, all of them are very important, but you don't have to draw any distinctions.
So our remediation agility measure covers all of the above risks that are critical or high, and that's really hard to get agreement on.
You have to have a rubric where everybody agrees.
And it can't be about CVSS scores. It's got to be about getting hacked tonight.
For stuff that can get us hacked tonight, are we closing it within the SLAs?
And what you see when you measure that, when you see the red, that's the stuff that we're going over SLA.
Then the obvious actionable questions from the board are, are the SLAs too tight?
No. Well, then why weren't we able to fix things?
Right? And then quarter after quarter you have the same conversation, but one quarter the answer is code freeze, next quarter it's M&A, the quarter after that it's some kind of major technological upheaval.
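A minimal sketch of the remediation-agility idea Jerry outlines: across all classes of critical and high findings (vulnerabilities, misconfigurations, bad procedures alike), what fraction closed within SLA? The field names and the SLA day counts below are assumptions for illustration, not his actual rubric.

```python
from datetime import date

# Assumed SLAs for the "can get us hacked tonight" tier (illustrative values)
SLA_DAYS = {"critical": 7, "high": 30}

def remediation_agility(findings: list[dict]) -> float:
    """Percent of critical/high findings closed within their SLA.
    Lower-severity findings are deliberately out of scope."""
    in_scope = [f for f in findings if f["severity"] in SLA_DAYS]
    if not in_scope:
        return 100.0
    on_time = sum(
        1 for f in in_scope
        if (f["closed"] - f["opened"]).days <= SLA_DAYS[f["severity"]]
    )
    return 100.0 * on_time / len(in_scope)

findings = [
    {"severity": "critical", "opened": date(2024, 1, 1), "closed": date(2024, 1, 5)},
    {"severity": "high",     "opened": date(2024, 1, 1), "closed": date(2024, 3, 1)},
    {"severity": "low",      "opened": date(2024, 1, 1), "closed": date(2024, 6, 1)},
]
score = remediation_agility(findings)  # one of two in-scope findings closed on time
```

The design choice worth noting is class-agnosticism: the metric never asks whether a finding was a CVE, a config error, or a bad procedure, only whether something that could get you hacked tonight was fixed inside the agreed window.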
So mine's a simpler answer, and I'll just double down on the pets and cattle analogy.
Today many of us are veterinarians, and I think in the future I would rather be a pet control officer.
Pet control officer. I like that term. My job is only to make sure that people are very deliberate when they want to adopt a pet.
All else gets slaughtered.
One pet per family. So is it safe to say you're not a vegetarian, Mr. Yu? All right. Let's move on to the rapid-fire questions. Yes. So I'll ask the question, and each one of you, depending on your context, answer it. We'll start with Mario, then go to Jerry, then to Sounil, and we'll flip around after that.
If you had to describe the current state of cybersecurity in one word, what word would you use?
Mayhem. Frenetic. Irrelevant. Irrelevant?
I would like an explanation. It's kind of what we talked about earlier with the AI motion.
It is, I don't really care what cybersecurity concerns you have. It is an existential issue for us to go and use AI.
Who cares about the security issues?
So it's irrelevant. Wow. Okay. So do you guys agree? No. Well, it's our industry.
It's our jobs. So we're going to, we have a whole conference and industry who wants to fight.
The way I see it is it's our last chance to get it right before it gets hyperscale.
But whatever we do is going to get hyperscaled. At the same time, Google has a whole narrative around what's called CodeMender, which is: we're going to build software so that it is free of the common, well-known vulnerabilities that get baked in.
Jerry, maybe we start the next one with you.
What's the one buzzword in cybersecurity that you wish we could retire?
It's not just one word, but who cares? It's not if, but when. Fatalism. I would say an acronym, TPRM.
All right. Can I change my answer? I would say the basics.
That's a buzzword you want to kill is talking about hygiene and basics being so basic.
That's right. Exactly. What people consider the basics are really actually not the basics.
It's much harder than the basics. Of course. Yeah, makes sense.
All right. And it keeps changing. The basics keep changing. The basics used to be AV and firewall.
It's not the basics anymore. Sounil, next one for you.
What is the biggest obstacle in cybersecurity today? Complacency? Scale? Complexity?
Or something else? Complexity. Yes, complexity. Much more. I think it's consistency.
Okay. Explain. Just determinism and how people adjudicate risk around cybersecurity.
Doing not only the same thing analyst to analyst, but perhaps the same thing company to company.
Next one. And maybe start with you, Mario. In five years, which is a lifetime now, corporate leaders will need to be fluent in X.
Well, I don't know about what X is, but I would suggest that there will not be a CISO or CIO as we know it.
Oh, man, it's so hard to not say something AI and Mario avoided it.
That was amazing. Yeah. Fluency of the next generation in business leadership.
I would say risk management. I'll make it pretty simple. My assertion is there are going to be no more individual contributors.
Rather, every person is going to know how to manage a team of agents.
And so, what every corporate leader needs to be conversant in is how do you manage an organization that is a hundred, maybe even a thousand times the size of what you have today.
And non-human.
No, I don't know if I'd necessarily say it's non-human. But, you know, the analogy would be: if you run a 10,000-person organization, imagine the skills you'd need to run, let's say, a Walmart-sized organization.
We're talking minions. That's what he's talking about. His minions.
How do you manage their minions? But it is a wholly different set of skills.
Yeah, absolutely. Yeah, absolutely. Agreed. Last question. We'll start with you, Mario.
What is the one book, podcast, or show that is surprisingly relevant to how you think about cybersecurity?
Grit. The book. And why do you think that is?
You need skills to be able to manage really tough conversations and situations, and you need to get up the next day and do it all over again and be emotionally stable, which I'm not saying I am.
Not good. The Adversarial Podcast.
Okay. I know it seems kind of easy. Nice plug. But the name is about that.
It's about being, you know, contrarian and testing popular narratives. Well, I mean, I think to your point, it's the notion of learning and then unlearning, right, which is important; you've got to have both perspectives in view as you move forward.
So I read about 50 books a year, so I have a lot to choose from.
If it were around cybersecurity, I'll plug my own book, Cyber Defense Matrix, which hopefully, I don't know if you can get a shot in the background there.
If it's about AI, oh, here it is, yes. Wow. If it's about AI, the book would be Blood in the Machine.
It's a story about the Luddites.
And after you read the book, I think most of us will want to become a neo-Luddite.
And then if it's about risk management, the landmark book that I would recommend everyone read is Thinking, Fast and Slow by Daniel Kahneman.
Well, thank you, gentlemen.
This was a pleasure to have you on. As usual, I know you had a lot of fun.
I did as well. But thank you again for being here and thank you for investing your time.
And hopefully this was a useful conversation for a lot of C-suite leaders who are watching and listening.
Thank you. Thanks for your time. Thanks for having us.