The risk compound: Why tech debt is your greatest threat
Presented by: Khalid Kark, Mark Hughes
First aired: May 5, 9:00–10:00 UTC−4
In this episode of Security Signal, Cloudflare Field CIO Khalid Kark sits down with Mark Hughes, Global Managing Partner of Cybersecurity Services at IBM, to discuss the shifting paradigm of enterprise risk. As attackers trade traditional playbooks for AI-accelerated strikes, the conversation moves beyond simple threat reports to address the strategic challenges facing the C-suite.
Key Discussion Points
The velocity of AI threats: Mark explains how attackers are using AI not to reinvent the wheel, but to spin it faster. From sophisticated reconnaissance to high-fidelity phishing and rapid malware development, the time-to-attack has shrunk significantly.
Legacy debt as a business risk: The duo dives into technical debt and how legacy infrastructure creates a dual risk: vulnerability to hackers and availability risk (the danger of systems simply failing due to age).
The shift to autonomous security: Mark paints a picture of a future defined by "Thin IT" and autonomous security. He describes a world where AI agents collaborate to remediate threats in real-time, moving from reactive reporting to ambient, proactive defense.
The quantum deadline: With Cloudflare and IBM at the forefront of post-quantum cryptography (PQC), the discussion highlights why crypto-agility is a present-day requirement, not a futuristic goal.
Governance vs. compliance: A critical distinction is made: CISOs must move from the "compliance" subcommittee to the "risk and strategy" committees to be truly effective.
Khalid Kark is a globally recognized technology strategist and Field CIO at Cloudflare, where he works closely with C-suite leaders and board members to shape secure, scalable, and resilient digital strategies. With over two decades of experience at the forefront of technology leadership, Khalid helps organizations navigate the complex intersection of business innovation, cybersecurity, and enterprise transformation. Previously, Khalid led Forrester’s Security & Risk and Technology Leadership practices and served as Global Managing Director of Deloitte’s Technology Leadership Program and chaired Deloitte’s Tech Eminence Council to elevate thought leadership in AI, cybersecurity, and digital innovation.
Mark Hughes, Global Managing Partner, Cybersecurity Services, leads IBM’s team of thousands of experts in helping organizations transform security into a business enabler and establish cyber resiliency. His role spans the sales and services delivery of threat detection and response, data security, cloud security, IAM, infrastructure, risk management, and ecosystem partnerships. Mark’s cybersecurity career spans over two decades, including recent roles as President of Security at DXC Technology, a Fortune 500 global technology services provider, and Chief Executive at BT Security, a leading global telecommunications provider. Mark has served on national boards, including the Cyber Growth Partnership for the United Kingdom, and the World Economic Forum’s (WEF) Global Cybersecurity Board.
Transcript (Beta)
We have a challenge managing cryptography today, regardless of what's going to happen in the post-quantum era. I would advise any organization: yes, the quantum situation provides a compelling event, because there is going to be vulnerability in cryptography that doesn't exist today.
But the reality in terms of actually managing cryptography, which underpins what you do about when that happens, is here and now.
Hello and welcome to the Security Signal podcast for Cloudflare.
I'm joined here today by Mark Hughes, who's the Global Managing Partner, Cybersecurity Services at IBM.
Welcome, Mark. Thank you. Thank you, Khalid. This is a podcast that's meant to be a conversation across the C-suite.
One of the things that we've found is a lot of the reports, the threat reports, the cyber security reports are directed to the cyber security leaders.
We want to make this conversation a lot more about the broader C-suite and the issues that matter to them.
We're going to delve deeper into this notion of legacy debt.
And I know in your role, you of course deal with a lot of traditional organizations, large complex global entities that have a lot of legacy infrastructure and capabilities.
And it feels like it's a lot harder to deal with it for them compared to a digital native company.
One of the things that we're seeing from our perspective is that attackers aren't really reinventing the playbooks.
They're just accelerating the traditional attacks through AI. And so maybe we want to start with the first question for you, which is, what are you seeing across the board in terms of companies dealing with legacy infrastructure and environments?
And how is AI impacting the threat landscape for them?
I'm going to start with how AI is impacting the threat landscape. So it's absolutely great to be here.
It is a subject that's very close to my heart. I've been a CISO myself in a large multinational organization where tech debt was really high on the agenda.
In terms of how AI is reshaping, I think, that threat landscape, absolutely it's about using the techniques that are being used now to help speed up.
What I mean by that is fairly straightforward stuff, the reconnaissance activities.
So now it's about gaining a deeper understanding, deploying AI tooling against targets to actually do that reconnaissance, to do deeper research, to really formulate attack plans just as they could before.
But now, as I said, that can happen a lot more quickly.
It does a few other things as well in terms of speeding up.
So underneath that, we see a lot more collaboration happening within those communities as well.
They are able to form groups now in a way they couldn't before.
Of course, the other thing which is fairly clear is that they have access to coding tools as well and vibe coding and the like.
And so a lot of the malware development based upon that reconnaissance is also now speeding up dramatically as well.
So speeding up is a net result, but there's different techniques that are being used as they deploy AI to be able to then target organizations.
And then there are some fairly simple things which are fairly obvious, and we've all pretty well been witness to them, like phishing emails.
Up until a few months ago, maybe 80% of phishing emails you could pretty well spot as being phishing emails.
That's now come right down to just a very tiny percentage, because they're so well researched and so contextualized.
So all of these things are now gaining a bit more accuracy, as I said, but certainly more speed.
So when you launch that now against organizations that are essentially running a similar security posture to that which they've been running for many years with all that technical debt, it's tricky.
The game has changed, as is often the case in security. I'm sure in your role, you're dealing with a lot of CISOs and their conversations with the rest of the C-suite, et cetera.
How are CISOs articulating this new threat landscape, in the context of tech debt, but also the broader threat landscape changing?
How are they articulating that to their boards, to the rest of their C-suites, et cetera?
It depends. Different organizations have to think about it in different ways.
The way I see the best out there articulate it: security risk, more broadly, is still a subject that many C-suite executives find quite hard to get their heads around.
It's very fast-moving, with lots of changes, as I've already said, in terms of AI and the like.
But it's also something which is by its very nature often quite technical.
But I will say to anyone that I talk to, and from my own experience as well, which is look for adjacent risks.
Most organizations have a landscape of risk management.
Every organization, every mature organization, has governance controls in place to think about risk, whether or not they need to do certain things to become more competitive, for example, or certain things to manage regulation, depending on which sector they're in.
Every organization, certainly every large organization, is going to have some form of risk management.
In the technical space, there's a lot of risk that sits in technology, and technology underpins most large enterprises.
So there's a lot of technology risk that's there already, which organizations are pretty used to and well-versed in how to manage.
And end of service life is one of those risks, which is born out of technical debt as well.
So whilst technical debt can pose security risk, it also poses availability risk for an organization, whether it's attacked maliciously or not.
It just fails. What I often say to many CISOs is, look, think about that adjacency.
Think about where there's already a risk from technical debt to do with end of service life and a service interruption, which is very high on most organizations' agenda.
And the interventions, and the proportionality of the controls needed to manage the security risk of technical debt, the vulnerabilities that emerge because the technology is out of date, often lead to a net outcome similar to what would happen if those end-of-life pieces of technology simply failed in any case.
And that gives a very good starting point at the C-suite to actually quantify what needs to be done, because that's often the big challenge.
In fact, what I often think is, in many areas of security, one of the biggest challenges is that the risk approach is one of downside risk.
Most of what you have to invest in is to stop something from happening and stop something from going wrong.
Conversely, in many other risk areas, the ability to be more competitive, to develop new products, there's obviously risk associated whether they're going to be successful or not.
But the starting point is doing something which is going to increase the top line, is going to drive more business.
Downside risk is inherently much more difficult; as people sitting around a table, we just don't weigh it in quite the same way.
And when it comes to tech debt, the end of service life risk that comes with that is a downside risk that is well understood, that most CIOs manage extremely well and have a pretty good idea about what they need to do and where they need to invest.
So I say, use that, translate it directly across into the security space.
There are some different considerations, don't get me wrong, but that's a very good starting point because it gives a good sense of how much it could cost if it goes wrong.
We've seen a huge spike; the IBM X-Force data shows a 44% spike in attacks on public-facing applications.
And so that spike is almost forcing CISOs to say, it's not just about risk management in the context, the traditional sense of risk management, it may actually lead to availability issues which may lead to top-line revenue and so on.
And so translating that becomes really important for a CISO. Are you seeing CISOs being able to do that effectively?
How would you rate where CISOs are in terms of making that leap from "tech debt is good to deal with" to "it's almost imperative to deal with it"?
Because again, the amount of risk we're leaving on the table by not addressing it becomes an issue, not just for the bottom line, but for the top line as well.
It's a great question. And it's one which through our Threat Intelligence Index report, we're seeing that spike.
And the thing is now, we almost see lots of different things happening here.
The most recent report showed, for example, that attackers are often using known credentials; they don't necessarily have to discover those credentials.
They're using known credentials which have perhaps been harvested in other attacks.
And that is, again, back to my point about lots of reconnaissance, lots of detailed reconnaissance allows threat actors to build that type of picture and then target, conduct scanning now at a scale in a way in which they haven't been able to do before using AI tools, which then targets those vulnerabilities.
We're coming to the question though, about how a CISO really thinks about that.
I think there's an opportunity, actually, which is to say, as is often the case in security, the landscape changes because most of what happens in security is done to you by threat actors.
And so as that changes, the risk equation has moved on, and it has moved on to the extent that the likelihood has now gone up.
It's still imperative to understand what the impact actually is to an organization.
But clearly, multiplying those two factors together leads you to a much higher dollar-value number.
And being able to get to that, and then articulate it alongside, as I said before, those other risks, puts you in a position where it becomes very clear that you've moved from a risk that was manageable and controllable to one which I often put into the bucket of now out of bounds to manage proactively, where you then have to take significant, systematic steps within the organization to address it.
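To make the likelihood-times-impact framing Mark describes concrete, here is a minimal sketch of translating a tech-debt risk into a dollar figure that can sit alongside other enterprise risks. All figures and names are illustrative, not from the conversation:

```python
# Hypothetical expected-loss calculation: probability of the event
# multiplied by its cost. The numbers below are made up for illustration.

def expected_loss(likelihood: float, impact_usd: float) -> float:
    """Annualized expected loss for a single risk scenario."""
    return likelihood * impact_usd

# Same legacy system, same impact; AI-accelerated scanning raises the
# likelihood, so the dollar-value number the C-suite sees goes up.
before = expected_loss(likelihood=0.05, impact_usd=2_000_000)  # 100000.0
after = expected_loss(likelihood=0.30, impact_usd=2_000_000)   # 600000.0
```

The point of the exercise is not the exact probabilities, which are hard to pin down, but that the same equation the CIO already uses for end-of-service-life risk can express the security risk of the same technical debt.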
Let me ask you this. On the flip side of it, the boards and the rest of the C-suite leaders are very much pressuring the CIO to deliver value of their AI investments.
And a lot of times that means either pushing the boundaries around risk management or sometimes, again, some CIOs and CEOs are telling their CISOs to reduce the amount of protection to be able to go faster.
So how do you see the rest of the C-suite dealing with that notion of risk management and specifically in terms of AI risk management, what is their stance on CISOs?
Well, it's a great question, and there's a really interesting piece of insight here. Of course, one of the things most of the C-suite and most organizations want to do: they can see that adopting AI into their organization can really accelerate not just their business process and transform how they operate, but also really transform their interaction with their clients and their customers as well.
But they also understand that adopting those tools into an enterprise comes with risk.
And that is well understood. And of course, there are some helping hands now in regulation and government guidance, and other frameworks like the OWASP and NIST guidance around AI adoption and AI security.
So I see that as an opportunity. I see that as an opportunity where the security, the CISO can really play a very significant role now in getting in amongst those governance decisions.
There is an absolute awareness that AI does come with risk and that therefore, as agents get deployed and data is ingested into the models, that that has to be controlled.
And data security, access around agents and Zero Trust principles, they are all well known and well understood.
So really as security people, then we have an opportunity to really accelerate a lot of those principles in that C-suite discussion and say, hey, look, these things that we need to do, we really do need to do them.
We really need to build them in ground up because that will really now unlock the benefit of that AI deployment.
And I think that is really very prevalent in most of the clients that I talk to in a way which with other technology adoption, it really hasn't been.
A lot of other technology was quite focused on infrastructure, changes in infrastructure, for example, cloud adoption, which was revolutionary, but not quite the same way in which AI is going to revolutionize business process and value and top line value for many organizations.
That's where the difference is. Risks are understood in principle.
Now it's really, I think, up to security professionals to get in there and seize that as an opportunity.
And of course, with that comes the ability to use AI to be able to help to do that.
And that's the other thing which I think is a massive opportunity for us.
So I see organizations struggling with that, different levels of maturity.
I would say that that is underpinned by the level of AI adoption awareness in those organizations, regardless of what their starting point from a security posture is.
It's more about adoption.
Let me put you on the spot for a little bit. What percentage of your clients do you think have the C-suite really understand the risks that AI can bring to the table as they're pushing hard to get AI to really drive value for them, drive revenue for them, et cetera?
Would you say it's a majority? Would you say it's less than a half?
I'd put it as around a half. So I think that it's actually, I'm quite optimistic about many organizations understanding that there are risks associated with doing this.
And therefore, because I see it not manifesting itself in necessarily a security discussion, I actually see it manifesting itself in how AI adoption happens in those organizations in any case.
Those organizations that are thinking about where to focus AI adoption in their businesses and to really where they're experimenting and how they're experimenting, when they do that and they're thoughtful about that, then they're thoughtful about managing risk as well.
Where do you think third parties come into play?
Partners. Of course, you're a big strategic partner for a lot of companies.
Of course, Cloudflare does that as well with a lot of our clients. Where does that fit into, one, being able to understand the risk, two, being able to articulate the risk, and then three, being able to mitigate that risk?
And so across those dimensions, where do you see the role of partners and how much of this, and of course, it's going to vary by company and by industry and all of that, but going forward, let's say two, three years from now, where do those strategic partnerships come into play and where do you think they could make the biggest impact?
I think it's absolutely foundational.
Every organization in any walk of business, walk of life, has some form of partnership arrangements in place in their IT ecosystem.
It's an ecosystem by name and that's why it's there. So partners, I think, play a really, really important role.
One of the roles is just making sure that from an enterprise organization's standpoint, understanding where their risk is within that partner ecosystem.
That is still very challenging for many organizations.
Again, a lot of AI and we're developing a lot of AI tooling in IBM that helps discover where those risks are and allows organizations to manage that much more practically with much greater transparency.
So I think that's one angle.
I think the other angle is partners come with knowledge and come with expertise and more to the point, come with tooling, right?
And often the AI tooling that's been deployed in that partner ecosystem may help mitigate risk, but it may also introduce risk as well.
So from an enterprise risk standpoint, really understanding where that risk is and how, in some cases, those tools can be embraced to help ensure that data is managed and access is managed around some of the AI that's being deployed.
And I think the third thing is where partners can really help as specialists, providing services like, for example, we do at IBM, and Cloudflare does as well for its clients: providing that security tool set to help those organizations manage the risk in their ecosystem, so that those tools can be deployed not just in the enterprise itself, but across that ecosystem where appropriate.
So I think partnerships are really, really essential in all those different ways.
And I think they're going to become even more essential as the IT landscape becomes more distributed, but distributed in a way in which it's going to be more AI controlled, if that makes sense.
And therein lies a challenge in terms of how you manage that particular risk.
So let me throw another one at you, which is, I've been thinking about this and I feel that three, four years from now, based on kind of where we're headed, a lot of organizations are going to, what I call, have a thin IT and have a lot of these ecosystem partners provide specific capabilities around this notion of individual specializations and then some looking across the ecosystem to provide a holistic view, et cetera.
And by the way, of the companies we're talking to, 87% of CISOs are consolidating their tool sets right now, because it becomes a really messy environment really quickly.
So how do you think about this going forward: what does the IT organization look like, and do ecosystem partnerships become where a lot of the work happens, with a lot of coordination happening within the traditional IT function?
Yeah, I can see that.
And I can see that already beginning to emerge. I mean, here at IBM, we talk about the autonomous security program, for example.
And that's really about how we can use the agents that we build, and how we can snap in agents from many partner organizations, to do what the name suggests: allowing our security to operate in a much more autonomous way, where we don't necessarily have so many people involved.
More to the point, you now see collaboration happening between different security domains, because the agents collaborate, because they know they need to, in a controlled way, obviously, making sure that outcomes are managed in that almost autonomous way.
That's where we've been striving to get to in security for so many years: it's less of a thing on the side, the bolt-on, all those traditional expressions that we know, and much more woven in.
And we see this notion, again, of shift left in DevSecOps, except it's sort of shift anywhere now, if that makes sense.
So all these things we're already beginning to see emerging.
And I think it's those ecosystem partners that can come with the expertise to allow that orchestration to happen, so that what's introduced across those different discipline areas is going to interoperate.
So that, of course, then begs the question, well, agents are going to do a lot of the heavy lifting that IT has often done in the past.
It's got to be done in a controlled way. The risk is still an enterprise risk; it has to be managed as such.
That's a very key consideration, depending upon the activity that that enterprise is undertaking.
So there's this notion then, well, what do those ecosystem partners come with?
And I see a notion emerging now of wisdom.
So it's skills, knowledge, and training, packaged up into wisdom.
And that is given as part of the partnership, to enable the organization to take that ecosystem partner and really use that wisdom in its own context and environment.
If it's in security, to be able to do security, it's many other areas of IT as well.
So that's what I think we see emerging. Of course, that's a huge shift for many organizations.
Would it be fair to say that going forward, there'd be a lot more consolidation?
There'd be a lot more strategic partners that'd be providing a holistic set of capabilities around, as you said, wisdom.
I think, again, is that where this is headed in terms of vendor landscape, ecosystem landscape?
I think there's going to be still a big space for lots of innovation. I mean, the TTPs are changing all the time.
There's lots of new innovation that's happening.
But I think that's going to happen in quite a different way. The difference now is less about a new tool getting marketed, bought, and implemented, and then we end up with the well-known, well-understood security sprawl which, as you said earlier, we're all trying to platformize our way out of, a problem we created ourselves.
I still think there'll be a big play for innovation, but that's going to manifest itself in AI tooling.
And I think that AI tooling is then going to need to be orchestrated into those ecosystems and those ecosystem partners that can deploy that effectively in the context of the enterprise that they're serving.
And that, to me, is really exciting because I think it's going to move away from the tool sprawl challenge that we've had and still continue to have, whilst also providing that platform approach, but also that platform approach where a different AI tooling can be snapped in, brought in, and brought to bear in the right way to manage the risk proportionally for that particular organization, that enterprise.
And I think what's really exciting about that is a couple of other things.
A, it's going to become much more granular. So I think that actually offers us the opportunity.
A lot of the security tooling we have is pretty broad brush, right?
It's pretty well, we just need to do that thing regardless of what type of enterprise it is, what type of sector it's in.
Now I think we're going to have the opportunity to develop tooling and solutions that are going to be much more granular and focused on the real threats and risks that that particular organization faces.
And then I think the other thing when we look at that is that it's going to be much quicker and much faster to be able to pivot to what is going to need to be put in place based upon the types of threats that an organization faces.
And then lastly, the thing that I think is one of the most exciting things is we've always had a real challenge in the security space of really being able to contextualize.
I think of vulnerability management, the sort of IP-address-type approach of trying to resolve IP addresses to hosts, and then understand what those hosts are and what type of applications are involved.
All of that real difficulty that has existed for many years in security: you now see some of the AI tooling we're building, in terms of how we're managing data, being able to interrogate data and get context out of it without having to copy and paste it and move it into repositories, and using AI tooling to go and discover that.
That is really exciting because then I think you can really say you can build AI tooling into security, but that then can be applied to now rich context, which we haven't been able to do before.
And at IBM, we are now working with a number of partners to really try and crack that.
But we're trying not to do that in a way which is all about gathering yet more data into ever larger repositories, because we know that trying to create a sort of carbon copy of an enterprise's IT landscape is not the way to do it.
Well, let me flip that around for a second. You mentioned agents.
You mentioned the fact that there is a lot richer tooling specific to organizations.
By the way, we're seeing a lot of that as well. The threat landscape is changing to industry-specific and then individual-company-specific attacks, because it's so easy to do the reconnaissance and so on.
Having said that, though, I think when we move to an agentic notion, does that also introduce the risk that, yes, the agents are going to provide that capability or process change, et cetera, but if the original process isn't optimized or isn't right, or if the original data set is messed up, we're actually exacerbating the problem?
Talk a little bit about the risks of moving fast in the notion of an agentic world where we may not have a lot of control over our data and the environments that we've collected this data over time.
Is that going to create a bigger mess?
I'm going to say no to that. It's not going to create a bigger mess, but it is a really, really important factor.
The way we think about that at IBM is, number one, there's still a bit of a misunderstanding about the AI world. There's quite a broad spectrum between the AI we use in conversational interfaces for doing stuff in our homes, and the real enterprise deployment: creating models specifically, small language models that do specific tasks, that can learn those tasks based upon a corpus of data and understanding built up over many years at IBM.
We've got many years, for example, operating security operation centers and the workflow in there.
We've managed to codify that into small language models and then have agents that work on that.
Then agentically, they work together with each other to create the output that we're seeing, which is dramatically speeding up and creating much more accurate output compared to some of the ways in which we've done it in the past.
The challenge, though, that you're addressing or talking about is when you get to bigger datasets and you don't necessarily have the outcome that you would expect, the tools will start learning stuff and there's plenty of examples in life where you don't get the outcome that you necessarily expected.
Good news is, of course, you can use AI to help you there.
So look, there are some well-known techniques that we have, and tools that are now emerging, to help with that.
At IBM, we have deployed reflective agents. And so in our workflow processes, number one, we've got to get the foundations right.
So let's deal with stuff that we do know.
Number one, foundationally, there's data security around this, which we know a lot about already.
Number two, yeah, we know about models and how they behave.
And we know that if we limit the way in which we look at those models and the set underneath the agents, that we can be quite careful about ensuring that we're only dealing with certain datasets, which we can, as I said, secure from a data security standpoint.
And then when we deploy agents and bring them in, of course, there are a load of considerations around whether you want to do that with Zero Trust principles from the start, limiting access.
So all those things we know quite a lot about already.
In the world, there's lots of great things that are happening, but there's a lot of stuff that we know a lot about already.
So we can use that knowledge to be careful. And as I said before, as well as doing that, we can also use the reflective agents that we deploy to interrogate the agents doing the work and ask: is this the outcome that we would have expected?
So we can teach them, which is what we've done, to say, this is normally what you'd expect from this.
Is that the outcome that you are seeing from the agent that's doing the work?
And then you can put a scoring around that or a check around that.
And of course, the reflective agent can then alert that there may be an issue with the particular agent producing that outcome.
And that drift, if it does happen, can then be checked and brought back.
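A minimal sketch of the reflective-agent pattern Mark describes: a second agent scores a worker agent's output against what would normally be expected and flags drift for review. The expected fields, scoring rule, and threshold here are all assumptions for illustration, not IBM's implementation:

```python
# Hypothetical drift check: a reflective agent verifies that a worker
# agent's output has the shape it was taught to expect, and flags it
# for review when the score drops below a threshold.

EXPECTED_FIELDS = {"threat_id", "severity", "containment_action"}

def reflect(output: dict, threshold: float = 0.8) -> tuple[float, bool]:
    """Return (score, needs_review) for one worker-agent output."""
    present = EXPECTED_FIELDS & output.keys()
    score = len(present) / len(EXPECTED_FIELDS)
    return score, score < threshold

# A complete output passes; a drifted one is flagged for a human.
ok = reflect({"threat_id": "T1", "severity": "high",
              "containment_action": "isolate-host"})
drifted = reflect({"threat_id": "T2"})
```

A real system would score semantics, not just field presence, but the control loop is the same: teach the reflective agent what "normal" looks like, score each outcome, and alert on drift so it can be checked and brought back.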
Those things are pretty exciting. They sometimes require my brain to think quite hard about how they work.
But I see that emerging more and more.
It's pretty exciting stuff. And if you're going to be operating as we already are now in IBM agentically, where they're working together, that's absolutely essential.
But then we go on to the next phase, as I said, with autonomous security in the way we're thinking about it, where you almost have agents operating ambiently, beginning to elect to work with each other, within the parameters that you set them, to provide outcomes that you didn't necessarily expect.
And we're seeing that happening all the time now.
So, for example, in our autonomous security approach, we'll see a known threat vector, perhaps a threat exploit happening.
And then the output of that could be not just a SOC-type investigation report, but all the way through to: here's a remediation, here's a containment activity, here's the workflow and ITSM teed up, and here's a script for the firewall.
And the agent then asks, do you want to actually implement that? So there are checks in there to make sure it doesn't just go in directly.
But way beyond that, which we originally thought that it would do, it's that ambient operation now producing results, which are pretty cool.
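The flow just described, where detection produces remediation artifacts but a destructive step waits for a human, can be sketched like this. Everything here is illustrative (the function names, the artifact fields, the firewall rule syntax), not a real product's API.

```python
# Hedged sketch of the autonomous-security flow described above:
# a detected threat yields an investigation, containment, an ITSM
# ticket, and a firewall script, but the high-impact firewall change
# is gated on human approval so it doesn't just go in directly.

def handle_threat(event: dict, approve) -> dict:
    """Build remediation artifacts; gate destructive steps on approval."""
    artifacts = {
        "investigation": f"Investigated {event['indicator']}",
        "containment": f"quarantine host {event['host']}",
        "itsm_ticket": {"title": f"Exploit on {event['host']}",
                        "status": "queued"},
        "firewall_script": f"deny ip {event['indicator']} any",
    }
    # Human-in-the-loop check before the firewall change is applied
    if approve(artifacts["firewall_script"]):
        artifacts["firewall_applied"] = True
    else:
        artifacts["firewall_applied"] = False
        artifacts["itsm_ticket"]["status"] = "awaiting-approval"
    return artifacts

result = handle_threat(
    {"indicator": "203.0.113.7", "host": "web-01"},
    approve=lambda script: False,   # analyst declines auto-apply
)
print(result["firewall_applied"])       # False
print(result["itsm_ticket"]["status"])  # awaiting-approval
```

The design point is the `approve` callback: swapping it for an auto-approve policy is what moves a pipeline from human-gated to fully autonomous.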
Let me ask you this again. On this journey you describe toward an agentic world, how far along are we? Are we at 10%? Are we at 5%? Where are we? And of course, it's going to be a moving target going forward as well. But what's the maturity you see across a lot of your clients?
Great question. And I want to give you an example of something that happened the other day.
So I sat there with a client, and they said, your AI, you've deployed this, and I don't think it's quite doing what I expected it to do. And I was like, okay, so tell me what's wrong. And they said, well, where's the investigation report?
This is a particular family of agents that we have, our autonomous threat operations machine that operates in the SOC environment.
And I said to them, well, there isn't one.
Why would you need a report? The agent has taken the action and the remediation that's needed.
So there doesn't need to be a report because no one's going to read it.
No one's going to do anything with it. If you really want it, I can instruct the agent to go and generate it.
That's easy. But the action is already taken.
It's already taken. So why do we need a report? And it's like, there's almost this light bulb moment where the client went, wow, of course.
So where are we going with it? I think it's almost like, we don't really know yet.
But what I can say is, it's almost about getting out of the thinking that the traditional workflow that we've been used to is the way in which things are going to happen.
And now we're already seeing dramatic differences in how things are operating.
And security orchestration used to be a thing. And we've focused our agents on trying to emulate a lot of stuff that we've done in the past.
But as I said earlier, we're now actually quickly realizing that the agents are sort of making up their own mind.
I don't mean it quite like that. But they're producing incredibly helpful and different ways of doing things, which arrive at a speedier and more accurate outcome, though not in the way we'd originally considered.
Well, I think that touches on two notions. One is this notion of transparency in a lot of it, right?
So if the agent is autonomously taking action, to have a human in the loop to just kind of verify, are these actions really appropriate in this case?
Or learnings, as you said, within IBM that you've done over the years to kind of make that call.
But the other thing to think about with autonomy is this notion of resilience. For the last 30 years we've waited for threats and then reacted to them. Now we're saying: let's figure out the anomalies, let's figure out the gaps in the cybersecurity and threat landscape, and let's proactively address them.
And that shift in mindset is very hard for some traditional cybersecurity leaders that have grown up with this notion of a reactive security.
Reactive security, tool selection, market scanning: all of those things have done a lot of good, and there's incredibly deep expertise in them. But a lot of that is moving on extremely quickly and is going to have less utility in the future.
There's no two ways about that.
But I come back to my point about wisdom. All that experience still comes with wisdom, and it still comes with an understanding of proportionality of controls, of the ability to be able to do something in a business context.
I still think there's a huge place for a lot of that, some of it based on an understanding of environments that is only gained over a period of time.
But equally, there are, I think, a number of things in the security arena, like in many other arenas now in IT, that are going to become redundant fairly quickly.
Now, let's shift gears for a bit. I know IBM has invested significantly in quantum, and you've stated, and actually warned, that we have a hard deadline for quantum readiness for companies.
How do you think about where companies are today in their understanding of quantum and the fact that they need to take action now?
I've been told that it's treated as similar to Y2K, a "we'll get to it when we get to it" kind of notion.
And of course, we've been at the forefront of really pushing that notion of quantum readiness and the fact that all of our traffic is quantum ready right now.
And so, how important is this? And how important is it for companies to think about and understand what they need to do today?
And how are you advising them how to think about it and how to plan for it for the next few years?
Well, firstly, Khalid, you beat me to it. Cloudflare is right at the leading edge, having deployed post-quantum-resistant algorithms.
It's a tremendous, shining example, I think, in the IT ecosystem of how one significant player with such huge reach has really embraced what's happening and what is going to happen. Amazing. And really, when you look across the IT landscape, it's pretty unique.
There are others who are doing stuff as well, but Cloudflare really has been at the forefront of that.
And it's very impressive. When I look at enterprises, though, the focus is of course on the quantum event: the point when the computational capability is there and asymmetric encryption becomes vulnerable. Specifically RSA, ECC, and the like. But that's almost where the quantum piece of this ends. When I think about post-quantum cryptography, I think a lot of organizations are fixated on the idea that this threat is all quantum brings. And of course, trust the security community to come up with a big downside use case for quantum.
Actually, there are unbelievable opportunities that are going to come from quantum computing in so many different ways.
It's going to change our lives in so many different ways.
But we are talking about this particular challenge with cryptography, because that's what it is.
It's a cryptographic challenge.
And I want to come back to AI, because of the proliferation of AI agents: we're talking about billions of agents becoming available pretty quickly.
The lifecycle management of AI comes with a lot of cryptographic considerations.
So where are keys going to come from? Tokens? All the different things we know we need to manage: how we introduce these agents into the environment, secrets management, all of those things.
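One concrete way to picture the credential lifecycle for agents is short-lived, signed tokens that expire and rotate instead of living forever. The sketch below is purely illustrative (the token format, TTL, and issuer are assumptions); a real deployment would sit behind a proper secrets manager.

```python
import base64
import hashlib
import hmac
import secrets
import time

# Illustrative only: short-lived HMAC-signed credentials for AI
# agents, so secrets rotate rather than persist indefinitely.

SIGNING_KEY = secrets.token_bytes(32)  # held by the issuer only

def issue_token(agent_id: str, ttl_s: int = 300, now=None) -> str:
    """Mint a token binding an agent id to an expiry time."""
    exp = int(now if now is not None else time.time()) + ttl_s
    payload = f"{agent_id}:{exp}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, now=None) -> bool:
    """Reject tampered signatures and expired tokens."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered, or signed with a different key
    exp = int(payload.decode().rsplit(":", 1)[1])
    return (now if now is not None else time.time()) < exp

t = issue_token("soc-triage-agent", ttl_s=300)
print(verify_token(t))                         # True while fresh
print(verify_token(t, now=time.time() + 600))  # False after expiry
```

The point of the sketch is the lifecycle: every agent introduced into the environment gets a credential that is issued, verified, and expired, which is exactly the kind of cryptographic management burden that multiplies with billions of agents.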
If we think about other areas like certificate management, we see certificate lifetimes coming right down.
So we have a challenge managing cryptography today, regardless of what's going to happen in the post-quantum era.
I would advise any organization, yes, the quantum situation provides a compelling event when there is going to be vulnerability in cryptography that doesn't exist today.
But the reality in terms of actually managing cryptography, which underpins what you do about when that happens, is here and now.
And the reality, when we think about how most enterprises manage their cryptography and everything that goes with it, is that it needs some pretty urgent action, not just because of the post-quantum crypto challenge, but because of how the IT landscape is changing.
So really, I urge any organization to get onto the program and start on discovery now, because that's really the first point.
Where is that cryptography? How is it currently managed?
It's not something that most enterprises, most organizations have thought about.
A lot of it is hard-coded. A lot of it is very ephemeral in containers and the like.
How all of that is managed, and how that crypto is introduced into the IT stack, is done well in some organizations, but not necessarily in others.
And of course, the supply chain is absolutely key for most organizations.
Now, all of that can be done right now. It really needs to be done.
The quicker organizations get on and do that, the quicker they're going to be ready, having discovered what they have, to then take the necessary action.
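The discovery step can be pictured as an inventory scan: walk the estate, find cryptographic material (including the hard-coded kind mentioned above), and flag quantum-vulnerable algorithms. This is a toy sketch with made-up patterns; real discovery tooling of the sort IBM describes goes far deeper, into binaries, containers, TLS configurations, and the supply chain.

```python
import re
import tempfile
from pathlib import Path

# Hedged sketch of crypto discovery: scan a source tree for
# hard-coded cryptographic material and quantum-vulnerable
# algorithm names. Patterns here are illustrative only.

PATTERNS = {
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "certificate": re.compile(r"-----BEGIN CERTIFICATE-----"),
    # Asymmetric schemes that Shor's algorithm would break
    "quantum_vulnerable": re.compile(r"\b(RSA|ECDSA|ECDH|DSA)\b"),
}

def discover(root: str) -> list:
    """Return a finding for every pattern hit in every readable file."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append({"file": path.name, "type": label,
                                 "match": match.group(0)})
    return findings

# Tiny demo: one file containing a hard-coded key reference
demo = tempfile.mkdtemp()
Path(demo, "config.py").write_text('KEY = "-----BEGIN RSA PRIVATE KEY-----"')
for finding in discover(demo):
    print(finding["type"], "in", finding["file"])
```

Even this toy version shows why discovery comes first: until you have the inventory, you can't formulate the remediation plan that follows.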
So that step of discovery can start right now. And the good news is, at IBM we've done some work on this.
Firstly, we understand it well. We're working with many clients around this.
The second thing is we've developed tools that can help with that discovery, to understand where that crypto is now, and then formulate the plan for managing the whole crypto landscape in a much more effective and agile way.
So we've come up with this notion of crypto-agility. That prepares the ground for the point where we do have to deal with these cryptographic algorithms being vulnerable, because the era of quantum compute is here and the computational ability to run Shor's algorithm exists.
We now need to be able to implement, again, developed by IBM, the post-quantum resistant algorithms that we can then introduce into the environment.
There will be more than the ones that exist today. We see more being developed as well.
So this, what I think this is, is a big change in how we're going to think about cryptography into the future.
It's an area that is not particularly well understood, as I said, and it's an area that we need to start understanding now with this compelling event.
But that compelling event is by no means the only reason why you have to do it.
One final thing on this, Khalid. Well, there are many things to talk about on this, but one thing I would point out: it really isn't a Y2K issue.
It's not a cliff edge thing.
Organizations need to act now, and the work continues even beyond the point at which that vulnerability is introduced by the quantum capability.
There will be many techniques that we use where we will still be running, you know, legacy cryptography for years to come, and that transition period will take time.
That starts with understanding what you're dealing with.
And that's, to me, the most pressing issue which needs to start right now.
I did a keynote at one of the conferences and I talked about PQC.
And after that, five or six people came up to me and a couple of them said, you know, I understand this, but my CEO wouldn't give me the budget to deal with it.
Right. And my answer was, you actually don't need a lot of budget to understand and do an assessment of where encryption is in your environment.
By the way, over the years it has ended up in nooks and crannies that you'll have to go and find. And at this point, all we're asking you to do is have a very clear view of where it exists and how to mitigate those risks as and when they become real.
Right. And we're not immune here: we all know what happens when certificates expire and surprise us. So we know what the impact is going to be when we have crypto and other artifacts that we can't manage properly.
Again, AI plays a role here because we have in IBM developed agents that can help do that.
So that's pretty exciting.
I think the other thing as well is there's obviously an area around identity, identity governance as well that comes with that.
Secrets management, you know, traditionally those two worlds have been somewhat separate.
I see those things coming together now a lot more, which is really fascinating, quite interesting.
You know, IBM recently acquired HashiCorp, and HashiCorp Vault is an extremely well-known service that we provide for many organizations across the globe.
You know, that's a really important tool that's used in that developer community a lot in runtime security with containers.
All those things need to be brought together.
Now's the time to really think about that, to get after it.
The time is not to do that when the event has happened and there is that vulnerability sitting there.
The remediation plan, which you come out with, is going to be really different depending on which enterprise you're working with and how the supply chain is operating.
And there'll be lots more tools, not just post-quantum-resistant algorithms but encapsulation techniques, that will enable a hybrid world of crypto to run for some time as we make that change.
The post-quantum-resistant algorithms are necessarily much more complex and much bigger. And so many pieces of technology that we run today simply won't be capable of running them.
So there's going to be a plethora of techniques. I think it's pretty exciting and a pretty interesting area, but one that has to be thought about and moved quite quickly.
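The "hybrid world" idea can be made concrete with a small sketch: derive a session key from both a classical shared secret and a post-quantum KEM secret, so the session stays safe as long as either scheme holds. The two input secrets below are stand-ins, not real handshake outputs, and the KDF is a single-block HKDF for illustration.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of hybrid key derivation: combine a classical
# shared secret (e.g. from ECDH) with a post-quantum KEM secret
# (e.g. from ML-KEM) so the result is safe if either survives.
# Inputs here are random stand-ins, not real protocol outputs.

def hybrid_session_key(classical_ss: bytes, pq_ss: bytes,
                       info: bytes = b"hybrid-kex",
                       length: int = 32) -> bytes:
    """Single-block HKDF (extract then expand) over both secrets."""
    ikm = classical_ss + pq_ss
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()       # extract
    block = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()   # expand T(1)
    return block[:length]

classical = secrets.token_bytes(32)     # stand-in for an ECDH output
post_quantum = secrets.token_bytes(32)  # stand-in for an ML-KEM output
key = hybrid_session_key(classical, post_quantum)
print(len(key))  # 32-byte session key
```

This mirrors the transition logic in the conversation: during the hybrid period, an attacker has to break both the legacy algorithm and the post-quantum one to recover the session key.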
I think your point about budget is really spot on.
Organizations spend a lot of money on crypto today. Getting after it, actually cleaning up the way in which you do that, may even save you some money.
Exactly.
So, Mark, shifting gears, one last question before we move on to our rapid fire.
I want you to think about, I mean, we keep hearing in cybersecurity about platformization, about consolidation, about kind of things changing.
Maybe two years from now, for an average CISO, what does the environment look like? How has it changed? Is it more platforms? Is it more consolidation that has led to that platformization?
Is it more, as you said, strategic partners bringing in wisdom and helping create agents?
Paint a picture for a CISO in two years.
What does that look like? Well, I think firstly, it's going to fundamentally change the notion of organizations running lots of tools with lots of people. That will really shift. And we're already seeing it shifting.
We've seen some dramatic changes in those traditional SOC environments and the like.
So that's definitely going to look different.
I think you've mentioned, and we've talked about platformization. I think that's massively a trend that is happening.
However, we're going to come back to the beginning, which is debt and technical debt.
Legacy environments that exist, they are still going to be there.
Let's not forget that many of our batch processing runs on data centers that are decades, decades old.
And I don't see that changing dramatically.
I obviously see the AI being deployed across that, being able to really change the approach and how we manage some of that.
So I think platformization is going to continue to be a theme.
But I think that increasing threat landscape, across the attack surface and the exposure management that needs to happen with it, is going to become much more prevalent.
So what I see is far fewer people being involved, lots more autonomy. Hence the reason why in IBM we're focused on autonomous security, where a lot of the traditional security workflows will be run as part of the IT workflows, which I think is a really good thing.
And that's going to accelerate. But I still think the need for that ability to be able to understand and assess risk, understand and apply controls proportionately, and being able to have that knowledge within an enterprise that can do that is actually going to become even more important than it is today.
Let's not forget that there are some disciplines, like identity and access management, that actually take on a much greater importance in the agent-centric, agentic world we're moving to.
So I think it will look different, very different, far fewer types of roles that are doing functions which will be replaced by agents.
But equally, that hard-fought knowledge that comes from those processes means lots of people are going to be elevated, filling the skill gaps, developing agents to do new and more exciting things, and managing risk proportionately on behalf of the enterprise.
To me, it's like where I've always felt that security should and could go.
That to me is really exciting and one where I think we should move to as quickly as we can.
So that reminds me of two things.
One, I had this conversation with a CISO who said, I have not hired an analyst in the last three years.
Every time I want to do the job of an analyst, I spin up an agent to do that.
So, far fewer analysts in cybersecurity. The other is this notion of superminds. Tom Malone at MIT wrote a book called Superminds, which basically articulates that every one of us is going to be 10x more productive, and that's going to require each of us to understand a much broader spectrum of things.
So I guess you're kind of referring to both of those things as the future of cybersecurity.
This is a phenomenal conversation.
Let's move to the rapid fire now. I'd love to get your perspective on five questions.
Let's start with the first one. If you had to describe the current state of cybersecurity in one word, what word would you use?
Dynamic. Dynamic. Okay.
In a positive sense, I'm assuming. Very fast moving. Lots of new stuff happening.
Lot of excitement. A fantastic time to be in cybersecurity. And a fantastic time to be a CISO?
Totally. Okay. All right. Most overused buzzword. If you think one security buzzword that we need to retire, what would that be?
Next gen.
Next gen. Love that. Maybe it's two words, but it's next gen. There's no more next gen.
I mean, everything's next gen now, isn't it? Love that. Love that word. What is the biggest obstacle?
And again, you talk to hundreds of companies in managing cybersecurity today.
I'll give you a couple of options. Complacency, scale, complexity, or something else.
Governance. Governance. Okay. Okay. Why do you say that?
In many respects, what I see in most organizations is that the CISO's ability to play an active role in the C-suite and in risk decisions, to be actively involved in where the enterprise is going strategically, and to insert themselves and their teams into the processes that consider how the enterprise moves on in this AI-driven world, is absolutely critical.
And the CISOs who are embedded well into the IT environment with the CIO, into the lines of business, and into governance frameworks can bring those risks forward for consideration, so that investment can be made if necessary, or at least the right considerations put in place, and deployment can be controlled with the right frameworks around it.
That is unbelievably effective. And what never ceases to amaze me is extremely cost-effective.
It often doesn't cost very much to do that. All those processes exist within many enterprises.
Sometimes, where I see individuals and teams not being involved in those governance processes, then really things can get out of hand pretty quickly.
So maybe a word that you weren't expecting, but it is one which I think is so, so important.
And it doesn't sound particularly exciting, but I see that when deployed effectively with the right people, with the right skill sets in those conversations, and with that right framework of managing risk, it really, really helps.
A lot of CISOs will talk about, I need to be in the board. I need to talk to the board.
And absolutely, there's a need for a board to understand the threat landscape, how that might impact strategically the organization.
But quite often, there's a bit of a misunderstanding.
Boards, in my experience, don't do operational stuff. That's not what they're there to do.
It's actually in the subcommittees: the capital allocation committee, how the IT budget is allocated, the risk committee, data privacy. In those places, in the apparatus of large enterprises and how they function, that's where the decisions are made, and where the really big impact can come from understanding how security controls are applied proportionately.
The reason I was surprised was that there is this notion of governance that I think we've done a disservice by equating governance to compliance.
And this is way, way, way beyond compliance that we're talking about, right?
And so a lot of times, a good barometer for me is, where does cybersecurity sit in the board subcommittee?
If it's in the compliance subcommittee, that's a problem. Indeed, because it is just that.
It's backward looking, and that's exactly the very place it shouldn't be.
It has to be forward looking. And the idea that we go for the board presentation, that's it.
The board is not going to hand out any money. It's not how it works.
Yeah, no, that's fair. That's fair. Now, fast forward. And again, five years is a long time in today's years.
Corporate leaders will need to be fluent in X.
What is that X? AI collaboration. And what I mean by that is being able to understand, as we talked about earlier with ecosystem partners, what they can bring. People necessarily need to be involved in understanding how to apply AI in the right way.
And AI collaboration in the sense of how AI collaborates with itself: how agents collaborate together, and within the extended supply chain.
I think that's the fluency that we're going to need to see. Not becoming experts in AI itself, saying, well, the AI just does everything and that's it, but understanding how AI can really bring that benefit and how it can collaborate.
Those organizations that can really maximize humans plus agents plus AI in all its shapes and forms across the ecosystem partners, across supply chains, they're the ones that I think are really going to be successful.
And that's the thing, not just in five years but from now on: the organizations that can understand that AI-collaborative landscape are going to be much more successful.
And that's as much for business leaders as it is for technology.
Totally. Totally. And pretty well across the board, I think. Okay.
One book, podcast, or other media that you've read or come across recently that may be relevant for cybersecurity leaders, and it doesn't have to be technology or cybersecurity related, in terms of dealing with the change they're experiencing today.
Yeah. I'm going to point to a book. I need to remember the title, The Complete Checklist.
I may have to look it up just to confirm what the name is.
So it's written by a surgeon. Yes. And I talk about this because my... Atul Gawande, I think.
That's exactly it. I think I've got the name right, haven't I?
Exactly. So you know the one I'm talking about. It's not only about cybersecurity.
He wrote that on the basis of surgical procedures and having checklists and how that can lead to much better outcomes.
But the reason why I quote it here is we're pretty used in cybersecurity to having checklists and procedures, and I've talked extensively today about how we can train our agents and our AI to do that.
In itself, that's something that we want to train our agents on, to be able to understand those processes.
And then I say, and then enhance them, as I said earlier on, and do new things with them.
So I think that's pretty important.
But the reason why that is probably the most important reason is that the thing that really came out to me from that book is about the fact that it still needs human judgment.
Human judgment is still unbelievably important. And I think that's a really good thing to think about in terms of what we're doing.
We can create loads of checklists, agents to do all that work, and we can get a lot better at it, and it will undoubtedly lead to better outcomes.
And the agents will produce, we're already seeing, faster and more accurate outcomes.
But it's that human judgment that we still need to have that is really going to make the outcome the one that we really want.
And he doesn't talk about it in the book so much, but it's also another notion that comes out in the same field about civility saving lives.
And I think that's another thing, which is that as we adopt a lot of this new technology, there's a tendency, I think, for a real sense of we've got to move quickly and pressure.
And I think we must never forget that that human judgment angle comes from the fact that we have to work collaboratively.
We have to work in amongst ourselves with a very changing environment.
Getting the best out of each other to be able to apply that human judgment, I think, is never going to be more important.
So that book is the one that really stands out for me. Funny enough, nothing to do with cyber per se, but I think...
Which is perfect. I think it was called Checklist Manifesto.
The Checklist Manifesto. You're absolutely right, sir.
Well, thank you, Mark. This has been a pleasure. It's a fascinating conversation.
And I know you sit at a place where you see a lot of enterprises dealing with these issues.
So thank you for your time. It's really a pleasure to be here.
Thank you for having me. Really appreciate it. Thank you. Thanks.