Presented by: Khalid Kark, Mike Hamilton, Daniel Kendzior
Originally aired on May 29 @ 12:00 PM - 12:30 PM EDT
AI is transforming cybersecurity — both as a powerful tool and a dangerous weapon.
Threat actors are now leveraging AI to launch attacks faster and smarter than ever before, outpacing traditional defenses. To stay ahead, organizations must adopt AI-driven strategies to protect their cloud-first and hybrid environments and prepare for the emerging cybersecurity trends that are reshaping the threat landscape.
In this episode, Khalid Kark, Field CIO at Cloudflare, is joined by Daniel Kendzior, Managing Director and AI Security Lead at Accenture, and Mike Hamilton, CIO at Cloudflare. Together, they dive into the real-world impact of AI in cyber, the evolving risk landscape, and the critical steps CISOs and CTOs must take today to secure data, scale infrastructure, and stay resilient.
Watch to learn how to harness AI for defense — not just detection.
Khalid Kark is a globally recognized technology strategist and Field CIO at Cloudflare, where he works closely with C-suite leaders and board members to shape secure, scalable, and resilient digital strategies. With over two decades of experience at the forefront of technology leadership, Khalid helps organizations navigate the complex intersection of business innovation, cybersecurity, and enterprise transformation. Previously, Khalid led Forrester’s Security & Risk and Technology Leadership practices, served as Global Managing Director of Deloitte’s Technology Leadership Program, and chaired Deloitte’s Tech Eminence Council to elevate thought leadership in AI, cybersecurity, and digital innovation. Follow him on LinkedIn and X
Daniel Kendzior is Accenture's Global Data & AI Security Lead, bringing extensive global experience in driving large-scale information security transformations. Deeply passionate about orchestrating organizations, he ensures the seamless integration of cybersecurity into the core architecture of products and services, effectively transforming cybersecurity into a business enabler. Follow him on LinkedIn and X
Mike Hamilton is the Chief Information Officer of Cloudflare, the leading connectivity cloud company on a mission to help build a better Internet. As CIO, Mike leads Cloudflare’s IT organization, focused on business and IT alignment in developing system architectures for Cloudflare at scale. Mike joined Cloudflare with over 25 years of experience in IT, including senior leadership roles at MuleSoft, Salesforce, Databricks, and GM Cruise. Follow him on LinkedIn
Transcript (Beta)
The challenge with information security a lot of times is that compliance feels like security, but it's not.
True security has to be a different effort. Compliance comes from good security, security doesn't come from compliance.
Hi everybody, it's a pleasure to be here today and be talking about a topic that we've all experienced in the last few years, adversarial AI.
We have two very distinguished guests with us today.
We've got Mike Hamilton, who is the CIO at Cloudflare, and we've got Daniel Kendzior, who is the Managing Director of AI Security at Accenture.
Before we go into the conversation, there's a lot of talk about AI and how that's changing the threat landscape.
I would love both of you to maybe talk a little bit about what you are seeing from your perspective.
How is the threat landscape changing?
I was talking to somebody who said, you know what, the threat landscape is the same, it's just a lot faster.
But we'd love for, maybe Mike, you can start us off with where you're seeing the threat landscape change.
I think there's some truth in the statement that you just made, that some things don't change.
Social engineering is still the easiest vector. It's the cheapest vector.
But what we are seeing is an increase in the sophistication and consistency of attacks in social media, or social attacks.
The email doesn't have errors anymore.
It doesn't have spelling errors. And then going into the darker side of, you can fake someone's voice, you can fake someone's identity.
As much as things change, they stay the same in ways.
The vector is about the same, but the sophistication is going up.
And the ratio of cost per attack to effectiveness of the attack is getting much, much better for attackers, which is bad for businesses.
Because inside our businesses we have one budget to defend the company, but also to drive innovation and make the business more effective.
Bad guys are organized. They're using low-cost attacks to generate money to fund R&D for high-cost attacks that have higher yield as well.
So their budget is much more focused than ours.
I think that's where the challenge starts to really come from.
Yeah, I totally agree. I think just to build on that, you know, traditional whaling attacks, which were pretty labor-intensive, had a lot of reconnaissance done, and were really targeted at executives.
AI is now allowing you to really kind of push that down the stack.
So you're kind of taking everyday individuals, now they're kind of facing different types of phishing and smishing and everything that's coming with that.
And to the point around deepfakes, it's really kind of a much richer experience.
And so not only the speed, I think speed is always going to increase, but just really that richness, I think, is making it much more intense and more effective, unfortunately.
Yeah, I was just at an event and the first question for AI was, how is this fake news and fake identities going to impact us as individuals and companies?
And maybe you both kind of mentioned it a little bit, synthetic identities and the fact that you're able to create these identities that may or may not exist.
How is that shifting how companies think about protecting against that? Phishing, as you said, Mike, continues to be a pain point for a lot of companies.
But how does synthetic identity come into play in making that even more exaggerated?
I think a lot of organizations are now understanding that this is the new normal.
And so while identity was always hard, it's harder now. But we also need to be very thoughtful here, and digital registration processes are a great example.
So, you know, we had some clients and some organizations that would use remote registration, either for a new employee, a new customer, etc.
Now, in this deepfake-enabled world that we live in, in that initial interaction an organization might have with an individual, you need to have some skepticism about whether that's actually a real person.
So not only from a, hey, is it the person that's presenting the ID or the correct documentation, but is it actually a live individual?
Right. And so with this combination of traditional identity proofing, user registration, and biometrics, now with deepfakes, there are a lot more layers and sophistication.
And at the same time, customers have higher expectations than ever about a transparent, smooth, easy experience.
So the challenge for cyber defenders and technologists is how do I make that as transparent, but as impactful as possible?
It's hard because human behavior is really the exploit and patching the human brain is hard to do, almost impossible.
But it's especially hard to do when you're talking about exploiting cognitive biases.
So when somebody sees something that they want to believe, they're inclined to just go ahead and believe it and take it at face value, which makes it much more difficult.
And as a civilization, we have to start to think about how we deal with that.
Like what becomes authoritative and how do we help people understand what's authoritative?
Inside organizations, these sort of fake attacks, these fake persona attacks are usually supported by better controls, right?
Like you can help deflect these things by saying like, hey, the CFO can't just call you and say, I want something and you give it to them.
Like if it's sensitive, there's a control that we put in place.
But even the controls, to your point, have to be more sophisticated now where there needs to be an ephemeral component.
Like where are you? Who are you? What devices are you on? And I think Zero Trust becomes a big part of that long term.
The process is systematized in a way that's predictable, so you know that you have to go to a specific place to execute the process.
A phone call is not enough. A Zoom meeting is not enough.
You don't just wire money, things like that. But we do have to sort of raise the baseline of how we leverage technology inside businesses to ensure that the process controls are actually enforced and the only way to get something done.
I think one element that I see really interesting in all of this is trust.
So companies that have built their reputation on trust, do they need to do something else?
Do they need to think about trust as a, again, with all of these fake identities and conversations and news and so on, does trust become a key factor for companies to really hone in on?
Any thoughts from either of you around trust? Yeah, I mean, I think trust is really this new digital currency, right, that's super valuable both to a consumer but also to an organization, particularly one that's built their products or reputation around it.
I think part of it's cultural, right? Part of it is knowing that there are these emerging threats, these emerging risks.
How do you make your own organization even more comfortable raising them, slowing down things at times which might be causing friction with the business, et cetera, so that you're really doubling down on this implementation of trust as opposed to, you know, just a principle or a marketing component to it.
And that's hard, right?
So there's technology that helps enable that. But to me, a lot of that transformational human cultural piece of it is ultimately where we really need our leaders to continue to invest and lean in and create a place for people to be able to do that.
The one thing that, of course, you both alluded to this, at the end of the day, it's about people.
It's about organizations. It's about how do you think about reconstructing the thinking of what the op model is going to look like in this context, what the org is going to look like.
And, of course, it's a people problem.
So how are, in your view, how have you seen, maybe Mike, you can start with this, how have you seen companies adjust, whether that is through their operating models or more training or capabilities, whatever it is, how are companies adjusting to this adversarial AI kind of threat landscape?
It's a layered approach, just like it's always been.
There's a technology layer, which is that if I can stop the attempt from being seen in the first place, that's ideal.
Then it never had a chance to permeate my environment, and that's very key.
Back to my patching the human brain problem, that it takes behavioral reinforcement.
We have to reinforce that, hey, you can expect this is coming.
And I think that's still one of the biggest steps organizations can take, is make sure that people understand you can expect to see this.
I've seen the most success come from, in fact, being really honest about, hey, we're seeing vishing attempts.
We're seeing smishing, all the different type of attempts.
Being specific about it without getting lost in the terminology, though.
Make it accessible so people understand, because most people can relate to being attacked in some way.
There's been the random toll attacks, for example, recently, where it's like you're getting a text saying you crossed a bridge and you didn't pay the toll and now you have to pay me.
They work really well.
But back to my point on organized crime earlier, that little attack is being used to fund R&D towards more sophisticated attacks.
We actually have to stop the money to a certain degree.
Organizations have to spend some energy on trying to stop the flow of money, and that involves helping everybody.
It goes way beyond the workplace. I really think that we have to think about how do we help people also secure themselves at home.
When a worker goes home, they're going to check their personal email on their work computer.
That's going to happen, right?
That's another vector. They're going to get a text that's bad from their phone, but that text that they responded to is funding a bad effort, right?
We have to be a little bit more comprehensive as digital citizens to help our employees understand where the risks really are.
Yeah, our signals report kind of puts out this number saying that 68 percent, and that's based on the Verizon report, that 68 percent of breaches still happen because of human error of some sort, right?
I think going back to that human nature element of it and kind of making people aware of it is important.
But then now we start to see these bots really kind of driving a lot of the Internet traffic and so on.
Our report talks about the fact that 28 percent of the Internet traffic Cloudflare is seeing is bot traffic, right?
So, Mike, how do you kind of respond to that? How do you engage with kind of these bots?
And some of them are verified, most of them are not.
And so, how do you think about this world which has accelerated amount of this traffic that may or may not be really valid traffic?
And how do you engage with that in that context?
I like to think about this historically a little bit. So, the Internet in its early days was very much about how do I go from an intent to an outcome?
Sure. And artificial intelligence is the same. When we look at bots, it's how do I get from an intent to an outcome?
And we're still in the earliest stages of artificial intelligence, but the evolution so far has been initially people interacting directly with something.
I'm going to a webpage, I'm actioning my intent on this webpage directly or this app directly.
Then with APIs, APIs became the next wave of traffic where it's like, well, actually, you can sort of mash these two things together and get an even better outcome if these two services work together.
And so, now APIs started taking over a bigger chunk of the Internet traffic.
Artificial intelligence, though, is a multiplier bigger. So, already we're seeing with the GPTs of the world, the various models, that they're hitting websites at 200-plus-X, more than 200 times the hits on a website to do the same thing.
Now, we would, of course, expect that to be dropping off over time, but I think the next horizon of security and the next horizon of information security is around what does the landscape look like and how do we manage these bots?
How do we ensure that we're not overpaying the cost of serving these bots to get the outcome that we mutually want with our customer?
There's still an outcome we're trying to drive together.
And I find it pretty fascinating as well that human language is actually not a particularly efficient way to do business digitally.
And so, that's why Google developed the agentic conversation language where once two agents determine that they're both agents, that they can have a conversation that's a lot more efficient.
This is going to start to drive itself towards a more cost neutral position.
Right now, it's a very expensive transaction. And we're willing to fund the expense because we're having to figure out how it's actually going to work.
But longer term, it has to get better. And so, I think companies are really going to have to think about managing bots from the standpoint of what's it costing me to manage bots and how can I make sure that this is getting a lot of value?
The second thing is I think there's two types of this. The more bots you have overall, the more bad bots can hide in the good bot traffic.
Absolutely. Yes.
And so, it's signal to noise now. Where is the signal and the noise? How do I know who the bad bots are?
And there's immediate solutions that can help with that.
Like we have an AI firewall. There's also API gateways. Our API gateway is excellent at this and understanding like does this attack conform to a schema or not?
Like is this a real bot that's trying to actually accomplish something or is this somebody playing around with different values in the header and the payload to do something nefarious?
So, there's tools out there that can make this possible for businesses to do a better job.
But it's getting increasingly expensive.
Let's shift gears a little bit. One of the things that a lot of companies are struggling with is investments in AI.
In talking to a lot of CISOs and CIOs, they invested significantly in building AI capabilities.
But then, there's pushback from the business.
There isn't a lot of value that you've driven in the last year or two years, et cetera.
And then, there is this huge risk. What additional risks is it posing, et cetera, in terms of kind of what you've built?
And so, when you think about AI defenses, if a company were to think about investing in AI defenses, what would you say, what one thing would you say, hey, you may want to start here.
You may want to think about this as a critical investment this year.
Maybe, Daniel, you could start.
Sure. Yeah, I agree. I think a lot of organizations, in their excitement for AI, have kind of turned it into the Valentine's Day truffle, where it's being added as kind of a high-end garnish to a little bit of everything.
And in some situations, it's great. In other times, it's too much or not yet.
I think in terms of where you, knowing that, right, knowing that AI is coming into things that you've bought already, things that you're maybe subscribed to from a SaaS or PaaS perspective, in addition to all the things that you're looking to custom development, I think anything that brings you more AI visibility is really where you want to prioritize, right?
Some of that's discovery of things you're doing on-prem.
Some of that's better understanding of what's at your edge or your perimeter.
But really, that visibility, if I'm sitting in any type of technologist's chair, if I can't see it, it's very hard to govern it.
It's very hard to secure it. It's very hard to make proper FinOps decisions on whether or not that's something that we need to double down on.
Yeah, you're right on the money there.
I was just reading the news this morning, Jamie Dimon actually telling us all that unless we put governance in place, there are millions and millions of AI agents that are going to make sure that your data is leaked in multiple places because you don't have the right governance models, right?
And so, again, the idea of making sure that not only do you have all those capabilities, but the governance around it is going to be really, really important.
But, Mike, any perspective from you on where would you invest or recommend people to invest?
I'd like to start by sort of having some empathy for everybody in the world right now, which is to say that if you work for a company, the promise is so transformational that everyone, every leader in every business is feeling pressure from the boardroom to their office, their immediate department on how do you leverage this?
What can we do? And there is some urgency around that. There's a sense of urgency for good reason because what if the competitive advantage that you need comes from this?
So, everybody has to explore. But then the risk is from experimentation that becomes latent and sort of not engaged anymore.
So, to your point, the side channel attack is really what I see the biggest vulnerability is that you have someone who can't access data one way, but they can through the side channel of this chatbot or this other technology that's been created.
And then what if that technology is not being maintained?
So, there's a tendency with a lot of AI technologies to deploy them.
And then it's like, well, we got it to a certain level of sophistication.
Sure. And we're just going to sort of leave it there.
But there's an ongoing, evolving landscape of capabilities behind the scenes that we need to nurture and reinvest in to make sure that the data, to your point, is governed, that it's protected, and that companies aren't facing a different way for it to leak out.
What is the trajectory that you both see in terms of AI threats and AI attacks over the next maybe 18 to 24 months?
Where do you think this is headed? Again, we see a lot of increase, but is that going to go in – what direction is it going to go in from your perspective, Daniel?
What we see today is a lot of the more sophisticated social attacks. We see, obviously, data events, right, data leakage, data prevention considerations, things like that.
I think, unfortunately, as we look forward with this kind of proliferation of the use of agents, you're going to have a lot of agents provisioned into the world that are not foundationally secure.
A lot of teams are experimenting on how to properly get them out there.
I do believe we will see more live attack scenarios where those agents are being commandeered and asked to be used in nefarious ways.
I think that's one thing that, from a cyber defense perspective, we really want to focus on is, as we go to move into the agentic world, knowing that we're not going to get it right the first time, how do we have the proper kill switches and things like that to be able to reign that in, so as we monitor and detect problems, we can respond accordingly?
I think there's a metric that we published that I think is really interesting: among companies that actually measure the number of APIs that they publish, we found that they have 33% more APIs published than they knew they had.
I like that because a lot of companies don't have any cataloging on their APIs, so we don't even have a benchmark on companies that don't have that.
Thinking about the data exposure through that channel and then how that's being leveraged in other ways, now is the time to really start to think about how do we know our data is safe and how are we ensuring that we're wrapping the correct protections around it, like you were saying?
Then the next piece is always around investing in the human capital side of things.
Again, going back to my statement earlier that we have to think about our employees when they leave the workplace, too.
Are they safe? Do they think about things the right way?
Do their kids think about things the right way? Sure.
Because it's something that we have to patch all through the humanitarian ecosystem, quite frankly.
I think that's a really, really important point. A lot of times we forget that, that people have digital lives outside of work and the same protections are needed, same thought processes needed.
In fact, for a lot of workers, they may have a different mindset at work and they may have a slightly different mindset when they're using their personal devices, so that's a good point.
Let's move on to maybe rapid-fire questions.
Mike, we can start with you. If you had one word to describe the current state of cybersecurity, what would it be?
I would say probably flux.
Flux. Okay. It's in a state of flux. In the same way that the rest of the world has to react to how artificial intelligence is changing the landscape of our businesses, cybersecurity is no different.
Again, it goes back to that focused attack versus companies are trying to spend money on innovation and protecting themselves and we have a lot more goals to try to achieve.
The bad guys have a lot fewer goals, so we do have a battle of budgets. I predict this organized crime sort of boardroom where they're deciding how to spend their money.
It's not the guy in the hoodie anymore at all, by a long shot. It's very sophisticated.
As businesses, we need to take it seriously. I think there's safety in numbers as well.
The more we work with our security partners through the products that we buy and the companies that we engage with, if we have true partnership, it's really important for companies to seek partnership from their vendors and get that true value out of what do we know together that helps all of us.
Got it. Got it. Daniel, one word. I would say enabler. We have this tremendous opportunity.
Organizations want to use AI. Let's help them use it securely and then let's use AI to reinvent security to be able to do that.
I think it's time to enable.
What is the one buzzword, and we're at RSA here, in security or tech that you wish we could retire?
Daniel. Digital. Digital. I was going to say digital transformation, so I can just say transformation now.
I think it's the ambiguity, though, that makes it hard.
We don't live in an analog world. No, exactly.
It's a nebulous word that could mean anything to anybody. Completely agree.
Okay. Mike, what's the biggest obstacle in managing risk today in your view?
I really think it boils down to dollar impact is how I'm talking about it more and more in public.
It's like, is every dollar that I spend to protect the business having an impact, versus just feeling like it?
The challenge with information security a lot of times is that compliance feels like security, but it's not.
True security has to be a different effort. Compliance comes from good security.
Security doesn't come from compliance. And so we have to think a lot more about what is the impact of every dollar I'm spending on security right now.
Great answer.
Love it. And by the way, the days are long gone where you could just walk in and get whatever you wanted.
Daniel? I think complexity to a lot of the points that we just hit on.
There's things you have to do.
There's things that you want to do. There's things that you know you needed to do 10 years ago like data governance.
Being able to manage that, juggle that with constrained resources is very complex.
Got it. Fast forward, complete this sentence.
In five years, and Daniel you can start here, corporate leaders will need to be fluent in X.
AI agentic management.
Maybe that's not the right word, but how do you actually manage an agent as a human?
I think managing a machine is the new. In five years though?
Maybe sooner, but I think in five years you better be fluent in it. Fair enough.
Fair enough. I get that. Mike? I would say that it's really around what is the model of your business from a data perspective.
What is the data that contributes to the model of your business?
I like to think of AlphaGo. AlphaGo was originally trained on every game ever played of Go, but AlphaGo beat the world champion after it played itself infinitely.
And I see this world where we're trying to understand what is the real model of our business?
How do we model it so we can do what-if scenarios?
I think longer term, it's about leveraging AI to run what-if scenarios and to be able to develop from the abstract data that we have, what is the actual model?
What does our business run like? And what's the closest approximation I can get to gamifying my own business and understanding iterations and how I accelerate and make things more relevant?
Sure.
That reminds me of Deloitte research that said that about 40% of technology leaders are saying they're contributing to revenue or revenue generation for their companies through AI data or tech capabilities.
And another 20% plan to do so in the next 18 months.
So not knowing your business is going to be a big obstacle as you start to think about revenue generation through technologies.
Completely agree there.
Okay. Last question. What one book, podcast, or show that is surprisingly relevant to how you think about cybersecurity or leadership?
Daniel.
It's very trendy right now, but Suits. I'll use that. So you get through a whole episode, you've beaten one villain, you sit back in your chair, and then your phone rings.
Love it. That's the excitement of security. Completely agree. For me, it's still this impactful book called A More Beautiful Question by Warren Berger.
And it's really about how in a knowledge economy where knowledge is essentially free and readily available, that insight and innovation is driven by being able to ask the right question.
Are we asking the right question or not? Great, great.
And I think it becomes more and more important as we accelerate in the threats that we face, the challenges that we have, just kind of having that focus is going to definitely be helpful.
Well, thank you both. This has been a phenomenal conversation.
I definitely learned a lot. Hopefully, the viewers will learn a lot from this as well.
I do want to talk about a couple of upcoming episodes for this podcast.
The perimeter problem, where we talk about how the perimeter is no longer the perimeter, and that's rapidly shifting.
The resilience reimagined. I think in the context of what we just talked about, the threat landscape is changing, the compliance landscape is changing, regulatory landscape is changing.
And as a result, you've got to rethink and reimagine how resilience is done in the context of even geopolitical changes that are happening as well.
So much broader topic.
And then, of course, the post-quantum conversation, which a lot of companies have been raising in the last three months.
We've had a lot of questions and conversations on that as well.
And then Agile security. Those are the upcoming episodes, and we'd love to have you watch those as well.
Thank you for joining me both.
This has been a pleasure and great learning experience. Thanks for having us. Thanks so much.