In this episode, host and Cloudflare Field CIO Khalid Kark sits down with Jonathan Jaffe, CISO of Lemonade, to discuss a paradigm shift in cybersecurity: the move from traditional "backup-and-recovery" resilience to agentic security.
As AI-enabled threats become more relentless, Jaffe argues that human-centric security operations can no longer keep pace. He explains how Lemonade is pioneering the use of "platoons" of autonomous AI agents to handle everything from real-time threat intelligence ingestion to automated code reviews and vulnerability patching.
Khalid Kark is a globally recognized technology strategist and Field CIO at Cloudflare, where he works closely with C-suite leaders and board members to shape secure, scalable, and resilient digital strategies. With over two decades of experience at the forefront of technology leadership, Khalid helps organizations navigate the complex intersection of business innovation, cybersecurity, and enterprise transformation. Previously, Khalid led Forrester's Security & Risk and Technology Leadership practices, served as Global Managing Director of Deloitte's Technology Leadership Program, and chaired Deloitte's Tech Eminence Council to elevate thought leadership in AI, cybersecurity, and digital innovation.
Transcript (Beta)
You should be hiring or training your current people to learn how to develop agents, agentic technology.
Hey everybody, this is Khalid Kark.
We're here with Jonathan Jaffe. He is the CISO of Lemonade.
Welcome, Jonathan, to our second season of the Signal podcast. It's a pleasure to have you here and we're looking forward to a great conversation.
Yeah, likewise.
Thanks, Khalid. Well, let's jump right in. The first thing that it would be great to get your perspective on is over the years, we've seen the word resilience being thrown around all over the place.
And over the last, specifically the last couple of years, AI has really changed how the threat vectors are impacting even traditional companies.
And so how do you think about resilience in this current era where AI enabled threats are just relentless?
And we're starting to see the traditional notion of resilience as, hey, I've got a backup and that's good enough, really going out of the window.
And so we'd love to get your perspective on how do you think about resilience in this current era where AI enabled threats are just relentless on companies?
Resilience in the age of AI. You know, when I think of resilience, in my mind I define it this way.
A street version or definition of resilience for me is if I keep getting slapped around, can I continue to stand up?
Or if I get knocked down, do I get back up? In the age of AI, while I don't feel like I've seen a lot of the agentic attacks that the news says are forthcoming, I have no doubt that those things are going to happen.
I'm sure I've seen some phishing-based attacks that are AI-driven. In fact, I know I have.
But it's coming. It's coming in forms that I probably don't yet understand.
The way that we're approaching resilience in this age is primarily by building agentic security.
That is Lemonade's approach. We can get into it a little bit more, but the short answer is building our own agents to handle security at the scale of AI.
So that's interesting, especially for CISOs and security leaders in very traditional environments, to think about agentic security as kind of a core tenet of resilience.
And so if you're thinking about the notion of resilience as agentic, how does that change the thinking around your architecture, for example, or even your leadership team?
How do you convey that to them, that it's no longer about kind of more central, one-off resilience things, but agentic being kind of focused on individual, specific areas of threat?
I don't worry about trying to convince my management, because they're on board with the idea of adopting generative AI in all forms throughout the company.
But our approach is to build agents that are constantly doing work for you.
I can give you a couple of examples.
So rather than writing detections looking for attacks that might be agentic, to me that's not very interesting.
It's writing agents that are reading threat intelligence feeds regularly, talking to other agents that we've built that will take the indicators of compromise and build detections, and then push those detections out as code in a near real-time fashion.
Our vision is when I go to sleep at night or the team is not working, our agents are actually building detections.
They're agentically building detections based upon the latest news article from the feeds we get.
That's an example of using agentic technology to build in resilience, because it's always working for you and improving your system, or at least meeting the latest demands that are coming at you from attacks.
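The workflow Jaffe describes, one agent reading feeds and another turning indicators of compromise into detections that get pushed out as code, can be sketched in miniature. Everything below (the function names, the feed shape, the toy rule syntax) is invented purely for illustration and is not Lemonade's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One indicator of compromise (IOC) pulled from a threat-intel item."""
    kind: str   # e.g. "ip", "domain", "hash"
    value: str

def ingest_feed(items):
    """Feed-reader agent: extract IOCs from raw threat-intel items."""
    return [Indicator(kind, value)
            for item in items
            for kind, value in item.get("iocs", [])]

def build_detection(ioc):
    """Detection-builder agent: turn an IOC into a toy SIEM-style rule.
    A real agent would emit vendor-specific detection syntax."""
    field = {"ip": "dst_ip", "domain": "dns_query", "hash": "file_sha256"}[ioc.kind]
    return {"rule": f"{field} == '{ioc.value}'", "source": "threat-intel-agent"}

def pipeline(items):
    """End to end: feed items in, deployable detection rules out."""
    return [build_detection(ioc) for ioc in ingest_feed(items)]
```

The point of the sketch is the handoff: the reader agent never writes rules, and the builder agent never touches the feed, so each stage can run, and improve, independently.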
You mentioned something really interesting, which is this notion of proactivity around what you're doing.
And so it's not about waiting for things to come to you. You're out there seeking, based on whatever threat intelligence you're getting, where the landscape is headed, and then have automated ways of not just understanding, but putting controls around it.
Now the question becomes, a lot of security leaders that I talk to think about this and say, how do I create governance around it?
How do I know this is going to be the right set of controls that I put in there in response to the threats that are being detected, etc.?
What kind of guardrails do you have?
What are you putting in place to ensure that you're not overly protecting or under-protecting your organization in some ways?
I think some of the controls that you'll have to build over time, as the technology develops and your teams improve their skills, are going to be controls that monitor the controls you've already got.
You've got a list of controls; for example, segregation of duties.
Another part of the team, say your development team, builds a way that code is written by agents rather than people.
Your agents need to be aware that code is being checked in by an agent. And because you require segregation of duties, your agent is going to kick off an agentic review of that code.
So not the agent that wrote the code; you kick off a different agent. It might be a security agent that actually does the review of the code and then will submit a PR back to the first agent.
Those are examples of how you can build agents that monitor your controls and then look at new areas where you're doing work and make sure that those agents or people are following the controls.
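That segregation-of-duties rule, the reviewing agent must be different from the authoring agent, reduces to a simple routing check. This is an illustrative sketch under assumed names; `pick_reviewer` and the agent identifiers are inventions for the example, not anything from Lemonade's stack:

```python
def requires_independent_review(author, reviewer):
    """Segregation of duties: the reviewer must not be whoever
    (agent or human) authored the change."""
    return author != reviewer

def pick_reviewer(author, review_agents):
    """Route the check-in to the first review agent that is not the author,
    so no agent ever reviews its own code."""
    for agent in review_agents:
        if requires_independent_review(author, agent):
            return agent
    raise RuntimeError("no independent reviewer available")
```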
How are you thinking about building your team as a result of that?
I mean, I think this is a fairly new type of thinking that you would need to instill in terms of the process.
Who's going to do what?
How's the handoff between agents and humans and all of that? I'd love to get your perspective on that.
I advise my peers that their goal should be everything in security should be automated.
And it's not that you're going to get there in the next few years, if ever.
But if you do that, then rather than hiring people to be analysts and to look at alerts or to be SOC people or AppSec people who are reviewing code, you should be hiring or training your current people to learn how to develop agents, agentic technology.
We've done this at Lemonade. You should give your team a minimum five hours a week requirement to learn.
I won't name a specific technology, but learn some agentic technology out there.
And not only will you end up with a team that's developing security solutions rather than responding to alerts, you're going to inspire them to step up and stay relevant in the new world we're entering.
And you're going to give them motivation to work harder.
And they'll still feel like they've got job prospects because they're learning the latest, most important technology.
They're not stuck clicking false positives for half of their day.
And so I think what you're describing is the really exciting part of any cybersecurity professional's job, which is to actually be making adjustments, making changes, making tweaks, not really just reviewing stuff.
In terms of threat intel, how do you think about threat intel? Where are you thinking?
I mean, of course, every cybersecurity leader has multiple feeds that they're taking, etc.
But some of them, again, may be dated. Some of them may not be as relevant, etc.
And internally, what we are starting to see is that certain industries, even certain companies, are being individually targeted, whether that's bot attacks or others, etc.
Are you seeing some of that? And how do you envision your threat intel process change as a result of the whole shift to agentic as a mindset?
I don't know if I'm seeing targeted attacks ourselves.
I can't say that they're not there. I don't know if I'm seeing it. But our approach is to pull information from the feeds that we like and the sources we like, and that's all over the place.
But because we now do it agentically, it's not that a human has to review it and you have a limited number of news articles you can read a day.
But rather, your agents now, you give a list of sources for them to follow, and your agents are reading it as soon as these things are published.
Now you have other agents.
Some agents are ingesting your sources, but then they're passing out the result.
They're passing over the results of that to other agents that know about your environment.
And then those agents that know about your environment or a third agent can say, ah, you know what's coming in now about Microsoft doesn't relate to us because we don't use any Microsoft.
Or this particular NPM package, I just check with my cloud security agent.
We don't have that.
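The dismissal step described here, an environment-aware agent dropping intel that doesn't touch your stack, is essentially a relevance filter against an inventory. A minimal, hypothetical sketch (the advisory shape and the inventory contents are invented for the example):

```python
def filter_relevant(advisories, inventory):
    """Environment-aware agent: keep only advisories that mention
    technology we actually run; everything else can be dismissed
    without a human ever reading it."""
    return [a for a in advisories
            if any(tech in inventory for tech in a["affects"])]
```

In practice the inventory side would itself come from another agent (the "cloud security agent" in the conversation) rather than a static set.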
I can dismiss that. In that case, have you seen or experienced a reduction in the volume of things that you are putting in your environment in terms of mitigations or controls and so on?
Because again, I know a lot of companies just outright as soon as something comes up, kind of putting patches, et cetera.
And so how has that changed your approach?
Yeah, good question. Some agents that we've written actually try to look for actual exploitability.
So rather than having a proxy for security by saying a CVE over seven has to be fixed, we actually now have agents that look at Dependabot alerts, look at the CVEs, and then actually look in the code to see if the method that's vulnerable in that dependency is actually being called.
And if not, we just dismiss it. So what we're seeing is a great reduction in wasting developers' time fixing stuff that's simply not exploitable, because now we can figure it out.
We're not perfect yet, but that's the direction we're moving in.
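The exploitability check Jaffe describes, confirming the vulnerable method is actually called before asking a developer to patch, could be approximated naively with a symbol search. This toy sketch just greps source text; a production agent would parse the AST and trace the call graph. All names here are hypothetical:

```python
import re

def is_exploitable(source_files, vulnerable_symbol):
    """Naive reachability check: is the vulnerable method from a flagged
    dependency actually called anywhere in our code?
    `source_files` maps file path -> source text."""
    call = re.compile(rf"\b{re.escape(vulnerable_symbol)}\s*\(")
    return any(call.search(src) for src in source_files.values())
```

If this returns False, the CVE can be deprioritized rather than reflexively patched on severity alone.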
And we've made great progress. That's phenomenal. Can you talk a little bit about the volume of agents?
I mean, what are we talking about in terms of size and scale and volume of the activity that you have?
And also, if you could comment on where do you see that going?
Is this going to exponentially increase?
Do you think that based on what you've done so far, you've got a pretty good handle on a lot of the threat vectors that are currently kind of in your environment and so on?
So I think it will continue. The creation of agents for us will continue to increase rapidly, probably nonlinearly for a while.
And then I see us hitting a homeostasis.
It won't take long. We don't want to grow forever in terms of building security agents.
Not at all. How many agents do you create, let's say, in a month or last six months?
What's the volume of the size and scale of the agents that you're creating?
It's really not that great. One, because we've only been at it for about a year.
I think we probably have about three platoons of agents that are focused on specific tasks.
We've got some that are doing threat intel and then others that are doing code analysis.
And we're starting to build other ones that are doing environmental analysis, like looking at our cloud environment, saying, okay, I see that you've got this sort of environment.
Let me see if I can kick off an AI pen test, which we don't write ourselves.
We have another company we use for that.
And feed this other company a prospective pen test plan.
Let it run the results. So the answer to the question is we have about three platoons of agents, and each platoon is anywhere from two to maybe six individual soldiers in that platoon.
You mentioned something which kind of triggered another thought around third parties.
Of course, there's a whole host of risks that is introduced in your environment with third parties and especially the SaaS providers and what they bring to the table, etc.
How do you think about that?
How do you manage that? Is that something that, from an agentic perspective, you've been able to kind of drive some mitigations around and build some capabilities around?
We do that with other third parties, not surprisingly. So we don't build anything that examines third or fourth parties ourselves.
I prefer using startups for this because they have the most interesting solutions and you get the best service.
But depending upon the area, we use a couple of different startup vendors to look at third parties and third party risk.
Let's talk a little bit about really going forward, the future kind of notion of where do you anticipate you'd be spending a lot of your time, let's say six months, a year from now, in terms of either mitigations or really building your organization's capabilities, not just around cybersecurity, but actually training the rest of your workforce around cybersecurity components.
And so one of the things that we've seen over the years is that cybersecurity is something that has to be built in.
You can't bolt it on. And in that case, the whole organization in some ways needs to be engaged, involved, really drive some kind of a mindset shift.
Again, you may be in a slightly different environment because you're fast moving, you're digital first, all of that.
Are you experiencing some of that shift in the workforce?
How do you think about the workforce and educating them around these notions of cybersecurity and embedding that into their day-to-day work?
I think at this point, the answer of we use generative AI won't surprise you. So part of our threat intel feeds include, if there's threat intel that relates to the workforce, we have an agent that will write up a cute, quirky message and publish it in Slack to the security announcements channel to notify people.
So rather than the once-a-month video training that we do and the regular phishing simulations that we do every month, or in addition to those, now they're actually getting messages relevant to what's trending.
And it's generated semi-automatically, meaning threat intel comes in, a message is written, I actually have to review it, and then I publish it in Slack.
I think in a short while, we'll allow it to automatically publish stuff, but that's how we try and keep the workforce pretty security minded.
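That "semi-automatic" flow, agent drafts, human approves, then the message goes out, is a standard human-in-the-loop gate. A hypothetical sketch (the channel name and the callback shape are invented for the example):

```python
def publish_advisory(draft, approve):
    """Semi-automatic publishing: an agent drafts the Slack message,
    but a human reviewer (the `approve` callback) gates the actual post.
    Returns the posted message, or None if the human rejects the draft."""
    if approve(draft):
        return f"[#security-announcements] {draft}"
    return None
```

Moving to the fully automatic mode Jaffe mentions would just mean replacing the human callback with an always-approve policy, which is why the gate is worth keeping as an explicit parameter.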
And how has the reaction, what has been the reaction?
Have you seen a shift in the way people are starting to think about it? Again, if it's going to be, yeah, go ahead.
Yeah, I'm not sure. I don't have a way to measure that yet.
I think one of the downsides of having AI generate these messages is, honestly, they're just not as funny or interesting as if you write it.
And they all kind of sound like the same level of humor.
And I don't have a way to measure people's resilience with regard to security.
I don't get as many thumbs-up emojis for these messages, maybe because they're coming two a day now instead of four a month.
But check with me in a year and I'll see if we're more resilient.
I don't know. And I think that's the other thing, right? So how often, and are you being thoughtful about how often those messages are being generated?
And is that creating a little bit of a fatigue from a user perspective? Yeah, if you hear the same person crying wolf all the time, you go on to something else, right?
That's always a risk. If you had to think about your role as a chief cybersecurity officer in a company that's on the leading edge, any advice for your peers as you think about the role, the evolution of the role?
We talked about agents doing a lot of the work that traditionally security analysts did, etc.
How do you think about your organization, your skill sets, your capabilities?
What learnings would you share with your peers in terms of really what are some of the things that they should be thinking about going forward from a governance, from an organization, from a leadership perspective?
I really think that if you want to have a good security team, you need to motivate your people.
You need to look at what burns them out, which I think is dealing with alerts and dealing with investigations that end nowhere, and inspire them by letting them play with the latest technology and getting out of the way.
If they're decent, curious security people, they're going to want to play with new tech, and they're going to want to implement it in your system, and they're going to come up with good solutions.
I think your job as somebody who's been doing it for at least 10 years more than the people who work for you is to just give them some direction and then step out of their way.
Motivate them, trust their decisions with guidance, and then let them try and solve the problems.
Yeah, you mentioned something really interesting. A few years back, we did some research, and we found that more than compensation, more than anything else for a tech person, it's actually staying on the leading edge of tech that motivates them to stay anywhere.
Right. It's unsurprising. I mean, there are two reasons.
One, it's more interesting, and the other is because it keeps you marketable.
Are you getting involved in things that may not be traditionally part of cybersecurity?
Are you taking on more, or is the threat landscape enough to keep you busy, at least for right now?
I'm not taking on more. My role is still defined well within what it was when I started almost six years ago.
But just because I'm in a growing organization, there's more security work.
But one thing I'll add, though, is that with automation, I haven't had to increase the size of my relatively small security team of just six people in three years.
We do a lot more, and we handle much more in terms of making sure the company is secure.
And we have a better security posture than I think 95% of the companies out there, just looking objectively at what our CSPM and other tools tell us.
And yet, we haven't had to grow the security team because they've become security engineers, not security analysts.
Jonathan, this has been great. For a lot of our podcasts, we do lightning round questions, and would love to get a very quick, instinctive reaction from you on a lot of these questions.
And so let's jump right in, unless… I can't do the Sam Harris version where I still go on for 15 sentences?
Are you good with the lightning round questions?
You've read them? Sure, I'll give it a shot if I haven't. Yeah, I'll give it a shot.
We'll see. Okay, all right. Well, I think usually gut reaction is great.
So let's jump into the lightning round. One word answer. If you had to describe the current state of cybersecurity in one word, what would it be?
Ulcerating. Do you want to explain why? Yeah, for me, just with the rapidly increasing proliferation of agentic tools beyond technical people, it makes it much more difficult to contain the risk, because now everybody has access to these really powerful things at an increasing speed.
So it's pretty unnerving. It's tough as a security practitioner now.
Absolutely is. And I see that all the time as we engage with cybersecurity professionals across industries.
And again, it used to be certain industries were targeted a lot more.
It's now pretty consistent all over the place.
All right, next one. Most overused buzzword. What's the one word that you think that security or broadly tech that you wish we could retire?
AI. Yeah, well, AI specifically, because it can mean a hundred different things to a hundred different people.
Without any context, it doesn't make any sense. But yeah, completely agree with that.
This is the next one. What's the biggest obstacle in managing risk today from your perspective?
Again, it could be, I mean, the options are complacency or scale or complexity or something else.
Speed. Speed.
Explain. What I alluded to before, which is the speed of adoption of riskier and riskier tech that we don't yet have solutions for.
Agents and agentic things running on everyone's laptop and then SaaS services.
It's hard to keep up with the speed of adoption.
And the speed of how quickly businesses are changing as a result of it.
Right. And so, yeah, you're almost always trying to keep up with what's going on from that front as well.
Let's talk a little bit about looking ahead.
Complete the sentence. In five years, corporate leaders will need to be fluent in.
Well, it's more than one word: agentic development. Okay. And in five years, where do you think they'd be?
A lot of the leaders, do you think that they'd be pretty advanced or do you think there's still be learning?
It depends on how you define a leader.
I think larger companies are always laggards and I don't think of them as leaders.
I think the mid-sized companies in five years will have adopted a fully agentic development program where agents are building all applications, they're building products, they're designing product.
And then the people are there to give it shape and to interact with these things using natural language rather than coding or using design tools.
And I think in five years, I think leaders of all sorts will think of this as natural.
No, that's fair.
And I think I agree with you. If you're not doing that, you're going to be left behind, right?
Because things are moving so fast and quickly that you won't be a leader by then if you're not embracing agentic ways of engaging and driving.
Last question, last lightning round question. Talk about an interesting show or a podcast or a book that you read recently.
In my top three favorite podcast series, it's Sam Harris, Making Sense.
Making Sense. Yep, that's a great one.
What is it that's interesting for you? It reminds me that rational, calm discourse, deep thought, and intellectual curiosity can be interesting, even on topics you may not know much about.
Calm intellectual discussions can still be listened to and appreciated over people with bullhorns.
Yes, especially these days when you don't hear a lot of that.
So, yes, I completely agree. Thank you very much, Jonathan. This was a phenomenal conversation.
Good luck with everything that you're doing. And we appreciate you being on the leading edge for all of the other cybersecurity professionals and wish you success in what you're doing.
And looking forward to seeing more of you and engaging more on these topics with you.
Khalid, thank you very much.
It was an honor and a pleasure. Thank you.