Human capital, AI ambitions
Presented by: Trey Guinn, Lior Gross
Originally aired on January 14 @ 9:00 AM - 9:30 AM EST
In this episode, Cloudflare Field CTO Trey Guinn speaks with Lior Gross, CTO at Caliente.mx, about how organizations are navigating AI integration, starting not with tooling, but with talent. Together, they explore the findings of Cloudflare’s App Innovation Report, including why 87% of organizations believe they already have the right people and structures to succeed with AI. The conversation dives deep into questions like:
- Should you hire AI specialists or train existing teams?
- How do different types of businesses, AI-native vs. AI-augmented, approach adoption?
- Does infrastructure need to be modernized before leveraging AI?
- How can organizations responsibly build AI fluency across teams?
Packed with firsthand experience and thoughtful guidance, this episode offers a grounded perspective on how to develop real AI competence, not just chase the hype.
📘 Don't miss the full Cloudflare App Innovation Report for more insights.
Transcript (Beta)
You'll start by experimenting, and experimenting with more modern technology is always easier.
Hello, my name is Trey Guinn, Field CTO at Cloudflare.
I'm here with Lior Gross, CTO from Caliente, to talk about human capital and AI ambitions.
How organizations are really trying to use AI to drive their application development and their organizations forward.
We actually did a survey of organizations and found that 87% believe they already have the talent and the organizational structures that will make them successful.
And we're going to dive into that and more insights from the App Innovation Report.
And we're going to look into what it is that makes organizations successful with AI and building applications.
So Lior, really excited to have your perspective on this. So we're going to dive into that sort of first stat, this idea that 87% of organizations think they've got the right people and they've got the right talent to build new applications.
There's this big debate sometimes about whether or not you need specialized sort of AI development teams versus can your existing teams do that?
What do you think about this?
Well, first of all, thank you for having me. Thank you for the invitation.
I think, first of all, before we get to those questions, the first question an organization should ask itself is how it is approaching AI.
In my opinion, there are many ways to approach the subject.
But when a company looks at AI, the question is: are we going to use AI as the foundation or basis of our product?
Take a new startup, for example. The technology that startup is trying to bring to market, is it something that can be improved or completely changed using AI?
And on the other hand, you have companies that have certain technology, or are tech companies, but they do not need to just put the AI label on what they do.
They can use AI in their tool set and they can use it in various places.
So the question, the first question would be, what type of company are you?
Yeah. This sort of question of, are you an AI product company?
Are you building literally a service that is AI sort of packaged up, or are you maybe a more traditional enterprise that you're trying to leverage AI to make your teams and your organizations more productive?
I guess in both of those cases, we think about how do you structure an organization and do they have the talent?
Yeah. How do you think about it in those two, like modern versus legacy?
It's not necessarily legacy. It could just be a company that, from its point of view, and that's fine,
does not need to push out an AI-driven technology, but it could use AI in its tool set.
So if we're talking about those companies, I don't know if those are the 87% from that survey question, but I would say they do not necessarily need to staff up and hire new talent to introduce that tool to their tool set.
Not necessarily. Obviously every organization is different. There are leaner organizations, and there are organizations with huge IT teams.
So it really depends. But if we're talking about the other kind of company, the ones that want to introduce AI into their technology and improve it or change it, in that case it might be more difficult to find talent specifically for AI.
I want to give an example.
The iPhone was introduced in 2007. I think it was 2007, around that time.
And I remember that about a year, a year and a half later, iOS was opened to developers. Xcode had been introduced before for macOS, but only about a year, a year and a half later could developers actually use Xcode to develop for iOS.
And companies were looking for talent and they were saying, we need a minimum five years of experience developing for it.
You look at that and you ask yourself: I don't know if it was really the HR department who put that five years there, or if someone actually thought that's what they needed, when developers had only been able to build for iOS for a year or two.
So I think the same applies when looking at AI, and it also depends on the phase of the product and the technology a company already has.
So if the company says, for example, we want to introduce AI, but we do not want to postpone, even by a day, a launch date or a software update, whatever it might be, we don't want to disturb the current process.
We want to experiment, we want to try, but we don't want to disturb the current process.
So yeah, you would need to hire new people.
If you're looking at improving, again, the internal process, and you want to use the existing knowledge you have, then your existing team might be good enough. But there are downsides and upsides to every approach you take.
So if you hire a new team, you have to also teach them everything the existing team knows.
And in my experience, starting as a programmer, as an engineer, and today as a CTO, as a manager: even if you hire a new developer and they sit side by side with the existing ones, it's not like the knowledge is just directly transferred to the other person, right?
It's a curve, it's a learning curve, and it takes time.
So the downside of hiring new people is that they're not, at the moment, skilled on your technology, your product, your history as a company. The upside is that, potentially, you're not disturbing the current process or the current timelines that maybe you promised to investors. And vice versa for the other approach.
If you do go with telling your team, okay guys, let's introduce AI, let's see how we do it, they need to learn, they need to get themselves skilled enough with AI to start implementing that.
So it will take longer, it will not necessarily take longer, but it will disturb your current timeline and you need to know that as management.
So there is this dichotomy you have to deal with: does someone know the organization and your business, versus do they know the new technology? As you think about bringing AI competence into the organization, do you want to optimize for AI skill sets, or for people who know the organization and the business?
Yeah, it's a really interesting thing to think about where an organization is and what's more important.
Do you have any feel for, if someone came to you and asked where they should be on that spectrum, how should they think about that?
I would say, first of all, do not go down the AI path just because it's the current hype and everyone is doing it.
So ask yourself, do we really want or need to implement AI in our technology, in our process, in our business?
Again, looking at those two options, going back to the conversation we just had: is AI a tool in my tool set, or is my goal to push the technology into our processes, our product, our technology?
So first answer that question. Yeah. And then the second one would be, again, do not just go to your teams and push AI and say, I want AI in my product, go find a way to use it.
That's not the way to do it. So from my point of view, going back to the example I gave about iOS, even if you do want to hire new talent, just remember that there are not many people out there with real AI experience.
And even if they do have it, it's not for a long period of time.
So the second question you need to ask yourself is: where am I when it comes to AI?
Do I want to keep the current timelines?
Or I do have the flexibility of, let's pause for a second.
Let's try to introduce AI. We can postpone the launch by a month, by two months, might be more, but it's worth it with AI.
Or I still need to go to market by that deadline, but I want to explore AI.
So it's more of a business question, though personally, and it's always my tendency, I lean toward using the current talent one has in the company, because I think the knowledge gained through the years, or months, or days working at the company is more valuable than any outside knowledge that can come in and maybe also disturb the balance of things.
And that's aside from sometimes, you know, teams might be a bit afraid.
You know, that's a small side point, but sometimes you might have the team, the tech team would be like, wait a second, why are you hiring a new team there, working with this AI and you're pushing to the AI direction?
What does it mean for me? I mean, is my career in jeopardy?
As management, you might not think about those things, but you might see that in people.
And I had certain cases, it wasn't really related to AI specifically, but where other teams were hired, employees came to me like, what does it mean for us?
Are we going to be shown the way out in a few months?
This is a great point. When you think about evolving talent and organizational structures around, you know, having competence in AI, you have to not undervalue awareness of the business and context of the business.
And sometimes maybe that's a harder thing to teach than to have the AI skill set.
Changing gears a little bit, we're thinking about infrastructure modernization.
You know, we talked about, you know, those sort of two kinds of businesses.
You maybe have a modern organization that is shipping an AI product.
Maybe you have a more traditional organization, but a lot of organizations have this thing called tech debt.
We're all very familiar.
They have things that have been in place for a very long time. And one of the other things in our survey, which was super interesting to me, was that nearly half of organizations thought that they didn't really need to modernize their application stack before they could start bringing AI in.
What do you think of that?
You know, what do you think is motivating that? I think this answer comes more from the management or business side, less from the technology side.
I think, and I hope, that if you asked a tech person this question, they wouldn't tell you, sure, let's start with our old stack and see where it goes.
I'm pretty sure this answer comes more from the business and management side of things, where they would like to see AI introduced.
And the priority is more like, for example, maybe they have stakeholders that tell them, wait a second, we want you to integrate AI into this technology.
It's the new thing and we have to get it or we'll pull our funds, whatever.
That's just an example, right? But to me, it sounds more like a push from the less tech-savvy side of the organization.
Yeah, so you can get that. Sometimes AI is the hammer, so everything looks like a nail.
But I guess if you think about the organizations and folks in the community you've spoken with, when you look at different infrastructures that exist, obviously some folks are still in the world of on-prem and VM and other folks are at the other end of the spectrum with sort of full serverless.
Do you find that there's like a baseline or some capability that organizations need to get to from a modernization perspective to be able to effectively build AI applications?
Let's take the on-prem approach, for example. Some organizations will decide not to change that at all.
But if they do want to push for AI with that technology, it means they actually have to go and buy GPUs out there.
Yeah. Right? And that's not a cheap thing, but again, it's a decision for that company.
It's easier and more scalable to do it serverless, or maybe with VMs as well.
But the right approach, in my opinion, would be to use serverless, also for flexibility.
And price-wise, at least in the beginning, if you're experimenting and you're trying those things, imagine being on-prem, spending thousands of dollars on GPUs and then ending up not using them.
Well, testing them, trying whatever AI you're trying to integrate.
I don't want to say fail; I'd say maybe more like deciding that it's not worth the investment, but you already made the investment.
So if you're more flexible, more modern with your technology, then the experiment, and you will start with experimenting, there's no way to avoid it.
Yeah. You can try to avoid it, but it's not recommended. You'll start by experimenting, and experimenting with more modern technology is always easier.
You take away all the pain points you might have; just by being modern, the experiment will be faster.
You'll get the results that you want way faster than taking your old technology and trying to fix all the pain points first.
Wait a second. Now I need to update my database.
Now I need to get a new version of Postgres or whatever it is you're using.
Wait a second. It's not scalable. I cannot install the drivers needed for using the GPU.
Attacking all of that, or rather avoiding all of that, is more correct. You'll avoid all those pain points if you are more modern with your stack.
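To make that idea concrete, here is a minimal, hypothetical sketch of the "modern" experimentation path described above: instead of buying GPUs up front, you call a hosted, pay-per-use model behind an OpenAI-compatible API. The base URL, API key, and model name below are placeholders, not a specific provider's real values.

```python
# A minimal sketch of experimenting with a hosted, pay-per-use model
# instead of provisioning on-prem GPUs. The base_url, api_key, and model
# name are placeholders; swap in whichever provider you are evaluating.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.com/v1",  # placeholder OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                       # placeholder credential
)

response = client.chat.completions.create(
    model="example-chat-model",  # placeholder model name
    messages=[
        {"role": "system", "content": "You summarize customer support tickets."},
        {"role": "user", "content": "Summarize this ticket: the user cannot reset their password."},
    ],
)

print(response.choices[0].message.content)
```

If the experiment turns out not to be worth the investment, you simply stop calling the endpoint; there is no hardware sitting idle.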
Okay. Oftentimes we hear from technical leaders, business leaders, where do I get started with AI?
What's the application I should, the killer app I should build first?
What do you think about that approach where you're trying to figure out what to start with first?
I'll take a different angle. I've heard a lot of the same, but a lot of times it comes as a push from management.
We want to see AI. I want you guys to use AI.
I think it's wrong. Because basically they're saying, here's the solution.
Now go find a problem. Right. It's the wrong way around. So again, AI can be a tool in your tool set.
And I think this applies more to this case. And I would say that, just remember, you have that tool in your tool set.
And the next time you encounter a problem, ask yourself, can I improve or change or solve the problem with AI?
That's a great point. You have this sort of tool in your toolbox, but of course, you need your builders to be competent with that tool.
And so maybe that's the better question to ask is how does an organization start to build competence with AI?
Should they wait till their first big project? Should they do some experimentation?
I don't know if you have any tips that you'd recommend for leaders watching this.
I'd recommend to start experimenting already. The tool exists.
AI is out there. There are different ways to use it. You can pay for it.
You can use free ones. You can do it yourself. I would say whatever it is, decide internally.
If and when we will want to use AI, how will we want to use it and start experimenting with it?
So when your finance team comes to you and says, we have an issue, we're under an audit and we thought maybe AI can help us.
Well, if you're there and you never experimented with AI inside your organization, then you probably won't be able to implement an AI solution to that problem.
But if you already experimented, then you already have that tool, you know how to use it.
Maybe not apply it specifically to that problem, but it will be way easier for you to apply that and just tweak around it and find the right approach.
But that would be my recommendation. Start experimenting. Start making the decisions you would like to make, because when AI just started, I remember that a lot of organizations were simply blocking it.
Yeah.
Like all the famous big ones, ChatGPT, Google's tools, they were all blocked internally by organizations.
You could not use them. And they would send big emails with scary titles, like do not use those tools.
And I think it was wrong back then, and it's wrong today.
But at least make the decision of how you are going to use it and tell your teams: it's not the top priority,
we don't want to push AI down your throats, but we want you to start getting familiar with it and start having a little bit of knowledge.
So when the issue does come, we can use that. That's a really good call.
So start to dip your toe in the water. So you have some organizational competence.
You know that when the right problem comes along that AI is a solution for, you can dive in.
I guess the other thing I would ask is, if you're an organization, oftentimes you only know your own world.
If you're a technology leader in one of those organizations, how would you help them sort of self-evaluate how competent they are?
Are there other things you look at in your own teams where you know that, oh, you know, we feel like we're in a good place right now.
I don't know, like measuring maturity, essentially. Well, if we're talking specifically about AI, I would tell them, first of all, do other teams, let's put aside the tech team, do they even know those tools exist?
Do they think about using them?
You might hear a lot of, let's say, more old-school kind of people who might just, you know, push away from it, no matter what.
Just because the combination of those two letters comes up.
Like, I don't want to use it. First thing will be education.
Leave aside technology. Tell people, teach people, that it's not something to be afraid of.
A great example would be, I heard that companies were telling their graphic designers to avoid using all those AI tools to generate like different kind of images or logos.
Just stay away from it because of copyrights.
And they were like, really, they were blocking it just because of that.
And I said, you know what? It's not very much different than your graphic designers going on Google and finding an image and using it.
That's the same copyrights.
They know not to do that. Trust them enough that they won't do the other.
And I think it's more about, that's why I'm saying it's more about education. If you're afraid of people sharing information out there, you know, secret company data being leaked because someone uploaded it to some AI tool out there,
teach them the repercussions. What is an LLM? What are you doing by uploading last year's revenue Excel sheet to ChatGPT, for example?
What are the repercussions of that?
So teach them, educate them. Don't be afraid of it.
Be aware of how to use it. So I think that would be the first approach in seeing how mature an organization is.
Are they talking about it or not? If you're talking about it, that's great.
Now, where are you afterwards? The second thing would be to look inside and see how many people are already using AI for different things that, as an organization, you don't know about, but they are.
And it happens. Talking about my team,
for example, I've learned that they've been using different AI tools out there for code reviews or for consulting, and I'm happy about it.
Just, you know, be cautious about what you're feeding it because you're getting answers, but you're also teaching it.
That's an LLM. You're teaching it something. So be aware of what you're doing. Use it smartly.
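As a hedged illustration of the kind of experiment mentioned above, here is a hypothetical sketch of LLM-assisted code review over a local diff. It assumes the openai Python client and an OPENAI_API_KEY environment variable; the model name is only an example, and, echoing the caution above, only send code you are comfortable sharing with the provider.

```python
# Hypothetical sketch: ask a hosted model to review the current branch's diff.
# Caution (per the conversation): only feed it code you are allowed to share externally.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Collect the diff of the current branch against main; adjust to your workflow.
diff = subprocess.run(
    ["git", "diff", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

review = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever your team has approved
    messages=[
        {"role": "system", "content": "You are a senior engineer doing a careful code review."},
        {"role": "user", "content": f"Review this diff for bugs, risks, and style issues:\n\n{diff}"},
    ],
)

print(review.choices[0].message.content)
```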
Great. And so are those some of the first things you look at, I guess, in a dev team that's trying to leverage AI?
Code review, code generation.
I don't know if there are other sort of key things you look at it from a competence perspective.
Well, the first thing would be, if you're a tech team, because they're obviously a bit more tech-savvy, learn the basics of what AI is.
Because I would think it also gets them interested in the matter, more invested in it.
So if they know what an LLM is, a large language model, what a token is, all those things, they are more aware.
And if we're talking about even developing an AI mechanism internally for internal tools, they can do that once they are aware, once they've learned a little bit of the basics about AI.
And I think also as an organization, you're giving them some kind of a push and educating them further.
And from the human resources point of view, they're happier.
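To ground the earlier point about knowing what a token is, here is a small, hypothetical illustration using the tiktoken library: models read, bill, and limit context in tokens rather than words, so the counts rarely line up one-to-one. The encoding name is just one common choice.

```python
# A small illustration of tokens: the units LLMs actually process and bill by.
# Uses the tiktoken library; cl100k_base is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Human capital, AI ambitions"

tokens = enc.encode(text)
print(tokens)                                    # a short list of integer token IDs
print(f"{len(tokens)} tokens for {len(text.split())} words")
```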
Yeah, makes sense. Thank you so much, Lior. It's been a great conversation, understanding your perspective on how organizations can drive AI competence, how they can start dipping their toe in the water.
And I love your point about how important it is to have context on the business, not just technical competence with AI as a subject matter.
So as we move forward, I want to invite everyone watching to follow along with upcoming episodes of Beyond the App Stack.
We also have a podcast you can subscribe to. And please be sure to download and read the Cloudflare App Innovation Report.
And thank you again for watching with us today.
