Cloudflare TV

Developer Speaker Series: Building AI Tools

Presented by Dani Grant, Dawn Parzych, Alex Volkov
Originally aired on 

In recent months we’ve seen an explosion of AI tools and products coming to market. Hear from Alex Volkov, Founder and CEO at Targum, and Dani Grant, CEO at Jam, on their journey as we discuss:

  • What problems their products solve for companies and developers.
  • What motivated them to build their products.
  • The role Cloudflare plays in their journey.

Transcript (Beta)

Hello, I am Dawn Parzych. I am the Director of Product Marketing for the Developer Platform at Cloudflare.

I'm very happy to be joined today by two founders that are building their own companies and using Cloudflare as a part of their journey.

We're going to talk about their journeys as well as AI. So I'm going to hand it over to them to introduce themselves and then we'll dive into some questions.

Dani, do you want to start? I'd love to. It's really special to be here. My career started at Cloudflare.

I joined as the fourth product manager at Cloudflare and got to see Cloudflare grow from 100 people to about a thousand.

And it was there that I met my now co-founder while working on the product strategy team.

The leader of that team, Dane, he has a way of inspiring his team to do more, to push faster, to see what's possible.

And we wanted to run so fast. But one of the things we found would slow us down, even on one of the fastest-moving teams at one of the fastest-moving companies in technology, was all the miscommunication and back-and-forth about bugs and fixes between product and engineering.

And so we really wanted to fix that for us, but also for the industry.

And so that became Jam. If you've ever reported a bug to engineering, you've probably had the experience that you create the Jira ticket.

You've got all the details there and the engineer opens the ticket.

They say it works fine on my end and they just cancel the ticket.

And so we built Jam to make sure that never happens again.

Great. And Alex, how about you? Hey Dawn, thanks for having me on. I have a background in full-stack engineering.

I've been at the same startup for a decade. I was a Cloudflare customer.

I was the one trying to bring Cloudflare on for more of the stuff that we do.

And once I started my own journey, I found AI being very exciting.

I started with, like, Stable Diffusion, diffusion models, and then OpenAI released Whisper.

And I used Whisper for something like a personal project of mine to translate Zelensky on Twitter, because he posted a statement and it was in Ukrainian.

I'm Ukrainian origin, but I don't speak it. And so it really bothered me.

I was like, hey, I can now fix this. Whisper just came out.

And so I posted this on Twitter, and it took me about five hours to build manually.

And then folks started really liking this, and I figured I could probably automate it.

So I sat down for a weekend, automated it, and this kind of became my thing. Targum is the company that kind of sprouted from that effort, where it started with, like, helping folks understand what's going on over there.

And then I had an issue with, okay, I have all these videos now, what do I do with them?

And it's like, oh, Cloudflare has a stream product.

So why wouldn't I build on Cloudflare?

And this is how I got into the Launchpad program at Cloudflare, where a bunch of startups can join, and Cloudflare helps them and also exposes them to some VCs.

And so Targum is now my company. It's in production; it runs on Stream and Workers.

And aside from that, I'm trying to stay as up to date about AI as possible.

So I'm doing some consulting and content creation around AI.

Oh, sorry. Go ahead. I think both of your stories are so interesting, because you built a product for a problem that you were both facing.

And to me, that is like such a sign of like good products and the products that do well.

It's like, I'm having this problem, and I want to fix it for other people so that they don't have this problem.

So, love that. Sorry, what were you going to say, Dani? I was just going to say, you know, like everyone in AI, we both came from building non-AI products and features, traditional product development, and then moved to AI.

And I was curious to hear, Alex, your experience building AI features, how it differs from traditional features.

And I'd love to share what we found at Jam. We just shipped our first AI-powered feature, and it was quite different, the internal processes to ship it.

Most definitely. So there's a term running around kind of the industry called prompt engineering, right?

These large language models, there's a way for them to learn new concepts via in-context learning.

And so you can do some stuff in the prompt that you send them to get them to do different tasks.

That's one of the emergent properties of like large language models beyond the training.
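As an aside for readers, the in-context learning Alex describes usually just means packing worked examples into the prompt text itself, so the model picks the task up at inference time with no retraining. A minimal sketch in Python (the sentiment task, the example reviews, and the `build_few_shot_prompt` helper are all hypothetical, for illustration only):

```python
def build_few_shot_prompt(examples, query):
    """Pack labeled examples into the prompt text so the model can
    pick up the task from context alone, with no retraining."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unanswered final slot is what the model is asked to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Loved it, worked on the first try.", "positive"),
    ("Crashed twice and lost my data.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was painless and fast.")
print(prompt)
```

The resulting string, sent as-is to a language model, is the whole "engineering" artifact, which is exactly why the practice feels more like spellcasting than software.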

And I don't love this term, prompt engineering, because as a software engineer, this feels nothing like engineering.

This is almost, I call it spellcasting.

It's almost magic. You do some woo-woo, and then something magical happens on the other end.

And, you know, it's somewhere between magic and psychology almost, because you have to understand kind of the internal processes.

You have to understand it's, like, sequence prediction, and not necessarily that there's a thinking thing there that you can talk to.

And that differs completely from the way I was raised, you know, building software that you can test, that you can run again with the same parameters, et cetera.

So there's a bunch of just exciting stuff there.

So that's just one piece. The other piece is the whole transformer architecture changed so much.

So the parts that we use, for example, Whisper, that's unlike the LLMs.

With Whisper, you know, you pretty much get out of it the task that you want to get.

However, the huge jump in capabilities that we see from week to week is kind of staggering.

And it's also different because, you know, I grew up as a software front-end engineer.

So browsers update, and new web technologies get released, and you have to stay up to date, but never this fast.

And never have things changed as quickly as now. And so problems that were, like, unimaginable eight months ago are now, like, oh, okay, yeah.

Oh, you want image segmentation? Yeah, just go to SAM. You want a full NLP for 99 languages?

Go to, you know, go to Whisper. So just keeping up to date is part of it, is a big part of it.

And also natural language programming or spellcasting is the big, big other part.

I'd love to hear if that's also your experience.

The keeping up to date is really fun. The prompt engineering bit I found really surprising.

When you build non-AI features, let's say you spend two weeks and you sort of build the thing up block by block to get the feature together.

When you build an AI feature, you build the block in a day, and then the rest of the two weeks, you're like sculpting the block.

You're like unearthing Michelangelo's David from the marble or something.

And what that means internally is, you know, in traditional feature development, product and design would be involved before engineering.

So they would do all the planning. And then at the tail end, reviewing engineering's work.

But that doesn't work here because the feature needs to sort of be discovered and unearthed and sculpted.

And so what it means in practice is that product and design do a little bit of planning up front, but then they're working every single day with engineering, iterating, dogfooding, testing.

What it also meant for us, for them to be able to do that effectively in a remote, asynchronous environment across time zones, is that we needed an observability layer for the feature: a way for them to see what's actually being sent to the LLM and what's being returned, versus what the user is actually experiencing.

And for them to be able to tweak that conversation in the background. Which usually for a feature, we don't have to build such a stack of admin tools for ourselves.

But for an AI feature, we totally did and it made a big difference. It's interesting that the processes for AI versus non-AI are so different.
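As an aside, the observability layer Dani describes can start out very small: record every prompt/response pair alongside what the user actually saw, so product and design can replay exchanges asynchronously. A hedged sketch of that idea, where `call_llm` is a stand-in stub rather than any real API:

```python
import time

def call_llm(prompt):
    # Stand-in stub for a real model call; returns a canned reply here.
    return f"(model reply to: {prompt[:40]})"

class ObservedLLM:
    """Wrap the model call and keep a replayable log of every exchange."""
    def __init__(self):
        self.log = []

    def ask(self, prompt, shown_to_user=None):
        response = call_llm(prompt)
        self.log.append({
            "ts": time.time(),
            "prompt": prompt,                            # exactly what was sent
            "response": response,                        # exactly what came back
            "shown_to_user": shown_to_user or response,  # what the user experienced
        })
        return response

llm = ObservedLLM()
llm.ask("Summarize this bug report: login button unresponsive on Safari")
print(llm.log[0]["response"])
```

The point of the design is that the log captures the model-facing conversation separately from the user-facing output, which is the gap Dani's team needed to see and tweak across time zones.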

But I guess being a new technology, these tools, these processes don't exist.

We're all very much learning as we go here and then sharing those learnings with other people so that we can improve upon that.

So aside from these challenges, what has been the most surprising thing to you about launching a product, or launching a feature, based on AI?

Dani, do you want to start? I think that one of the very surprising things is, when you're just sort of listening on Twitter, people are talking about AI replacing various jobs.

Oh, AI can write code now. But the reality is building an AI feature, we really see that it's very limited to what it's very good at.

So it's very good at discrete problems with small context where it doesn't have to do a lot of reasoning itself.

It can sort of do a question answer.

So AI is really, really good at writing code within one file. So if you're using Terraform or any infrastructure as code, like any configs, it's great at that.

Or if you're doing like very basic web app, it's great at that. But applications are much more complex.

They're comprised of various services. They have large code bases, many repos, large infrastructure, and AI is not that good at that.

And so one of the surprises is kind of going from just someone sort of interested in the space, where you hear a lot of, you know, "AI will replace all of our jobs," to being on the inside.

And it's actually like, it's the humans who can use AI as a new sort of paintbrush to sculpt and bring it into the fold thoughtfully, with all their context and expertise and experience, you know, in their role that actually seem to be getting the most benefit.

I definitely agree with that. So there's a saying going around now, like, AI won't replace you, but a person who uses AI will, something like that.

And to that point, I've been using Copilot since pretty much its release.

I was really early on. And when I was using Copilot, I loved it.

I have not that great of memory for syntax. I always used to like look up syntax.

I love the result of programming. I love like bringing shipping products to production, but the act of coding itself, I never loved it, given my inability to remember syntax well.

And so Copilot is like a perfect addition to my tool set.

And now there's a bunch of other stuff. And we know so many people found their passion for building while using ChatGPT on the side, because they now have, like, a senior coach sitting with them and teaching them to do some stuff.

However, the human element of this does not disappear. So I've used all of the agent tools, AutoGPT and BabyAGI and all of those things, to try to code, like, a simple Chrome extension, for example.

And a lot of them just get lost. And the demo effect there is incredible.

Like you see all these videos on Twitter. Oh, I just talked to it and blah, blah, blah.

And it runs and it built me something worthwhile that works.

However, Dani, you're a product manager as well.

I build products. And that's not how products are built. It's cool to build a demo.

Cool. But on the way from there to deployment, there's just so much that happens, not even including just human feedback.

You need to like have users use you and then have them give you some feedback.

And you're like, what does their feedback actually mean?

Does it mean that, you know, sometimes your users give you a feature suggestion?

That's not what you want. You want them to understand their problem.

AI is not there. There's not a lot of understanding of how humans will actually use this.

And the additional piece is complexity. The more you kind of try to solve with AI in just this one chunk, the more it will get lost and the complexities will just spread out.

And so we've noticed definitely the same things.

Specific, well-defined, fine-tuned tasks work well. And then you can kind of extrapolate and like build multiple things that do specific tasks well.

And then you maybe coordinate, or, you know, you bring a super agent on top, but definitely the replacement is not in sight, as far as I've seen.

And that's maybe a message to many folks that ask me about, hey, I'm starting coding.

I want to go to this bootcamp or that bootcamp.

Should I even start or will AI just replace this completely? I don't think so.

I think it just will give all of us like better superpowers. Definitely helps me as a solopreneur to build, you know, the back end, the front end, the infrastructure, the customer support.

There's definitely the ability now to do all this, but I don't see AI doing all of this on its own very soon.

You know, in any role, the more of the mundane, like day-to-day, like grunt work tasks that you can streamline, the more time the person has to be creative and thoughtful and take a step back and see what's possible.

And I think that something really exciting for engineers is that the more that AI can write code, write functions, like just spit out the code itself, like that's the grunt work of engineering, but not actually what's difficult or interesting about engineering.

The more the engineer has time to sit back and think about how to construct this in an elegant system that's elegant as a code base, that's maintainable, but also like fast and performant and simple and just awesome for the user.

I think it's going to change the role of engineer from someone who writes code to someone who's more of an architect.

I think what that means is that people who will be more interested in becoming engineers will be more product focused, they'll be more creative, they'll be interested in user experience, these sort of holistic builders.

And I think it's gonna be really interesting to see the role of engineering change, but I don't think it's going away.

It's like the assistive versus generative. AI is great to help reduce the amount of time it takes to search for a question.

It's using Copilot to be like, okay, what is the syntax of this?

I don't remember. Just give me that so that I can move on with the creative, the problem solving, the reasoning pieces of that.

AI doesn't resolve those elements. So I was playing with some of the AI art tools a few weeks ago with my son and giving them prompts and then seeing what came back.

And I'm like, you know what, I don't think artists really need to worry about AI taking over painting and photography because the stuff coming back, while interesting and cool, there's some really weird stuff these things can generate.

I do think there's an important message for people who are new to their career here, which is: the earlier you are in your career, the more junior you are in any role, the more focused you are on just getting something that sort of works, versus when you're more senior, you really care about the quality of it too and take a more thoughtful approach.

I think when you're more junior in your career, especially as an engineer, it's very tempting to have Copilot sort of generate the thing for you.

But the reality is, as soon as you submit the PR, the rest of the engineering team can see, like, this doesn't thoughtfully integrate with the rest of the code base.

It just looks like copy paste from Stack Overflow.

It's like, clearly not designed with the rest of the code base in mind.

And I think that it's very tempting as a new junior person on the team to take these shortcuts.

But I think that actually probably good career advice for junior people out there, on both engineering teams and, you know, marketing and product teams, anywhere they're generating a lot of copy using these new tools, is to first take a step back and just see, like, how does this fit cohesively with our whole strategy, the whole product or context in which it fits.

I've definitely met some AI-generated bugs in production that took me some amount of cognitive load.

Did I write this? What? I'm not sure that this bug is of my origin.

And as a solopreneur, I know, like, all of my code; like, basically, I wrote all of it.

It's like, sometimes, like, okay, I still think it's a net positive overall, because, like, it allowed me to run so much more.

For the pull request that you mentioned, one cool thing that's still in beta, but is coming to everyone is a PR bot from GitHub.

And you just tag it in your pull request. You don't even have to fill out what you did.

So it looks at the changes in your code and fills that whole thing out and explains in a really nice, simple way what you actually did.
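As an aside, even before any model gets involved, a bot like that works from the raw facts in the diff: which files changed and what was added or removed. A toy sketch of that extraction step in Python (the diff text and the `summarize_diff` helper are made up for illustration; the real GitHub feature is a hosted product, not this):

```python
def summarize_diff(diff_text):
    """Extract the raw facts a PR-description bot would summarize:
    which files changed and how many lines were added or removed."""
    files, added, removed = [], 0, 0
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            files.append(line[6:])            # new-file header names the file
        elif line.startswith("+") and not line.startswith("+++"):
            added += 1                        # added line
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1                      # removed line
    return f"Changed {len(files)} file(s) ({', '.join(files)}): +{added}/-{removed} lines."

diff = """\
--- a/app.py
+++ b/app.py
-    return None
+    return response
+    # log the response
"""
print(summarize_diff(diff))  # → Changed 1 file(s) (app.py): +2/-1 lines.
```

A language model then turns facts like these into the "really nice, simple" prose Alex describes; the extraction is the mechanical part.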

And I know for a fact, like, for me, even though I work with myself, there's no pull request.

I don't review my own code. Essentially, I just, like, you know, merge.

I found this good just to keep this in history. And I just imagine, you know, looking back, the teams that I ran, how much time just that piece would take.

Not to mention that there's, like, another PR bot that reviews code and maybe finds bugs, or, like, maybe finds that it's not within the same standards that the other folks are writing to.

But just for that specific task, like we talked, it's just incredible.

Because think about all the changes you did. Oh, you changed this file, that file.

You don't even, like, realize it; you're kind of putting your cognitive load on the other folks who do the review.

And it's hard for you to, like, maybe sometimes focus on everything that you did.

And in that sense, it's a really great and helpful tool.

That's so awesome. And one of the cool effects of that is, you know, like, companies today can hire all over the world, which is so darn cool that we can all work together no matter where we're based.

But language really is a barrier.

And for an engineer to have access to that, so no matter where they are sitting in the world, they can communicate in the language of the company really well, that showcases their work in a way that they deserve is just, that's freaking awesome.

Yeah, the reason I started Targum is removing language barriers; like, the overall vision is that AI is going to help us solve this problem.

I speak three languages interchangeably, but not Ukrainian, where I was born.

And language barriers are definitely a thing, a thing in my family, where my kids cannot communicate with one of their grandpas because they don't have a mutual kind of lexicon.

And it's been a thing all my life. I'm like, three times a transplant to different countries.

And AI is solving this really fast. Just to give you guys some context from the stuff that I'm excited about.

When I met Whisper, when I built Targum in the beginning, it was around maybe October of last year.

Whisper understands around 99 languages and can translate them to English.

Since then, Meta, the company behind Facebook, released an upgrade that understands 1,000 languages.

So roughly 10x what Whisper did eight months ago.

And not only understands, it can also like generate sound.

And so this is just one example of the tools that we currently look at for AI to solve some big problems, big problems in society.

Literally, there's the cultural differences, obviously, but also just like people unable to speak or just communicate.

They seem to get solved faster and faster just conceptually.

And it's up to us to then build tools to actually kind of deliver them to people because the thing that Facebook released, it's a Python object.

Somebody needs to put this in production.

Somebody needs to put this on some GPUs. But the ability to do those things now exists.

And I agree with Dani, this is incredible.

In a world that's more global now, more and more people getting access to these tools will, I think, generally just lift many, many people up.

Now, again, things are growing so, so quickly. But prior to us going live, Dani actually asked Alex about, like, hot takes.

So I'm going to steal that question and ask.

It was a really fun conversation we were having. Can we recreate it now? So Alex, what are your hot takes around AI?

I don't know if it's such a hot take. However, Dani, you definitely mentioned, you touched on this.

AI as it currently stands, it splits, right?

We have the kind of the large language models that do some stuff.

We have the just the general transformer architecture that solves many tasks that weren't solved before, like image, vision, obviously language I've talked about.

And we have the image diffusion stuff that generates like nice imagery.

And that's a whole, you know, Cambrian explosion of creativity.

AI is not a panacea. So we've talked about this as well. Like, it definitely doesn't, you know, remove the need to hire an engineer that can talk to customers and implement those processes.

And for companies, for enterprises, AI is definitely not a panacea.

There's the big problem that everybody's pushing it; there's a lot of hype.

It sounds incredible when you talk to ChatGPT. So would it extend, you know, would it generalize to all processes in your company?

No. AI, as it currently stands, large language models have a beautiful first mile, great demos, quick to production effect.

They do incredible things, they generalize incredibly well.

But then folks start noticing, oh, well, there's some problems there.

There's the hallucination problem, or confabulation. And it's only a problem if you don't know how they work inside, but the way they work is just predicting next tokens for a bunch of the stuff.

And so you have to work that much harder to then control the output for factuality.

But that's definitely a big problem for enterprises. Let's say you build a recommender system: you don't want the LLM to recommend products of a competitor, for example. And then there's the data privacy issue.

Some of these bigger models that are the best at everything, like GPT-4 and Claude, you have to send your stuff to somebody else.

And for privacy conscious enterprises, for, you know, doctors and patients, etc.

There's some limiting factors there, just from that perspective.

So AI is not a panacea, but there's definitely more and more that you can solve it with AI.

Don't know if it's a hot take, but that's definitely where we're currently at.

I'm interested to hear Dani's hot takes.

I think that with every new technological advancement, individuals become more and more and more powerful, and they're able to do more and more good if they want to, and they're also able to do more and more harm if they want to.

So with the rise of the Internet, people built tools that connected families that lived all across the world from each other, could suddenly talk, which is beautiful.

And people also built tools that would spam out mass DDoS and phishing attacks, which is just evil.

And AI is a jump in technological possibility, and it is a jump in how much good an individual can do in the world, and it's also a jump in the amount of harm an individual can do.

And there's a lot of conversation now about what should the education system be in a world of AI?

What should we teach our young generations? One thing that I'm not hearing there, it's a lot of focus on tactical skills, but there's not, I think, enough conversation there about teaching morality, and what it means to live a good life, and what it means to be a virtuous person, and why you should do good in the world, and why that will lead you to be happier.

Like, you won't be happier by doing bad and getting rich, you'll be happier by making an impact and taking responsibility and trying really hard at something to make it succeed.

And I think that this is one of the most important things happening right now in society.

I did not grow up religious, so I may be misunderstanding here, but if I understand correctly, the rise of AI is happening at the same time as the decline of organized religion.

And organized religion used to be the moral education of society, you know, thou shall not steal.

And so I think there's a big question of what will replace that and what will teach us all to be good and to use what's now available to us in a way that benefits others and not harm them.

Yeah, I always look at it like, just because you can do something doesn't mean you should do something, that there's so many things like, oh, it'd be great to build this technology, but the amount of technology that was designed for good that is now used for somewhat nefarious purposes is a little bit concerning.

I was in a bookstore the other day and they had a book called How to Be Perfect.

And I bought it. I haven't read it yet, so I can't tell you if it's any good.

But the whole concept behind it is about that morality and ethics, and it goes from simple questions, like, should I punch somebody in the face for no reason, to bigger ethical questions.

Like each chapter talks about like this ethical moral concept. So I bought it.

We'll talk later about whether it was a good book, but those pieces are such an important part, as we build things, to really understand where we're going.

This brings me to a different hot take that I would love your thoughts on.

Deep fakes, as they are now, are very exciting for mass media to talk about: oh my god, it's going to bring so many deep fakes.

I strongly believe deep fakes, especially this cycle, including in politics, are a net positive to society.

I believe it will be like the fake news wave before, where people stopped believing everything that's written, because for ages, people read things they didn't write themselves.

So the written word was kind of an authority, essentially, in their heads.

I think the same thing happened after like the fake news cycle that we all saw that people are like, okay, I'm not sure that what I read is correct.

Even though I read this online. I think the same thing should happen to society with deep fakes; especially now, it will just flip a switch in people's heads and upgrade society, because everything now can be fake.

Our voice can be fake.

Our presence very, very soon is going to be very realistic. And the faster general society gets to that point where we stop believing just because we saw something, I think overall it's going to be good.

It's going to be a little problematic in between.

That's my current hot take. I mean, there's a hundred million years of evolution that got us to this place.

And what we, what our neural nets are trained on is to see the world and to make a map of it and then to try to navigate within that.

I don't know that, I don't think deep fakes will be a net positive.

I think that it breaks what it means to be a human. I think it breaks our natural experience of the world.

And I don't know that we have a good new mental model to replace that.

I think maybe some people will walk away with what you're saying of like, oh, I should be skeptical and try to do my own research.

And now I understand that everything can be faked.

But I think a lot of people will struggle and just, it will feel like the world is just very uncanny.

Yeah, I'll answer quickly and then get to our last question because we have a little less than four minutes left.

I would agree with Dani, and my perspective is, like, I have a 14-year-old son and he'll come to me all the time.

He's like, I saw this video, I saw this thing.

I was like, you cannot believe everything that you see and read. But kids are so impressionable and not all of them are getting like the training and education that this could be fake, that like I can write anything I want and post it on the Internet.

It doesn't make it real and true. And so having those filters, having the ability to identify what's real and what's not is a scary thing for parents out there.

Anyways, we have three minutes left and I want to give you both a chance.

I appreciate you both being here. I've really enjoyed this conversation.

I wish I scheduled this for more than 30 minutes. But I want to give you both a chance to do some what I call shameless self-promotion.

What's one thing that you want the audience to know about your products, you, something else?

You got about a minute each.

Go ahead then. Oh, okay. I'll start. Right. So two things real quick. If you have a video in another language that you want to understand, or you honestly just want to put English subtitles on it real quick, you don't have to deal with downloading anything.

Just drag a video. It works real quick and I'm very happy with the product.

I use it sometimes for myself to just share links with my family, just like in translated mode.

And the additional thing is my current thing is I stay up to date so you don't have to.

That's my offering for companies.

And I try to explain very technical, deep stuff very simply. I really enjoy this, creating content around this.

And if you want to know more about AI and the latest things, look me up.

Great. Dani? We opened up Jam to the world one year and one month ago and we're about to cross 30,000 users.

And it's just been this crazy adventure.

Every single day we are listening to user feedback. What that means is we share clips from recent user interviews with the whole team at our daily standup every single day, and we all discuss them.

And every single day throughout the day as we're receiving feedback over email and other chats, we're sharing in a dedicated Slack channel and the whole team is there reading, discussing how we can make the product better for users.

We really want to solve the problem of archaic, slow miscommunication around bugs.

And so if you have any problems with communicating with engineers about bugs and fixes, I would love for you to go try out Jam, jam.dev, and let me know how we can make it even better for you.

My email is dani@jam.dev and I really just want to hear everyone's feedback.

Great. Like I said, thank you both for taking the time out of your schedules to have this conversation.

It was a ton of fun.

The replay will be available. So if you missed it in the beginning, you can come back later and watch it on demand.

Again, I can't say thank you enough.

So thank you both. Have a great day. And for those joining the rest of the day, there is another talk about what history can teach us about AI at the top of the hour.

So if you're watching this live on January 9th at the top of the hour, 11 a.m.

Pacific time, there will be a talk on the lessons history can teach us about AI.

So join into that talk as well. Thank you all very much. Have a great day.

Thanks, Dawn. Thanks, Alex.
