Originally aired on June 20 @ 12:00 PM - 12:30 PM EDT
In this episode, host João Tomé is joined by Kenton Varda, Principal Engineer at Cloudflare, for a wide-ranging conversation about AI, code, and the evolution of Internet development.
Kenton shares how a real-world project shifted his view from AI skepticism to seeing the promise of AI-assisted coding, while emphasizing the need for strong human review, especially for security. The episode also dives into the architecture of Cloudflare Workers and its first months, Durable Objects, and the vision of the Internet as one programmable computer: “the network is the computer”.
Looking ahead, Kenton predicts a new era of developers powered by AI assistants — building more custom apps than ever — and explains why Cloudflare Workers is built to support that future.
I think the big thing that we're going to see changing in the next few years is there's going to be...
people worry about AI meaning there's fewer developers. I actually think it's going to be that there's way more developers, you know, 10 times as many developers in the next few years as there have been in the past because AI assistance will allow so many more people to become developers.
And what that means is that we will have people who are writing niche apps for niche use cases, often for themselves, using AI as an assistant, maybe in some cases people who don't even know how to code, but if they're building apps, I still count them as developers.
Hello everyone and welcome to This Week in NET.
It's the June 20th, 2025 edition. The summer is almost here, quite hot in Portugal these days, and this show is quite special if you like AI, building stuff with AI, code creating with AI, and if you like to hear someone that is really into the weeds in this topic.
So we're going to talk about all of these topics with Kenton Varda.
Kenton, for those who don't know, is the creator of Workers, our developer platform, so he has a lot to say on this topic.
There's some advice for those who are starting in this area, so there's a lot to unpack here.
I'm your host, João Tomé, in Lisbon, Portugal, and on the Cloudflare blog this week, we blogged about the largest DDoS attack ever recorded.
A new record has been set and DDoS attacks are still a thing. In this case, it was a massive 7.3 terabits per second attack in May 2025, so we published the blog post this week about defending the Internet.
Also, NIST, the US National Institute of Standards and Technology, just published new guidance on Zero Trust, and we broke it down in plain language in a blog post: everything you need to know about NIST SP 1800-35.
Also in our blog, Log Explorer is now generally available, bringing observability and forensics directly into your dashboard, so something you can explore to get better observability into your data.
Also in the blog, an open source update: easily connect any React app to an MCP server with just three lines of code using the new use-mcp library.
For those who don't know, MCP is the protocol that is enabling agents, AI agents, so it's really hyped.
We discussed this protocol many times on this show before, so there's episodes there, and we also speak about it specifically with Kenton Varda in this episode.
And finally, this was already last week: we published a detailed postmortem on the June 12th outage affecting Workers KV, Access, and the dashboard.
And without further ado, here's my conversation with Kenton Varda.
Hello Kenton, welcome to This Week in NET.
How are you? Great, thanks for having me. For those who don't know, where are you based?
I am in Austin, Texas. Sunny, I bet.
Been a lot of rain lately, but sunny right now. Really warm here in Lisbon these days.
Oh well, summer is coming. Before we go into Workers, into all of the amazing stuff you already did at Cloudflare, why not jump on the AI bandwagon?
Can you explain to me what made you go from a non-believer in AI for developers, specifically, to a believer?
Potentially not a full believer, but a believer nonetheless.
Well, yeah, so as of like the beginning of this year, I was still a big AI skeptic.
I thought that these were glorified Markov chain generators or just cut and pasting stuff they'd seen in the past to put together answers to things.
And I think part of that is I didn't really want to believe that this technology was advancing because I sort of saw this future that I didn't like, where I spent all my time reviewing code that's generated by AI rather than writing my own code.
And that just felt like something I didn't want. Reviewing human code is kind of a slog for me.
And the idea that now that's all I do, except it's generated by AI, so it's worse, scared me.
So I didn't want to believe. But then early this year, we had this project we needed to do: as part of our MCP framework, we needed an OAuth implementation, a provider-side implementation of the OAuth protocol.
So I said, okay, well, we need to write this library, but we also have very little time to do it, because MCP is moving fast and we have to be there.
We didn't have anyone to work on it.
I didn't have time to work on it. And I thought this would take a couple of months.
And I said, okay, tell you what, let's try it. Let's just see what happens if I ask an AI to write this code for fun.
I'm sure this will be terrible. So I did it, and I decided to use Claude.
At first I was just prompting in the Claude UI, but then I switched to Claude Code pretty quickly and asked it to write this.
I tried some other models too, but Claude produced the best output at the time. And it actually produced pretty decent code. I gave it my spec, which I had written, and which I intended for a human to look at and build from.
But I just dropped that into Claude, and it output like 800 lines of code that were basically what I asked for.
I was like, huh. And then there were some things that could be improved, things that weren't really covered in detail in the spec.
And so I reviewed every line of code and I said, okay, make this better, make that better.
Things like, you're doing two lookups, two Workers KV lookups here, but if we denormalize this key into that key, then you'd be able to do just one lookup.
But I just explained it like that, like, denormalize this.
And it said, oh, that's a great idea.
And then it did it, you know, basically did what I asked for.
So there were some pretty dumb bugs at times, but mostly it was just working, and it got to an end state of working, ready code much faster than I could have.
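The denormalization idea mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the actual library's code: a `Map` stands in for a Workers KV namespace, and the key names and record shapes are invented. In the normalized layout, a token record points at a separate grant record, so validating a token costs two reads; denormalizing copies the grant data into the token record so one read suffices.

```typescript
// Hypothetical sketch; a Map stands in for a Workers KV namespace.
type Grant = { userId: string; scopes: string[] };

const kv = new Map<string, string>();

// Normalized layout: the token record points at a grant record (two lookups).
function putNormalized(token: string, grantId: string, grant: Grant) {
  kv.set(`grant:${grantId}`, JSON.stringify(grant));
  kv.set(`token:${token}`, JSON.stringify({ grantId }));
}

function lookupNormalized(token: string): Grant | undefined {
  const rec = kv.get(`token:${token}`);           // lookup 1
  if (!rec) return undefined;
  const { grantId } = JSON.parse(rec);
  const grantRec = kv.get(`grant:${grantId}`);    // lookup 2
  return grantRec ? JSON.parse(grantRec) : undefined;
}

// Denormalized layout: the grant data is copied into the token record,
// so validating a token needs only a single read.
function putDenormalized(token: string, grantId: string, grant: Grant) {
  kv.set(`token:${token}`, JSON.stringify({ grantId, grant }));
}

function lookupDenormalized(token: string): Grant | undefined {
  const rec = kv.get(`token:${token}`);           // single lookup
  return rec ? JSON.parse(rec).grant : undefined;
}
```

The trade-off, as with any denormalization, is that updating a grant now means rewriting every token record that embeds it, which is fine when tokens are short-lived.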
Now this was sort of an ideal use case, because it was a greenfield project implementing a well-known spec on a well-known platform in a well-known language.
And like my particular API design was unique, but like all of the, everything else it had to know were things that were in its training set.
And, you know, so it worked very well for that.
I would say a lot of things I've tried to use the AI for since have not worked quite as well, like trying to make changes to the Workers runtime, which is a big, complicated C++ code base.
It does better than, you know, 2020 me would have ever expected, but it doesn't usually save me time there.
Maybe because, you know, I know all the code already, so I know exactly what to edit myself.
And so with the AI trying to do it, I end up spending more time telling it what it's doing wrong than it would take me to just write the code.
But for this particular project, it worked great.
And it proved to me that I was wrong, because this code, it wasn't just copy-pasting things it's seen on the Internet.
Like, yes, there are other OAuth implementations out there, but I had a unique API design and some unique design details that I was prompting it to do that it couldn't have just gotten from GitHub.
It really understood. Like, this word "understand" could mean different things to different people.
But in my mind, it understood the code that it was writing and what I was asking it to do, and it produced output.
And so we published the code. It changed my perspective.
We published the code to GitHub; you know, this is an open source library intended for anyone to use on Workers.
I had a note in the readme saying, yeah, this was produced by Claude Code, and, you know, a little bit about that story, and the commit history contains all of my prompts just in case anyone cares to look at them.
Like, I don't know why you would really care to look at them, because I didn't know what I was doing.
I just talked to it like it was a human.
I didn't use any prompting techniques. But then about two months after we published it, someone found this note and posted the GitHub repo to Hacker News with a title like, Cloudflare uses AI to write OAuth library, publishes all the prompts.
And all of a sudden, everyone was very interested in this. Like, no one really cared about that until that point.
And then all of a sudden, every day for a week, someone was writing their own blog posts with their own opinion.
With some misconceptions there as well, right? Well, yeah. So first of all, a lot of people seem to think that I did this in order to prove something to other people, or to advocate AI usage.
And I didn't. I used it to test out AI for myself, and I found it very enlightening.
And you know, I would encourage people to try out the tools.
Like, I think it's irresponsible at this point for any developer to not have tried the tools and to not be aware of what they can do.
But that doesn't mean you should use them for everything. Like, most of the code I write, I'm not using AI most of the time, because I know that a lot of the stuff I'm doing, it wouldn't do a good job of.
I do like to use it, mostly for generating tests, because writing a good, long test suite can be really boring, with lots of boilerplate.
And it's really good at that. But generally for actually implementing changes, I'm still not using it for most things.
So I'm not a maximalist. I'm just telling people to be informed before you make a decision.
And the code you did, and you mentioned this, was all reviewed.
You reviewed the whole thing, right? How important is that review process?
Yeah. Yes. Well, I reviewed every line of code.
I checked it for security bugs. There was actually a bug that I missed, which was really annoying, because it was on my list of things to check for.
And I don't know what happened, but, well, anyway, I missed something, and that's my fault.
Like if a human had written that code, it would have still been my responsibility to review it and, and catch that bug.
And I've seen humans do worse. So a lot of people are, of course, pointing out that part and saying, ah, I told you so.
So, well, okay, fine. Yeah.
It's not perfect. Take from that what you will. If you decide from that, that you don't want to use AI to write code, great.
That's a fine decision. I'm just providing information.
Yeah. For production code today, especially systems code, you can't just have the AI writing the code and deploy it unreviewed, because it'll have security bugs.
There have been many examples of people vibe coding production software and deploying it.
And then it turns out that their API keys are just embedded in the code, which is visible to clients.
And that's, that's bad.
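As a concrete sketch of the failure mode described here (all names and endpoints below are hypothetical): if a key is hard-coded into code that ships to the browser, anyone can read it from the bundle or dev tools. The usual fix is to keep the key server-side, for example as a Worker secret in the environment, and expose only a proxy endpoint to clients.

```typescript
// Hedged sketch; Env stands in for a server-side environment (e.g. a
// Worker's secrets), and the upstream API is invented for illustration.
type Env = { API_KEY: string };
type Upstream = (url: string, authHeader: string) => Promise<string>;

// The secret is read from the environment here, on the server, and never
// appears in anything sent to the client. Contrast with the anti-pattern
// of writing `Authorization: "Bearer sk-..."` directly in client-side code.
async function proxyData(env: Env, upstream: Upstream): Promise<string> {
  return upstream("https://api.example.com/data", `Bearer ${env.API_KEY}`);
}
```

The `upstream` parameter is just dependency injection so the sketch stays self-contained; in a real Worker it would be `fetch`.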
It's a big problem. And I think, you know, it may be that at some point the models improve and AI doesn't make these mistakes anymore.
I don't know how long it will take to get there.
Like probably at some point that will be the case. Will it be in a month?
Will it be in a year? Will it be in 10 years? I have no idea. But I think until we get there, we really need some sort of platform that can enforce guardrails, in such a way that you can say: whatever code you write and deploy to this environment, it can't do anything bad.
And I think that's the sort of thing where most people say, oh, that sounds impossible.
But I have some experience building such a platform before; it's called Sandstorm.
It was my startup before Cloudflare.
And I guess it was a little bit early then, but there are things you can do, especially for applications that aren't public facing: internal-facing apps, or, you know, personal apps that need to have access to your personal data that you wouldn't want to be published on the Internet.
But you can build a platform where it's impossible for those apps to leak anything.
And I think Workers would actually be a pretty great basis for such a thing, because of our sandboxing model.
So that's something I'm pretty excited about right now. So in a way, just to sum up what you said there: the way Workers, our developer platform, is built, and you built it, has that safeguard, a potential safeguard that could be really helpful in this AI age. With these models in place, it could be a real game changer for those who are trusting AI-generated code.
Yeah. And there's a lot of reasons for this, but just for a very basic one, like almost every server platform in existence, the server is assumed to have full access to the Internet.
Like it can make requests out to the Internet if it chooses to do so.
But that means that insecure code could leak things.
And you just sort of assume that developers and servers are not going to write code that just, you know, sends the database out to the Internet.
In workers though, everything runs in a sandbox to start with.
And then you have this global function called fetch, which makes HTTP requests to the Internet, but we can block that.
With Workers for Platforms, for instance, it's possible to have a worker that cannot talk to the Internet.
Now it's fully sandboxed, and you can sort of control what it has access to.
And then the second property that's important here is: well, then how do you give it access to specific things?
Normally, most platforms, like you start out with access to the Internet or to the network, and then you get API keys used to authenticate to specific resources, but you still have to contact those resources using the network that you by default had access to.
In Workers, we have this concept of bindings, where it's like you have environment variables that are not just strings, but are live objects that actually talk to a particular resource.
And you never provide an API key.
You just, by configuring that binding, now your worker has access.
And so you're not using network access when you talk to a Workers KV namespace; you are using this binding.
And so if network access were disabled, you can still have access to your bindings.
And so then your bindings become the set of things you have access to.
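A minimal sketch of what this looks like in a Worker. The binding name `MY_KV` and the stored key are hypothetical, and the interface is simplified: the point is that the handler receives `env.MY_KV` as a live object configured outside the code, so there is no API key or connection URL in the source to leak, and the binding keeps working even if outbound network access is disabled.

```typescript
// Hedged sketch of a Worker using a KV binding; names are invented.
interface Env {
  // Configured as a binding in the Worker's settings,
  // not as a connection string or API key in code.
  MY_KV: { get(key: string): Promise<string | null> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Access goes through the binding object, not through network + credentials.
    const greeting = (await env.MY_KV.get("greeting")) ?? "hello";
    return new Response(greeting);
  },
};

export default worker;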
It's an example of a design pattern called capability-based security, which shows up in a lot of places.
Not a lot of people know about capability-based security, but if you look deep into most secure platforms, you see it show up.
It's in Android, it's in Chrome, it's in, in some ways, file descriptors in Unix.
And we're really designing around that principle.
We always have. Makes sense. I'm curious about the challenges of the AI tools you use.
You already spoke a bit about that, but for junior or just-starting developers, what's the level of trust they should have in these models?
How should they protect themselves specifically?
Well, so they are wrong a lot of the time, but they can be useful for learning.
Basically, you have to check it. Like, one of the most useful places for me to use AI is actually when I'm jumping into someone else's code base, where I don't know my way around.
I will ask the AI questions about it, or ask it to write some code.
And if I ask it to write code, it'll probably be wrong the first time, but it helps me figure out my way around the code base.
But yeah, do not, do not trust what it says just because it gave an answer that sounds confident.
Check the answer, basically.
Confirm. In the end, the human in the loop is always the responsible party, right?
It's always: hey, you shipped this, so you should own it. Yeah.
I mean, our rules at Cloudflare are, if you're using AI to generate code, you are still responsible for it as if you had written it yourself.
Every engineer has to fully understand the code before they put up a pull request.
Yeah. I think Matthew actually explained this a few times.
Like, there's typically no code we ship that is not reviewed by humans.
Yeah. Yeah. I mean, it's a tool to help you author, but you have to understand what you're submitting.
You did a tweet the other day, actually quite interesting, saying maybe security experts who hate AI will go review LLM-written code for free. It was a joke, but it's actually interesting.
It's not farfetched. I was joking because, after my project got so much attention, there were some security experts who were skeptical of AI who went and looked for bugs and reported some, but it turned out they were all minor bugs.
Like, I haven't seen any actual security vulnerability reported, despite apparently a lot of people looking.
So I'm happy to get all the review.
Makes sense. Specifically, also on the AI side, for those who are trying it out, what would be your advice in terms of: okay, you can use AI, start here?
What would be the starting point that you would say right now seems good enough?
We have MCP servers now from Anthropic, but also now available on OpenAI and others.
What would be the start? Well, so if you're a software engineer and you want to learn about these tools, then I would say download one of the agentic editors.
This word "agentic" is important. There had been lots of AI coding tools in the past that either did autocomplete or gave you a chat window where you could chat with it, but it couldn't really go and crawl around your code base on its own.
The new breed of agentic editors can actually, you just run them in your code base, and you ask them questions or ask them to write code, and they will dig around, you know, search through the code base to learn about it.
And then they will propose changes wherever they're needed, without any hints.
So Claude Code is an example of that. OpenAI Codex is one.
There's also some IDEs like Windsurf and Cursor now have agentic modes.
I think GitHub Copilot has an agentic mode now. Choose one of these and just like play around with it.
There's no learning curve. You just open it up and you talk to it like you talk to a human and you know, ask it to do something and it goes and does it.
And so if you only have 15 minutes, that's fine. Just use that, install the thing, see what happens.
Ask it to fix a bug that you know needs fixing.
It might not work, but you'll learn something about what works and what doesn't.
And that's better than not knowing. Makes sense. I'm also curious in terms of what you've seen out there that is being done using AI, using Workers, in this area.
What surprised you the most? Did you see like projects that really surprised you in terms of what was achieved?
Honestly, I have not looked very much at what other people are building with AI other than, of course, seeing people on social media who are like, oh, look, I've coded this game.
Isn't this cool?
Which is part of what got me to say, hmm. Well, the fact that they apparently had complete working code just from prompting an AI, without writing any code themselves, when that first started happening, I was very surprised, because I didn't think they were anywhere near being able to do that.
So that's what kind of got me to say, okay, well, maybe I need to learn more about this.
But yeah, I don't know. There's a lot out there. Some mix AI with new ideas.
Some of it is also in the news a bit because it was AI-based; if it was not AI-based, maybe it wouldn't be hyped or in the news.
There's also that part as well, right?
The hype. Yeah. I mean, and again, like I'm not selling an AI code editor.
I don't actually care if you use AI to write code. I am just suggesting people do need to understand what's there because it's changing quickly.
Makes sense. Why not go a bit into the history? You already spoke about Sandstorm, your startup that was acquired by Cloudflare.
It was around 10 years, almost 10 years, right?
Almost 10 years. 2017, so a bit over eight years. Can you explain to us, for those who know Workers now, or even don't know Workers, our developer platform, and want to understand a bit of how it came to be, a bit of that process in 2017?
Yeah. So when we talked to Cloudflare early on about joining, John Graham-Cumming, the CTO, said to me, like, we know that we want our customers to be able to deploy code to our edge, but we don't really have any idea how that should work.
Do you have any ideas? I had a lot of ideas and that's what got me excited about coming to Cloudflare.
So Sandstorm, for instance, had been a container platform that was built for particularly lightweight containers.
So initially I was thinking of lightweight containers are the way to do this.
But after joining and talking to people and researching the problem a bit more, I actually decided that even lightweight containers are way too heavy for what we needed because we wanted every application to be able to run in every one of our, at the time it was like a hundred locations, now there's many hundreds.
But that means that we, you know, some of these locations don't have tons of servers in them and we might have to run a thousand applications on each machine in order for this to work.
So each one has to have an extremely small memory footprint and a very fast startup time so we can load them on demand rather than keeping them running all the time.
Because most platforms, including serverless platforms, are based around the idea that your application server runs in like one place or in a few places around the world and then all the traffic from around the world gets funneled to those few places so that you can have one server that's handling lots of traffic and then that makes economies of scale kick in and it makes it efficient.
But we really wanted to be able to start an application to run to handle just one request and have that be efficient so we could start it anywhere, start it as close as possible to the user.
So what we settled on was using JavaScript runtimes in a single process.
So using the V8 runtime, which is the JavaScript engine from Google Chrome, which is all open source, taking that and running lots of instances of it in a single process where each instance is called an isolate.
And an isolate can start up in a few milliseconds and, depending on the size of the application, can only take a couple of megabytes of memory.
And so you can easily have thousands of those running at once. So I built a platform around that.
We had a beta out in about six months and then went GA coincidentally exactly a year after I joined.
And it's been growing ever since. True.
Actually, to show you a bit of history, here's the blog post of the beta. It was during birthday week, September 27, 2017 actually.
And it was right from the get-go really clear of what was here to achieve, in a sense.
Edge computing for everyone and also the update that you mentioned in March 2018.
That is when it was fully available to everyone.
Also with a playground already. But it was your first blog post at Cloudflare.
It was an important blog post, given that it was the announcement of workers, really.
How does it feel so many years, eight years now? How does it feel to see the evolution of the platform?
Generally, pretty good. I mean, we've been growing exponentially ever since.
We've evolved: early on, it was seen as a way you can configure Cloudflare, configure your CDN to do bespoke things.
But these days, it's much more of an application platform, especially with the introduction of durable objects and other features.
There's definitely a lot that, if I could go back and start it over from the beginning, I think we could have gotten here faster.
A lot of trial and error. Early on, we were big on basing it around service workers, the service workers API.
That turned out not to actually be a great fit in the long term.
So we eventually abandoned that in 2020 or started moving away from it.
But we still have to support it forever, because we have a strong guarantee that we will never break a worker that is running in production.
We have people who last deployed their worker years ago.
I just saw a tweet from someone the other day, he said, I deployed a bunch of workers seven years ago, and they're still running.
And I haven't touched them since, and they just still work.
Yep, that's important. We will never force you to touch them.
I like that. I like that a lot, especially because I was a journalist in another life.
And sometimes you see like projects going down because they stop paying something or the website's going down or something stops working.
Making things work for some time, even without touching is music to my ears.
Yeah, I mean, I've had the experience of like, you know, I've tried to keep some sandstorm services running all this time.
And it gets harder and harder over time if you don't have a team upkeeping stuff to keep a server running on a traditional platform.
But if it had all been built on workers, if workers had existed, and it had all been built on workers, like I wouldn't have to do anything.
So I felt that pain.
Like, a lot of platform developers do not understand this concept: sometimes you have things that are deployed that don't have a full-time engineering team working on them.
Sometimes there's one person who's gone and deployed a bunch of little things.
Or sometimes it's like in maintenance mode, and everyone's working on something else.
And like making people constantly update is not okay.
If there was an AI-positive person here, that person would say maybe an agent can start doing those things in the future.
Not sure if it's just a maybe.
Yeah, well, we'll see. I don't know if I would trust an AI to, you know, apply my security patches on its own.
Well, applying security patches on its own is easy.
But then when you have to, like, oh, I have to update this dependency.
And the API changed. And now I have to rewrite a bunch of code around the new API.
Yeah, AI could help there. But I think you still need a human overseeing it. Makes sense.
You were mentioning John Graham-Cumming, who was our CTO.
And now he's on the board of directors. I was speaking with him the other day about Cloudflare, his path, 13 years at Cloudflare.
And one of the things that surprised me was how the strategy, from the get-go, was, in his mind, correct.
The strategy was already there.
Not everything was built still in the beginning, it had to be built, it had to be to grow.
But the strategy from the get-go was correct and continues to bear fruit, to be what enables the new stuff that appears, new products.
What do you think specifically about the Workers path, in terms of strategy, in terms of growth? Now over 2 million developers use Workers, if I'm not mistaken, if not more?
Yeah, I would say our vision for the platform has been incredibly consistent from the beginning, the details change all the time.
And that's kind of what makes things work at Cloudflare is we're not set on a particular plan, we are set on a particular vision.
But we can quickly change plans when we see something different arise that calls for it.
You know, when the MCP protocol was announced last year by Anthropic, which is a protocol for basically giving an AI tools that it can use, or creating a server that gives tools to an AI, so you can connect AI to things and have them help you.
They announced that protocol. And we very quickly saw like this is something that workers would be great for hosting these servers.
But we needed a framework to help people do that.
And so it was sort of like, okay, deprioritize the things we're working on, prioritize building that framework.
And then we had the framework released the day before the spec was actually updated to support remote MCP at all.
So on day one of remote MCP, we're like, you can build it on us. And that's been that's been super successful.
But that's kind of how we do things. Everything's like: the vision is set, plans aren't set.
Exactly. And you mentioned that new change.
And because the vision was correct, and the platform was already there, we could change and adapt and just enable those areas for developers.
So it uses the same vision, and what was done before, in a new way, a new path, specifically, which is interesting.
You even mentioned the sandboxing, specifically in what Workers has become: in terms of Durable Objects, the Workers runtime, there's Containers, all part of Workers.
Exactly. There's so much there. Specifically in terms of those concepts, those products, what was surprising for you in terms of how useful they became?
And which of those do you think could also be helpful in the AI future? What was surprising?
It's interesting that I always continue to hear developers saying, oh, Durable Objects, now I understand how this works, and it's, like, amazing.
At the risk of sounding arrogant or something, Durable Objects have worked out exactly how I expected them to work out.
Well, if anything, I guess the thing that has been a challenge is communicating what they do.
But let's go there.
For those who don't know, what's a durable object? And what does it do? So a durable object is a special kind of worker that runs JavaScript code.
But where most instances of your worker don't have a name, they just receive HTTP requests that are randomly routed to them,
a durable object has a name. And that means that other workers elsewhere in the world can route requests to that specific instance of that object.
And so if you need to coordinate something, like you're doing a real-time collaborative document editor, and you need everyone's keystrokes to be broadcast to everyone else, you need to coordinate through one point: you send everyone's updates to that one object, and it broadcasts them back out to all the others. It supports WebSockets really well for these kinds of use cases.
But the point is, it's a distributed systems primitive that allows for coordination.
Now, it also has storage attached.
Each durable object has its own private SQLite database. Sometimes people get confused, they think that the point of durable objects is storage.
Some people don't even think that a durable object has anything to do with running code.
They think it's just some sort of database thing. And probably the name was poorly chosen and adds to that confusion.
The storage is somewhat beside the point.
I mean, it is storage on the edge; it is, like, the best way to get storage on the edge, in my opinion.
But you could also use a durable object in front of some external storage and just use it as like a caching layer or a coordination layer.
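The named-routing idea described above can be sketched roughly from the calling worker's side. The binding name `DOC`, the query parameter, and the simplified interfaces are all hypothetical; the key point is that `idFromName` deterministically maps a name to one specific object instance, so every collaborator on the same document reaches the same coordinator.

```typescript
// Hedged sketch of routing to a named Durable Object; names and the
// simplified interfaces are invented for illustration.
interface DurableObjectStub {
  fetch(request: Request): Promise<Response>;
}
interface DurableObjectNamespace {
  idFromName(name: string): string;
  get(id: string): DurableObjectStub;
}
interface Env {
  DOC: DurableObjectNamespace; // binding to a Durable Object class
}

const docRouter = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Every request carrying the same document name reaches the same
    // instance, which is what makes it usable as a coordination point.
    const docName = new URL(request.url).searchParams.get("doc") ?? "default";
    const id = env.DOC.idFromName(docName);
    return env.DOC.get(id).fetch(request);
  },
};

export default docRouter;
```

In the real API, `idFromName` returns an id object rather than a string; the string here just keeps the sketch self-contained.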
Yeah, durable objects were funny. So I originally called them object workers.
And it's an idea that I had pretty early on, like, probably in 2018, shortly after launching workers.
And it took a while, actually, even internally for people to really get what I was talking about.
Like it was based on a similar idea in Sandstorm.
So it made sense to me. But it was tough explaining why this is so important.
And what ended up happening is, we kept having meetings about other things we were building, where there was some challenge, like something that was hard to build on the edge.
And every time, it would end up being the case that I could say, well, you know, if we had object workers, this would be easy.
To the point where it became this meme: in every meeting, I'd say, you know what would make this easy?
And everyone would say it along with me. Okay, let's build that.
So then we did build it. And now lots of our own products are built on Durable Objects: R2, Queues, Workflows, all kinds of things; even cache purge now uses Durable Objects.
It's everywhere. One of the things that surprised me, and I'm not a developer, is to see the feedback that we get, sometimes internally, as you were saying, with people just using it to enable what they're doing, but also externally.
It's not always like the immediate thing about workers.
But when people understand, developers understand what they can achieve there, what problem they can solve with that, they're really excited about that.
And it's one of those things that, okay, is not easy to explain. But when you start using it, and you have a use case, you get excited about it.
That's quite interesting to see, even from outside the developer world: people have this aha moment, and then suddenly they're using it for everything.
Workers has been getting more and more areas, more and more possibilities.
What would be your highlights there, comparing the current Workers with the one of the first year, the first two years?
Well, so many things change, right?
Yeah, lots of things. I mean, for me, the biggest thing is Durable Objects; that enables so many use cases.
That was basically the thing that in my mind enabled people to write entire applications on workers rather than just configure your CDN.
It's like now you can coordinate on the edge, you can store data on the edge.
I say on the edge, it's funny, being on the edge is not actually the important thing about workers.
We kind of thought it would be more important to people early on.
But really, the important thing is just the ease of use. The fact that you can treat the whole network as one big computer instead of lots of little computers, and like you're programming one globe-spanning computer, which is sort of a cliche, but I think we've gone further along those lines than anyone else in the past.
Sun used to have the trademark, the network is the computer. And fun fact, when Oracle bought Sun, they let the trademark lapse, and then we registered the trademark a few years ago.
So now that's our trademark, the network is the computer.
Little troll move, but I love it. So, Workers is all about... Actually, here's the 2019 blog post by John Graham-Cumming explaining exactly that, the network is the computer, and how we acquired the trademark.
Workers is all about being able to program the Internet as one big computer instead of many little computers.
And things that have changed, so durable objects is a big one.
Early on, we had these super low limits on how much time you could spend in a Worker: 50 milliseconds per request.
Now it's 30 seconds, but effectively unlimited if you just have Workers talk to each other and use Durable Objects and such.
There's all these features we've added.
Workers KV is another storage mechanism that's good for a lot of simple use cases.
And we had an outage last week related to that. Yeah. Unfortunately, Workers KV was in the process of moving backends and temporarily had a reliance on one backend that had an outage.
But we will continue moving it over to, hopefully, be entirely backed by Cloudflare in the future.
There's a blog post about that with some details.
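To make the "simple use cases" concrete, a common way to use a key-value store like Workers KV is a read-through pattern: serve from KV on a hit, compute and store on a miss. The real Workers KV API is an async binding on `env` (for example `await env.MY_KV.get(key)`, which returns `null` on a miss); this stand-in is synchronous and in-memory for brevity, and all names are hypothetical.

```javascript
// Plain-JavaScript stand-in for a KV binding (real Workers KV is
// an async env binding; this synchronous Map is for illustration).
function makeKV() {
  const store = new Map();
  return {
    get: (key) => store.get(key) ?? null,       // null on a miss, like KV
    put: (key, value) => { store.set(key, value); },
  };
}

// Typical read-through pattern: serve from KV, compute on a miss.
function getGreeting(kv, user) {
  let greeting = kv.get(`greeting:${user}`);
  if (greeting === null) {
    greeting = `Hello, ${user}!`;               // expensive work in a real app
    kv.put(`greeting:${user}`, greeting);
  }
  return greeting;
}

const kv = makeKV();
getGreeting(kv, "ada");                         // miss: computed and stored
const hit = getGreeting(kv, "ada");             // hit: served from the store
```

In a real Worker, the same logic would be `await`-based and the store would be the KV namespace bound in your configuration.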
And also, like, the Workers runtime, right? WebAssembly, there's a bunch there, specifically in terms of...
Yeah. We've added tons of APIs.
We added WebAssembly support. We added Python support using WebAssembly.
I would still say it's primarily, like, just being honest, it's primarily a JavaScript platform, but you can use other things through WebAssembly.
Yeah.
I mean, obviously... One of the things that I've seen a lot of excitement from developers is containers.
Can you explain to us a bit what is that and why are people excited?
Yeah. So, this is an interesting one because people have been saying all along internally, like, we should have a container platform.
Why don't we support containers or VMs?
Or why don't we have an EC2 competitor? And I have actually been saying all along, no, we shouldn't do that, because we're not going to build something that's better if we do that.
We'd just be yet another container platform, like everyone else who's doing this.
We should only do it if we're doing it in a way that's new and interesting and better, and melds well with Workers. It can't be a replacement for Workers, because Workers is the thing that's ten times better than serving websites from traditional servers.
So, we don't want people just saying, okay, I'm going to bring my web server and I want to run 500 replicas of my stateless web server in containers and I don't want to use workers at all.
I'm just going to route the HTTP traffic to that. That's where, like, yeah, we could do that, but it won't be any better than the competition.
What I think is very interesting about our container platform, which we'll be launching, I think, next week, is that it's attached to Durable Objects, and it's really designed for stateful containers.
It's like, if you have a use case where you need to run a container per user to manage their session, or maybe it's a game server or something, or a build session.
You want the user to be able to log into a machine and, like, run some build commands or you're running them on their behalf.
But all these cases where each container is doing a different separate thing from all the others, that fits perfectly with durable objects.
You start a durable object for each of these and it has a container attached and it talks to it.
And I think that's going to enable a whole lot of use cases that are difficult to do on most other container platforms because of the different focus.
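The "container per session, attached to a Durable Object" pattern can be sketched the same way in plain JavaScript. Everything below is hypothetical naming, not the real platform API: the point is just that each named coordinator lazily starts exactly one container and owns its lifecycle.

```javascript
// Plain-JavaScript sketch of the "container per session" pattern.
// All names here are hypothetical; on the real platform, a container
// is attached to a Durable Object instance that talks to it.
class SessionCoordinator {
  constructor(sessionId) {
    this.sessionId = sessionId;
    this.container = null;                  // stand-in for an attached container
  }
  // Start the container lazily, the first time the session needs it.
  exec(command) {
    if (this.container === null) {
      this.container = { id: `ctr-${this.sessionId}`, history: [] };
    }
    this.container.history.push(command);   // the container keeps session state
    return `${this.container.id}: ran ${command}`;
  }
}

// One coordinator, and therefore one container, per session name.
const sessions = new Map();
function sessionFor(id) {
  if (!sessions.has(id)) sessions.set(id, new SessionCoordinator(id));
  return sessions.get(id);
}

const out = sessionFor("build-7").exec("make all");
sessionFor("build-7").exec("make test");    // same container as above
sessionFor("game-1").exec("spawn-map");     // a separate container
```

This is what makes the design a natural fit for game servers or build sessions: each container is doing its own separate, stateful thing, keyed by a name, rather than being one of N interchangeable replicas.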
So when that launches, I'm sure there will be some people who say, oh, great, now I can run my stateless web server on this and then they're going to be a little bit disappointed when they find out that that's not currently as easy to do as they'd like.
Eventually, that will also be easy, but that's not the focus.
So there's some more features we have to add, like, you know, elastic load balancing or whatever, before you can do that.
And we'll get there.
But what I'm really excited about is enabling the use cases that people haven't been able to do on other platforms easily.
I'm curious, on your perspective: we talked about AI helping developers, not necessarily replacing developers, but where do you see all of these things that we talked about, containers, all of these new possibilities, going?
Like, what do you think could enable in the future that we didn't have before?
Do you see a path there?
Well, so I don't know about if containers are part of this necessarily, but I think the big thing that we're going to see changing in the next few years is there's going to be...
People worry about AI, meaning there's fewer developers. I actually think it's going to be that there's way more developers, you know, 10 times as many developers in the next few years as there have been in the past, because AI assistance will allow so many more people to become developers.
And what that means is that we will have people who are writing niche apps for niche use cases, often for themselves, using AI as an assistant, maybe in some cases, people who don't even know how to code.
But if they're building apps, I still count them as developers.
I'm not going to gatekeep that. So I think there will be this transformation in software where, like, you see a lot of software today that is a platform that's designed to solve lots of general use cases.
Think of Excel, Microsoft Excel, the spreadsheet.
People use spreadsheets for all kinds of things, tracking finances, like doing calculations, for to-do lists, you know.
And a spreadsheet can do all these things, but it's not great at doing any of these things.
Like, you can kind of, with enough tedious work, get it to do lots of different things.
And people do that because it's fairly easy to do, even if you don't know how to code.
But what if, in the future, instead of building a spreadsheet, you ask an AI to write a custom app for you for every one of these things?
An AI could probably, in a lot of cases, write a better app than what you'd build in an Excel spreadsheet.
So all of a sudden, the demand for general platforms like Excel goes down, and the demand for very custom, bespoke, one-off apps goes up.
And Workers is probably the best place to run millions of custom apps, because that's what we've been built for all along: massive multi-tenancy on our servers, making the overhead of each individual tenant very small.
Yeah, software's going to be very different in the next few years.
It will be really interesting to track and see.
And it's also interesting, to be honest, seeing that something that was already there turns out to be almost built for something that didn't exist before.
Yeah, it's funny how all of our design decisions, all the vision over the past eight years has, like, come together to be, like, the thing we actually need right now.
I'm not sure if that's luck or foresight.
Probably a lot of luck. But here we are, and it's exciting. You probably didn't expect that when you were building it.
Obviously, I wasn't expecting it to write code for people.
I didn't believe that was possible until a couple of months ago.
So, yeah. Before we go, you started with LAN parties. I was saying the other day when we spoke, San Francisco, but you actually told me it was not in San Francisco that you started, it was before San Francisco.
Can you explain to us first, what are LAN parties?
And then, why do you love them and you do them so successfully?
Yeah. So, ever since I was a teenager, back in the days of Doom, I was in junior high in Minneapolis.
And my friends and I would get together and hook up two computers over a serial cable, two 486s, and play Doom two-player.
And it was just the most amazing thing that we'd ever done.
And we'd stay up all night doing it. And then on my 14th birthday, I managed to cobble together four network cards and installed them into our machines.
We had three 486s and a Pentium machine, and we networked them together.
And that was the first LAN party. So, we're playing four -player Doom.
And the person on the slowest computer, the 486SX25, 25 megahertz, was getting like two frames per second.
Whereas the person on the Pentium was getting this perfectly smooth experience.
And so, they'd beat everyone and it would be everyone against the Pentium player.
But it was just incredibly fun.
And we played it all night. And then we continued to have LAN parties periodically ever since, especially on New Year's Eve, every single year on New Year's Eve, since I was 14.
So, it's like almost 30 years now. Those friends from junior high still come to my New Year's Eve LAN parties every year.
I was in Minneapolis, Minnesota at the time.
I eventually moved out to the Bay Area when I joined Google in 2005.
And in the 2010s, I built a house out there with help from my dad, who's an architect.
And we designed this little house on this little strip of land that I found in Palo Alto, but it was optimized for LAN parties.
So, there are computers built into the walls.
And then I started having LAN parties every couple of weeks and still big New Year's LAN parties where my friends would fly out.
And then when I moved to Austin a couple of years ago, I sold the previous house and I came out here and built a bigger one.
It's always surprising. First, those parties became quite popular with people attending, even making the news in a way.
Did it surprise you how popular they became? Well, so, LAN parties were popular back in the late 90s, early 2000s, where you really had to bring your computer.
Everyone had to bring their computers together and network them locally in order to get a great experience playing games.
Then the Internet got to the point where you can stay home and play over the Internet.
And so, then most people do that.
And most people say, well, why go through all the work to bring my computer somewhere?
But I find that I don't really enjoy playing on the Internet with random people on the Internet because there's some kids who beat me all the time.
I'm not hardcore enough of a gamer to win those games. And I'd much rather be with my friends.
And I do sometimes, you know, play, get everyone on a Discord server and play games on the Internet with my friends, but it's even more fun when they're all there in person.
And it's like the game is not actually the main focus. It's like a catalyst for a social interaction.
It's just, you know, normal people have parties so that they can hang out with their friends.
I and a lot of my friends are a bit too introverted to just want to go to a party and just chat.
But if there's a game, then we go and we play the game, and then we end up chatting.
And I think that works perfectly.
And so, this is the San Francisco, right? San Francisco.
That's the, well, Palo Alto, California. Palo Alto. Yes. And you did one recently that The Verge also covered.
Lots of places covered it, but you should go to landparty.house, which is my own site.
And all these other places are just copying from it, often with errors.
But yeah, it has all of the details, even how you built it, what it became.
It's quite interesting. Even on the technical side, it has a lot to see.
Quite an amazing thing to explore. That's all my junior high friends there, hanging out and building all the computers.
We built every single one of the 20 identical machines.
It's quite interesting to see that it's really a party where you do a lot.
You don't only play, you do a lot. It has a whole ecosystem, which is quite interesting to see.
Well, is there anything we didn't mention about the future of Workers, the future for developers, that you think would make a good parting note?
Sure. There's an infinite number of things we could talk about off the top of my head.
Okay. That's a wrap. It was great, Kenton. Thank you so much.
And last but not least, one thing that you feel that people don't realize about workers, but they should.
You can build your entire application on it.
Use Durable Objects. You don't need any third-party cloud services anymore. The whole ecosystem is there, which is great.
And it will be way easier than managing VMs or whatever.
Actually, one note. If you were a young developer now, with the tools that are around, how excited would you be first?
And also, where would you like to start?
I don't know. It's so hard to tell what anything's going to be like in a year, a couple of years.
Like I said earlier, there's this whole opportunity now to build niche applications that weren't cost-effective before, because there weren't enough users that needed that thing.
But now, if you can crank them out, I think that's going to create a lot of value in the near future.
Makes sense. Thank you. And that's a wrap. It's done.