Originally aired on February 27 @ 9:00 AM - 9:30 AM EST
In this episode of This Week in NET (the second this week focused on building with AI), host João Tomé is joined by Steve Faulkner, Engineering Director at Cloudflare, to discuss how he rebuilt a Next.js-compatible framework in just one week using AI. The project, called vinext, began as an experiment and evolved into a working proof of concept.
We explore what AI-first development looks like in practice, how coding agents were used to rewrite and test large API surfaces, and what happens when you treat dependencies as something you can regenerate rather than maintain manually.
The results were surprising: faster local builds, smaller bundles, deployment to Workers with a single command, and a total AI token cost of roughly $1,100.
We also discuss:
• Using voice-to-code workflows (SuperWhisper + local models)
• AI reviewing code multiple times
• Whether AI-assisted rebuilds will become common
• What this means for 2026 and beyond
Mentioned blog posts:
Hello everyone and welcome to This Week in NET. It's the second episode of the week, and it's all about building with AI.
So we have this amazing use case of how we rebuilt Next.js with AI in one week.
In this case, with one engineer doing that using OpenCode and other tools.
And for that, we have Steve Faulkner, the engineer, in this case engineering director, who was the one actually building this in only one week.
We talk about how AI was important, and how this is actually an experiment, part proof of concept and part glimpse into what AI-first development looks like.
There are improvements taking place. We also answer a few questions that came in on social media.
So stay tuned for that as well. On the Cloudflare blog this week, also worth mentioning, not only did we have these two blog posts, but Cloudflare One also became the first SASE platform with modern post-quantum encryption across the full platform.
Of course, post-quantum encryption is really important because quantum computers are coming and those will bring new challenges in terms of encryption.
So having a SASE offering with post-quantum encryption in play is quite important.
Also this week, on Friday, we started a set of blog posts about security, starting with Cloudflare Radar, which has newly added tools. One is for monitoring post-quantum adoption, quite important in this day and age, to be post-quantum ready.
So there's more details on that. There's also key transparency logs for messaging and also ASPA routing records to track the Internet's migration towards more secure encryption and routing standards.
What is ASPA, you may ask?
That's a good question. It's Autonomous System Provider Authorization, and it's the industry adopting a new cryptography standard that is designed to validate the entire path of network traffic and prevent routing attacks such as route leaks.
So quite important and now an addition to security insights on Cloudflare Radar.
Also in the blog today, this Friday, we have a very cool blog by James Snell called We Deserve a Better Streams API for JavaScript.
So the Web Streams API has become ubiquitous in JavaScript runtimes, but it was designed for a different era.
So this blog post shows you what a modern streaming API could and should look like.
There's also a very interesting one about the most seen UI on the Internet, redesigning turnstile and challenge pages.
So we serve 7.6 billion challenges daily, and here's how we used research, AAA accessibility standards, and a unified architecture to redesign the Internet's most-seen user interface: the dreaded CAPTCHA that no one wants to deal with.
And last but not least, a blog post called Toxic Combinations When Small Signals Add Up to a Security Incident.
So minor misconfigurations or request anomalies often seem harmless in isolation, but when these small signals converge, they can trigger a security incident, known as a toxic combination.
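To make that concrete, here's a minimal TypeScript sketch of the idea. The signal names, severities, and thresholds are invented for illustration, not Cloudflare's actual detection logic: individually low-severity signals trigger an incident only when a known toxic combination is fully present.

```typescript
// Illustrative only: signal names, severities, and thresholds are invented,
// not Cloudflare's actual detection logic.
type Signal = { name: string; severity: number }; // severity in 0..1

// A "toxic combination": a set of signals that escalates only when all
// of its members are present together.
const toxicCombos: string[][] = [
  ["debug-endpoint-exposed", "default-credentials"],
  ["permissive-cors", "auth-token-in-query"],
];

function isIncident(signals: Signal[], threshold = 0.8): boolean {
  const names = new Set(signals.map((s) => s.name));
  // If any known toxic combination is fully present, treat it as an incident
  // even though each signal is individually minor.
  if (toxicCombos.some((combo) => combo.every((n) => names.has(n)))) return true;
  // Otherwise fall back to aggregate severity.
  const total = signals.reduce((sum, s) => sum + s.severity, 0);
  return total >= threshold;
}

// One minor signal alone: harmless.
console.log(isIncident([{ name: "permissive-cors", severity: 0.2 }]));
// Two minor signals that form a known combination: incident.
console.log(
  isIncident([
    { name: "permissive-cors", severity: 0.2 },
    { name: "auth-token-in-query", severity: 0.3 },
  ])
);
```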
This blog post explains how to spot the signs. Without further ado, here's my conversation with Steve Faulkner.
Hello, Steve.
How are you? I'm doing great. How are you? I'm good. I actually recorded a segment with Matt Carey about Code Mode this week.
He's in Lisbon, so closer to me.
Yeah, cool. Matt's great. Matt's on my team. Exactly. One of the things, for those who don't know, can you give us a quick run through of your experience at Cloudflare so far, when you joined, and what's your role really?
So I've been here for almost two years now, just about to hit two years.
I am the director in charge of Workers.
So that's the whole Workers platform and sort of a bunch of bits and bobs that are kind of Workers-adjacent, like containers and agents and things like that.
So I've lost track at this point of how many teams, but it's about 80 people. I love working here.
It's been a blast. Workers is a very cool product, and that's why I joined.
So I'm excited to be working on it. This week, you had a very viral blog post published on Tuesday, about work that was done over the period of a week by you specifically, definitely with help from AI.
A few weeks ago, we had Celso here also talking about a project that took a week, in that case Markdown for agents.
This also took a week. Can you tell us how this idea of rebuilding Next.js came to be, and why is it relevant?
Yeah, definitely. I could talk about that.
So I think this idea has been like floating around the back of my head for quite some time.
So Next.js is basically the most popular React framework out there, almost synonymous with React at this point. It has its own bundling tool system, Turbopack, kind of its own bespoke toolchain.
And Vercel has invested very heavily in this in the past, you know, three, four years, even going back further.
Sort of in that time, we've seen a different take on that toolchain evolve in Vite, and most other frameworks use Vite.
And so I think at some point, you know, people start asking the question, well, what if, you know, what if Next.js was just on top of Vite?
You know, Vite has all these plugins, kind of a whole ecosystem around it, versus, you know, Turbopack is just sort of used by Next.
And Next is, you know, there's a lot going on there.
It's a complex framework. There's a big API surface area. And so the idea of this just kind of seemed impossible, right?
Like we actually discussed it internally, maybe about a year ago.
We were trying to figure out how to best support Next users on Cloudflare, how to make them happy.
And, you know, this idea came up.
We kind of batted around and, you know, said, well, that's just not going to work, right?
Like we're not going to be able to invest that level of resources in it.
And fast forward to now, AI just got really good, right? That's kind of what happened.
I've been using AI for a ton of things at work lately, as I'm sure, you know, everybody has.
I've been pushing the limits of what it can do. And so I think it was literally like Friday afternoon, Friday evening of about a week and a half ago.
I was sitting there. I said, you know, like, I don't know. Let's just see what happens, right?
I mean, like I'm always interested in these like new ways of using AI, like vibe coding things or like Ralph Wiggum loops.
And I've been trying to like all these different, you know, new technology.
They're like new skills and things like that, right?
And I just sort of started throwing OpenCode at the test suite for Next.js and said, let's just do it on Vite.
Let's make it work. And then about 24 hours later, I was like, oh, wow, this might actually just kind of work.
I mean, there's, you know, there's edge cases and it's a week-old project, but I had App Router working within like a couple days and I was like, okay, this might actually be something that, you know, has legs.
One of the interesting things is it was since a few months ago, we've seen this upgrade in terms of the tools, what the tools, especially for coding, can do.
And can you share with us the setup that you typically use, in terms of what the models are?
We use, for example, OpenCode here, specifically as a UI as well.
And what difference do those really make?
Yep. So OpenCode is the primary way I do this. I've pretty much used all the tools out there.
I'm actually like, I don't have like a lot of preference right now.
It's just, you know, what we use at Cloudflare, because we kind of have our own OpenCode setup.
But I mean, I've used Codex and Claude Code, and those have some great things too.
OpenCode combined with Claude Opus 4.6. Actually, I think 4.6 came out while I was working on this.
I think I started with 4.5 and then sort of moved to 4.6 when that came out.
And that's pretty much the bulk of it.
I mean, I've dropped in Codex 4.2 here and there to just double-check some things, or maybe to have a different take on, you know, a problem.
But largely that's the setup.
I use OpenCode desktop; I especially like that it has a really good worktree setup, where you can just set up worktrees really easily.
And sometimes I've got, I mean, 30, 40 worktrees going. Maybe five, six, seven agents running in parallel, all working on different problems.
So it makes it really nice to sort of like a nice interface for working in that style.
When was the period where you saw, okay, these things are really making a difference, doing real work, making things work properly?
When was that? Because a year ago, I think, it was not as it is right now, right?
I agree, right? Like even a few months ago, I would say like really similarly to a lot of other people in the industry and at Cloudflare, like kind of mid-December into January.
I kind of, you know, came back from the holiday time and I got to play with it a little bit over the break and I was like, wow, these things got really good.
I mean, before that I had used a lot of this stuff, but I just never was like that impressed.
I would, you know, so I was kind of in the mild AI skeptic camp.
And then once I started using it, like I said, like mid-January, I was like, whoa, right?
These tools are really like capable of doing a lot.
And I made a lot of mistakes and did a lot of stuff that it didn't do well, but I started finding out what it was good at and started doubling down on that.
So, and I would actually say that my first probably month was using it for a lot of non-coding things, right?
So, I'm a manager, right?
I actually, it is a little bit odd that this project came from me, right?
It just sort of happened to be my idea and I ran with it. But I use OpenCode a lot for tracking meeting notes and keeping track of, you know, various writing I'm doing, and, you know, projects, and I have essentially a folder full of Markdown files that is all organized to keep my brain intact.
And that was really probably my first few weeks with this: non-coding use cases.
And then once I started doing that, I was like, oh, like I have some extra time.
Let me try and see if I can do some other stuff.
One of the things that is quite clear from this project is how quick it was implemented.
And of course, we have a warning specifically in the blog that this is experimental and under heavy development.
So, it's definitely like a proof of concept where we show what we were able to do with AI specifically.
And the team is actually adding improvements. So, it's like a continuing work, right?
Specifically. Yes, correct. Yeah, we've merged a ton of PRs already, fixed a ton of stuff and all still via OpenCode.
Like, I think a lot of people are focusing on this because of Vite and Next.
And I get why.
I mean, it's a very like interesting story and it's sort of a simmering question that I think has been out there for a long time in the front end space.
But it's frankly not the part that's interesting to me.
The part that's interesting is like, what does AI first development look like?
I mean, how can you, like, what does it mean to live in a world where like your dependencies are easily rewritten or replaced or migrated or any of those kinds of things, right?
Like, what abstractions are going to fall away?
What abstractions are we going to keep?
And I don't know the answer to that. I think not anybody does right now. The line is going to get really blurry.
So, that's the interesting part to me. And then also, we've really gone AI-first for this entire experience.
This is for us to see, not just for me, but as a team, how far we can push AI.
As I said, all the code is pretty much written by AI. I think that's just sort of true, like with very few exceptions.
All the code is reviewed by AI and sometimes, in some cases, multiple times.
This is maybe something that is a little unobvious to people, but if you just tell AI to review code like three times in a row, it actually just does a better job, right?
Like, it picks up more context every time and then sort of like lands on something good.
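A rough sketch of that looped-review idea, in TypeScript: `reviewOnce` here is a stand-in for whatever actually calls a model (e.g. an agent CLI), and the toy reviewer is invented for illustration. The point is that each pass is handed the findings of the earlier passes, so context accumulates.

```typescript
// Illustrative only: `reviewOnce` stands in for whatever actually calls a
// model (e.g. an agent CLI); the toy reviewer below is invented.
type ReviewFn = (code: string, priorFindings: string[]) => string;

// Run the same review N times, feeding each pass the findings of the
// previous passes so the reviewer accumulates context.
function reviewNTimes(code: string, reviewOnce: ReviewFn, passes = 3): string[] {
  const findings: string[] = [];
  for (let i = 0; i < passes; i++) {
    findings.push(reviewOnce(code, [...findings]));
  }
  return findings;
}

// Toy reviewer that pretends later passes surface more issues.
const toyReviewer: ReviewFn = (_code, prior) =>
  `pass ${prior.length + 1}: found ${prior.length + 1} issue(s)`;

console.log(reviewNTimes("function f() {}", toyReviewer));
```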
So, yeah, AI, code review.
Even like when I publish a new package, I use an AI thing to do that.
I don't go click a button, you know, right? So, it's really like this exercise and just like how can we use AI for everything and what does that mean for building and maintaining software?
Makes sense. One of the things is, Next.js is the most popular React framework, as you said, and millions of developers use it.
So, there's good reason to make it better and have this vinext alternative.
Can you run us through the numbers in terms of improvements that we were able to achieve, like four times faster builds, 57% smaller bundles?
Those savings first: who do they serve and why are they relevant, and then how did we do it, really?
So, they're relevant because, you know, like actually let me start at the beginning.
I don't think any of this was quite the goal of the project, right?
I was just literally trying to change tool chains and see what happens.
I didn't know it was going to be faster in any way, right?
And I'm going to caveat these benchmarks, which is like we spent a lot of time trying to make sure these benchmarks were fair and accurate.
But, you know, benchmarks are always going to be tricky to get right.
And so, I'm sure we're going to see these change over time.
I'm sure we're going to see both projects improve.
But I think the big thing that I took away once I started, you know, working on this was that this is really about Vite, right?
Vite is an excellent foundation to build on.
I think when I finally did, you know, probably like four or five days into this project is when I actually started doing benchmarks and said, oh, let's see how this performs.
And I was like, oh, wow. Like I didn't know it was going to be better.
And I think that says a lot about Vite the project, and sort of all the work they've put into making it a good foundation for all these frameworks.
Makes sense. Makes sense. So those were not intended, but they definitely are helpful for those that are using it really specifically, right?
Exactly.
Yep. And so as far as the actual numbers, I mean, the blog post covers it. And like I said, I assume we'll see the numbers change as sort of we tweak things and make improvements on both sides.
The biggest benefit right now is probably around the developer experience, right?
I mean, it's just got, you know, sort of better local dev experience in terms of like build speed and things like that.
And so I think that's like a good starting point where like, why would I use this?
Well, try it locally and, you know, see if you have a better time.
I'm encouraged by some of the production stuff.
But again, that's not like the primary reason I'm doing this.
At the end of the day, Next.js is still React under the hood. This is still React under the hood.
This is still RSCs. I assume that, you know, both projects will learn from each other and we'll get to some point where probably they'll get pretty similar, right?
Because it's not like they're doing anything drastically different.
You know, at the end of the day, they're doing very similar things.
And the other thing is the cost, right? $1,100 in tokens specifically. That is also relevant in these types of deployments, right?
Yeah, 100%. So that number comes from me just going back through using OpenCode to go back over all my OpenCode sessions and then figure out, you know, what we spent.
So it's not perfect, but it's a pretty decent estimate, give or take maybe $100.
I was definitely kind of shocked myself at how low it was.
I was doing this and, you know, we have pretty lax policies here in terms of what you can spend, and we encourage people to, you know, use OpenCode for a lot of things here.
And so I wasn't really worried about the cost.
It wasn't, again, something I was concerned about.
And I kind of expected it to be like $10k or something like that. I was running a lot of sessions and I was expecting that number to be a lot bigger.
And I was kind of expecting to have to send a message to our CTO one day and be like, hey, you know, I spent a lot of money over the weekend.
Sorry about that, but it was on a cool thing.
So when the cost ended up being so low, I thought it was cool. I think the cost speaks more, again, to just the literal dollar cost of doing these kinds of projects, right?
Like, I said this in the blog post: if I can spend $1,000 and, you know, build an entirely new implementation of an API surface, you could spend $10 or $100 and build your own framework, or build something that's unique to you, or migrate to a framework that you think might fit you better, right?
Kind of how I would encourage people to think about this is, you know, don't necessarily think, I have to use vinext, right?
Like take a broader view and say like, well, all this stuff is cheap now.
So like, what's the right decision in terms of like what technology we're using?
And also the perspective of deploying to Workers with a single command. The ease of that is also interesting.
Yeah. So that part, we can talk a little about that and, you know, the plan there.
So I started actually the project as a generic project to just run in Node.
That's actually how I started.
I sort of got down a little bit of a rabbit hole with that.
It was working out, but it was proving to be a little bit difficult to make it work with both that and Workers, you know, and try to keep parity between the two.
Workers has great Node compatibility now, but there's certain things like native modules that don't work yet.
And, you know, things that are on our roadmap.
It was one of those things where I was like, okay, well, let's just take a step back.
And I really want the demo to be cool. So I was like, let's just focus entirely on workers.
And so that's where I took the project, and that's what it does now.
vinext deploy, just deploy straight to Workers. We're going to change that.
We've already got an open issue and I'm soliciting feedback about what that's going to look like.
Because 97% of the work here is actually about making it work with Vite, and not about making it work on Workers.
And so we're going to add some sort of like pluggable layer where like any provider can come in.
We already have Netlify, who has PR'd a proof of concept. Pouya, who's on the Next team, PR'd a Nitro plugin.
So Nitro is another, sort of lower-level framework that is a Vite plugin and basically allows us to deploy to almost any host.
So we're going to work through this. We're going to figure out how to kind of make it work everywhere.
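As a sketch of what that pluggable layer might look like, here's a simple registry of deploy providers. The interface shape and the toy provider implementations are hypothetical; the real design is still being worked out in the open issue Steve mentions.

```typescript
// Hypothetical sketch of a pluggable deploy layer; the interface shape and
// provider implementations are invented, not the project's real API.
interface DeployProvider {
  name: string;
  deploy(outputDir: string): string; // e.g. returns a deployed URL or status
}

const providers = new Map<string, DeployProvider>();

function registerProvider(p: DeployProvider): void {
  providers.set(p.name, p);
}

function deployWith(name: string, outputDir: string): string {
  const p = providers.get(name);
  if (!p) throw new Error(`unknown provider: ${name}`);
  return p.deploy(outputDir);
}

// Toy provider registrations: real providers would wrap their own CLIs/APIs.
registerProvider({ name: "workers", deploy: (dir) => `deployed ${dir} to Cloudflare Workers` });
registerProvider({ name: "netlify", deploy: (dir) => `deployed ${dir} to Netlify` });

console.log(deployWith("workers", "dist"));
```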
And to be clear, it does now. We have several people that have, you know, privately DM'd and said, oh yeah, it's running on my Node server.
Like, no problem.
One of the things that we could see is the first implementations, people trying it out. With that feedback, what was the thing that surprised you the most, in terms of feedback, but also people actually implementing and using this, now that we've seen it?
Well, I mean, obviously there was a lot of response to the post, you know, I think I knew it was going to be controversial when we did it.
So there are no surprises that it's caused a little bit of controversy.
I think people want to read into this as if there's some big, you know, sort of grand strategy here.
And it really is just like a week old experiment that I said, let's see what happens.
But the result ended up being really good. And the feedback we've gotten is pretty good about people actually using it.
So we talked about this in the post. We have like one large customer that's already using it on one of their beta sites.
We've gotten tons of interesting DMs from people that are already trying it on their sites and some other customer things.
I mean, like other stuff I can't talk about.
But I'm pretty encouraged that people are really trying this out and they are using it and having some success.
You know, I will say this again: there are definitely bugs.
There are definitely things that don't match Next right now. And we know that and we're going to go try to fix them.
But right now, if you have, I would say, a relatively uncomplicated Next.js app, it's pretty straightforward.
It does kind of work. And we have examples here.
We've been working with the National Design Studio which, as I mentioned, is trying to modernize every government interface.
So there's already things in terms of implementation.
But it's a very Cloudflare thing: iterating, making it better, improving, sharing with the world so people can even collaborate with us and actually help us make it better.
And we've already seen that too.
We've seen a bunch of PRs from the community, people filing issues. I lost track, but, you know, we've probably merged like 20 or 30 PRs over the last couple of days.
So lots of people finding bugs and stuff we're going to fix.
Do you see, in terms of the future and the ecosystem, this pattern of framework rebuilds and AI-assisted rewrites becoming more common across other ecosystems?
That is a great question, and I think probably yes. I think that we're going to see this for more projects.
We're going to see more things rebuilt using AI or, you know, built differently.
Again, one of the things we've gotten is some people that have come in and said, oh, can you tweak this Next behavior?
We want it to work differently. And so far I've been kind of holding off on anything like that.
I've said, you know, hey, look, the goal here is one-to-one parity.
We don't want to break people's apps when they move over.
We want it to work as it worked before. But, you know, my response to that is, if you want any framework to work differently, you can, again, just spend some tokens and make it yourself.
So I think this will happen to other open-source projects.
And even non-open-source projects too, right?
Anything that has a good, well-specified surface area or test suite.
I think this project coming from me and from Cloudflare is just sort of frankly a coincidence, right?
I mean, maybe because I, you know, had this problem in the back of my mind for the past couple of years and I knew it was something I wanted to try to tackle.
But if I can do this as an engineering manager in my spare time over a week, anybody could have done this.
Right. And there's no magic in the prompts, right?
Like I can't share them all, but they are not fancy.
I do a lot of voice-to-text. They are me just yelling at the computer to make it work better.
Right? That's what it is. And so people are going to keep doing this kind of stuff.
How do you do that voice-to-computer? SuperWhisper. I'll give them a shout-out there.
SuperWhisper is the app I use. I do a bunch of stuff with it, and use their local models.
So Parakeet is the model, a good local voice-to-text model; Parakeet is by Nvidia.
So it plugs into a bunch of things. There's other apps out there, but SuperWhisper is the one that I use.
I'm a fan of it. And to be honest, I also love to talk with it, explaining what I want, like the structure and things like that.
And it's crazy.
The voice-to-something, speaking for many minutes,
where you're explaining what you want in detail. It's better, I think, than writing.
LLMs in general are very good at taking unstructured raw input and then structuring it.
So I can sit there and talk for 10, 15 minutes about a problem and just kind of brain dump my thoughts.
And if you look at what's written down, it's never something you would write by hand, right?
So it's just like almost unintelligible, but the LLM can take that and say, Oh, okay.
I understand what you mean. And like, I'll go do that.
Yeah. It's quite amazing. Apparently the founder of OpenClaw also uses dictation a lot, speaking, which is interesting.
I didn't know that.
That's cool. Like, yeah. Nice. That's true. Apparently he uses his phone for that.
Oh, well. What's next for vinext? Is there something that we can share about what's coming?
Although it was only a week, which is so short, is there something we can share?
So we're definitely going to keep investing in it. I mean, one of the things that I said this in one of the GitHub issues, because we got a lot of questions about this, but I actually can speak with authority here.
I'm not just an IC engineer who's going to have to like beg for time to work on this.
I am in charge, which is kind of a nice position to be in right now where I can say, okay, we're going to carve out a plan to actually like make sure we have people working on this.
It's provided enough value for actual customers that I think we're going to keep working on it.
We're going to keep making sure that we fix bugs.
We're going to keep working with some of the other providers to make sure that, you know, we've got some nice interface where you can deploy to other places.
But yeah, we want to keep going. I already asked a few questions that we got from social media.
One was whether there's a plan to keep this maintained long term.
So apparently that's in the cards. In terms of implementing this in a production environment, do we have any advice for folks in that regard?
I would say that, you know, be cautious.
I mean, I've said it's a week old, right? So we are already, you know, finding vulnerabilities and we've already had some reported and we've already fixed some.
And so like I would say, you know, just like any new project, you should exercise caution.
All software has bugs, especially brand new software that's a week old.
And so I am encouraging people to be cautious about what you're doing and, you know, what environment, especially like, you know, what things your app is doing.
You know, if your app is entirely static, you have a different surface area of vulnerabilities than if you're doing like a lot of server side actions or things like that.
But, you know, security is something we take very seriously and we're going to continue fixing bugs as they get reported.
Another one. How can we take this to the next level by having SSG by default, like Astro?
Yes, I addressed that in the post. This is probably the biggest gap with Next, and I acknowledge this in the post: SSG doesn't work yet.
Part of the reason there is that Vite does not have SSG, static site generation, out of the box.
It doesn't really have an opinion on that. It's not part of what it's designed to do.
And there's a couple paths we have there. Number one, I think we will just implement static site generation at some point.
We've got to think through a little bit how to do that, talking with the Vite team, because there are other frameworks that do this.
So we'll kind of look at best practices and make sure we're like lining up with what everybody else is doing.
But then we also talk about in the post, this other idea that we've come up with that is under an experimental flag called TPR.
So, traffic-aware pre-generation. This is basically an alternative take on how to do server-side,
sorry, static generation: using your live site traffic in order to inform what pages need to get built.
So I'll recap what I said in the post.
Say there's a hundred thousand pages: instead of building all of them, which may take a long time,
Let's build the 500 pages that we know get 95% of the traffic.
Right. And then everything else will be generated on first hit and then served straight from cache in the future.
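The selection step described here can be sketched in TypeScript. The traffic numbers and the 95% coverage target are illustrative: sort pages by observed hits and pre-build only until the coverage target is reached.

```typescript
// Illustrative sketch of traffic-aware pre-generation (TPR): the traffic
// numbers and the 95% coverage target are invented for this example.
function pagesToPrebuild(
  traffic: Record<string, number>, // path -> observed hit count
  coverage = 0.95
): string[] {
  const entries = Object.entries(traffic).sort((a, b) => b[1] - a[1]);
  const total = entries.reduce((sum, [, hits]) => sum + hits, 0);
  const selected: string[] = [];
  let covered = 0;
  for (const [path, hits] of entries) {
    if (total > 0 && covered / total >= coverage) break; // target reached
    selected.push(path);
    covered += hits;
  }
  // Everything NOT selected would be rendered on first hit, then cached.
  return selected;
}

const traffic = { "/": 9000, "/pricing": 500, "/blog/a": 300, "/blog/b": 150, "/blog/c": 50 };
console.log(pagesToPrebuild(traffic)); // a small set of pages covers the bulk of hits
```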
So that's something that was part of the release.
And so we have, like I said, a very early, very experimental version working with our zone analytics API.
But I want to probably flesh that out and make it work with other providers, too.
That doesn't need to be a Cloudflare specific thing.
I think it can be a generic thing that if you deploy to someplace that has those analytics, they can make that available.
Makes sense. The other ones were WASM support.
And will this eventually replace OpenNext? So WASM support.
So that is going to probably be something we'll have to figure out. Worker supports WASM.
So, you know, it's in our interest to do that as well. We've just got to figure out how to make that work with Vite and make it generic, so that it works in all the different runtimes that this thing could be run on.
And then for OpenNext: we're still investing in OpenNext. That has also been a big project.
Honestly, if you're worried about production and sort of battle-tested, hardened software, you should just go use OpenNext, right?
We've invested a lot of resources in that in the past couple of years.
We did a huge release of this last year that really got it up to parity with sort of where it needed to be.
And so I would say if you're like looking to deploy a production site and you're worried about this, go use OpenNext.
It's great. We're going to continue to invest there, too.
Last but not least, whether we're actively trying to use vinext in our consumer-facing products.
That's a great question. Not at the moment.
We just don't actually do a lot of Next.js. So, you know, our stuff is mostly either just React itself,
some Astro sites, and some TanStack sites as well.
But we just don't have a lot of Next in our stack. And so there's not really a thing there to replace it.
Before you go, just one question.
Where should people start if they want to interact with this? And also, what do you think would be the key takeaway of this blog post, of this project?
For starting, I will tell you the best way to get started with this is to use AI, right?
Because if you were to go and run, you know, vinext init manually, well, the JavaScript ecosystem is so complicated.
There's so many packages out there. There's so many various config things and options and runtimes that I know there's bugs.
I know there's cases that don't work.
So if you start with AI, it will navigate those for you.
And then the best part is you can just tell the AI, please file a bug on the vinext repo, and we will go figure it out.
And usually AI will provide a good reproduction of that.
So I would encourage people to use AI to start with this: just open up your project, open up your Next app,
point it at the repo, and say, migrate to this.
And then it will work through the issues. And that helps us.
And I think it's the best getting started experience. Big takeaway, I covered it kind of earlier, but I think the big takeaway here is really around AI.
Like AI is going to change how we build software and how we maintain software.
And I don't claim to know how, but I think this is just another data point in the journey we're kind of all going to go on this year.
Many things to unpack this year with that.
So thank you. This was great. Thank you very much. And thank you for the project.
And it's done. It's a wrap.