Originally aired on February 13 @ 12:00 PM - 12:30 PM EST
In this episode, host João Tomé is joined by Celso Martinho, VP of Engineering at Cloudflare, to discuss two major launches: Markdown for Agents and Moltworker (for OpenClaw) — and what they signal about the future of AI agents on the Internet.
Celso explains how Markdown for Agents was conceived, built, and shipped in just one week, why AI systems prefer markdown over HTML, and how converting a typical blog post from 16,000 HTML tokens to roughly 3,000 markdown tokens can reduce cost, improve speed, and increase accuracy for AI models. We also explore Moltworker, a proof-of-concept showing how a personal AI agent originally designed to run on a Mac Mini can instead run on Cloudflare’s global network using Workers, R2, Browser Rendering, AI Gateway, and Zero Trust.
We discuss observability for AI crawlers, new monetization models for publishers, the rapid growth of agent ecosystems, and why AI is becoming less hype and more infrastructure.
Mentioned blog posts:
Hello everyone and welcome to This Week in NET. It's Friday the 13th, the February 13, 2026 edition, and this week we're going to talk about AI agents, OpenClaw and friends, and also how agents seem to perform better with an old language that is over 20 years old: Markdown.
For that, I have returning to the show Celso Martinho, a VP of Engineering with a few great products in his hands.
As usual, I'm your host João Tomé and this is a conversation recorded in Lisbon, Portugal.
Hello Celso. Hey, how are you?
Thanks for inviting me. You're back to This Week in NET. We had an episode more than a year ago.
It's been a long time. A long time. Two exciting blog posts that we published recently.
One today, the day we're recording, called Introducing Markdown for Agents, and the other one called Introducing Moltworker, a self-hosted personal AI agent minus the Minis, which is related to the Mac minis that many are using to try to make OpenClaw work.
Why not start with a blog post from today, the Markdown for Agents blog.
Fun fact: how many days did it take the team to go from idea to execution on this?
This felt very good because we started discussing adding this capability to our network like last week.
It was one of those magical moments where all the teams got together.
We came up with a plan. We brought out some really talented engineers from a couple of teams and we were able to go from idea to ship the product today in about one week.
It felt really good to do something like this.
In what way did AI also help to make it faster? Well, I'd say AI is helping everywhere these days, but AI alone doesn't work without talented engineers behind it.
And that was the case. For those who don't know, first, why is Markdown important in the case of agents and LLMs?
Why is it different? And it's an old technology, right? A language over 20 years old?
Well, Markdown is not new. It's been around for quite a few years now.
Everybody uses Markdown for a number of use cases.
It's especially important for LLMs and AI because LLMs care about semantic value.
They care about the content itself. They don't care about the package, everything that's around the content.
And if you look at the web today and you're looking at a page, take our blog post, for instance, you have the content, but you also have navigation elements, images all over.
Sometimes you have advertising.
And typically the web is not made of Markdown. It's made of HTML. And the problem with HTML is that it has a lot of packaging.
The letter, I'm using a metaphor here, is just a small percentage of the package that HTML is.
So Markdown tries to solve that problem.
And what it does is it takes away all the irrelevant elements of the HTML page and focuses on the content itself, which is what the LLM and AIs need.
So Markdown has organically become very popular in the AI world. Every AI pipeline, every AI system is converting HTML to Markdown today.
And so what we thought about doing is why don't we make this really easy for our customers?
To convert.
To convert. And instead of putting the burden on our customers to convert their content to Markdown, if they want to make it available to AIs and LLMs, why don't we do it automatically using our network?
And so that's what we did. So today, our customers can go to the dashboard and just click a button.
And if they have their websites with us, they just enable Markdown for agents.
And then what happens is when any AI agent or AI system tries to fetch content from our customer, if they say in the request that they prefer Markdown to HTML, then our network will automatically and in real time efficiently convert HTML to Markdown and serve Markdown to the agent directly.
So this has a number of benefits. It's very easy for our customers to serve AI agents.
It's also cost efficient and stops the token waste, which is really important when we talk about AI costs.
So that's it.
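The mechanism Celso describes is standard HTTP content negotiation via the `Accept` request header. Here is a minimal sketch of the decision logic an edge service might use; it is illustrative only, not Cloudflare's implementation, and the parser is deliberately simplified:

```typescript
// Decide whether a client prefers Markdown over HTML based on its
// Accept header. Simplified HTTP content negotiation: parse each
// "type;q=..." entry and compare the quality values of the two types.
function prefersMarkdown(accept: string | null): boolean {
  if (!accept) return false;
  const entries = accept.split(",").map((part) => {
    const [type, ...params] = part.trim().split(";");
    const qParam = params.find((p) => p.trim().startsWith("q="));
    const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1.0; // default q=1
    return { type: type.trim().toLowerCase(), q };
  });
  const qualityOf = (mediaType: string) =>
    entries.find((e) => e.type === mediaType)?.q ?? 0;
  return qualityOf("text/markdown") > qualityOf("text/html");
}
```

A request sending `Accept: text/markdown, text/html;q=0.9` would get Markdown; one sending only `Accept: text/html` would get HTML.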
The blog post actually includes a number about the post itself: reading this blog post takes over 16,000 tokens in HTML, and much less, roughly 3,000 tokens, when it's converted to Markdown.
So that's important so you don't pay too much to the LLM.
It's very important because there's this thing called context windows in AI.
They are limited. They should not be abused. Big context windows with irrelevant tokens also make the AI slower, less accurate.
So everything you can do to save tokens and take away the things that have zero semantic value out of the context window, the better.
In one way, the blog post explains many things, how it was built as well, and the things that you can use even to check for yourself.
Do you have something to show in terms of the blog that could be relevant for people?
I mean, I can show the diagram of how we built this.
Cloudflare is in a privileged position because we have all the building blocks to do something like this.
We have a global network, which is very flexible in terms of how we can plug logic and how we can manipulate content that goes across our network.
And then also we have workers, our workers platform. So what we did is like a mix of both worlds where the global network is aware of the agents requesting markdown in the request.
And if that's the case, then it forwards the request to a worker that does the actual HTML to markdown conversion and then serves the markdown to the agent.
So it was one of those cases where we had everything that we needed.
We just needed to put all the pieces together and make it work.
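Conceptually, the conversion step keeps semantic structure and drops presentational markup. A toy sketch of the idea follows; real converters, including the one behind this feature, handle nesting, tables, entities, and many more cases:

```typescript
// Toy HTML-to-Markdown conversion: keep semantic structure, drop
// presentational markup. Illustrative only; production converters
// use a real HTML parser rather than regular expressions.
function htmlToMarkdown(html: string): string {
  return html
    // Drop content with no semantic value for an LLM.
    .replace(/<(script|style|nav)[\s\S]*?<\/\1>/gi, "")
    // Map structural tags to Markdown equivalents.
    .replace(/<h1[^>]*>([\s\S]*?)<\/h1>/gi, "# $1\n\n")
    .replace(/<h2[^>]*>([\s\S]*?)<\/h2>/gi, "## $1\n\n")
    .replace(/<strong[^>]*>([\s\S]*?)<\/strong>/gi, "**$1**")
    .replace(/<em[^>]*>([\s\S]*?)<\/em>/gi, "*$1*")
    .replace(/<a[^>]*href="([^"]*)"[^>]*>([\s\S]*?)<\/a>/gi, "[$2]($1)")
    .replace(/<p[^>]*>([\s\S]*?)<\/p>/gi, "$1\n\n")
    // Strip any remaining tags and tidy whitespace.
    .replace(/<[^>]+>/g, "")
    .trim();
}
```

For example, `<h1>Title</h1><p>Hello <strong>world</strong></p>` becomes `# Title`, a blank line, then `Hello **world**` — the navigation, scripts, and styling around it simply disappear.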
So I won't ask you for feedback. Specifically, the blog post also mentions Radar, new charts on Radar that are related to Markdown as well.
Yeah, so we do know that markdown usage is increasing.
We've been looking at what agent frameworks and AI coding tools are doing.
And we've seen that, increasingly, they're trying to fetch Markdown from the web instead of HTML.
So we think the trend has started, but we wanted to track that over time.
And Radar is the ideal place to track Internet trends.
So again, one of those situations where we went to the team, hey, can we start tracking content types across the Internet over time?
And they said, sure. How much time do we have to build this? And we told them one week and it's done.
It was fast. Do you have some demo that you can show in terms of how it works?
Yeah, I can maybe just... From the dashboard, or even...?
Let me get my terminal. And if I go here, what I'm doing is fetching the blog post of the announcement that just came out.
And I'm basically simulating a web request using curl, which is a popular command line tool.
And I'm including a special header, what we call the content negotiation header, the Accept header.
And what I'm saying in Accept is that I prefer Markdown to HTML.
So I'll actually do the two examples. So if I put the normal HTML, I will get HTML in the response.
And you can see a lot of things that don't matter to AI, HTML tags all over.
But if I do the same request and I put that I prefer markdown instead of HTML, what happens is I get a markdown version of the same content.
Cleaner. And it was automatically converted by our network. So that's it. Quite amazing.
We didn't discuss this at the beginning, but before going into Moltworker, why don't you give us a glimpse, for those who don't know and didn't check the previous episode, of your many different projects at Cloudflare.
Over a year ago, those were different. Now you actually have even more in some areas.
What are the projects you oversee here? Right now, again, the kind of question that we need to put a date on.
I think that the teams in Lisbon are doing the usual structural projects like Workflows, Email, Radar.
We've talked about those.
And then a bunch of AI features, which by the way, it's not just us that are building those features out of Lisbon.
A lot of the teams at Cloudflare are doing AI features and products, but we're doing a lot of those as well.
So two examples of that are AI Search, which is a very important feature for our customers, where they can easily build search engines for the AI age.
There's an exciting roadmap of new features coming up.
We have Pay Per Crawl, which I think we also talked about in the past.
It's still ongoing. Lots of new ideas for the upcoming months.
And then other smaller improvements that we're doing. And in the middle of that, things like markdown for agents.
About Pay Per Crawl, actually, that's a popular one, even for publishers, because it's all about blocking crawlers and almost creating an industry there, in terms of crawlers paying content creators.
It's not so much about blocking crawlers. It is important to say that we don't have anything against crawlers, nor we want to incentivize anyone or our customers to block crawlers.
It's more about knowing what's going on, understanding how crawlers and AI systems are using our customers' content, and how can they monetize that fairly for both parties.
So I think what we're trying to do is two things.
First, provide observability so that everything is clear, the metrics are there, there's no discussion about what's going on.
That's very important, because what we've seen in the past is that people don't know what's happening.
And so observability first, and then creating frameworks to try to come up with a new business model for the Internet, because the advertising business model is not sustainable for a lot of content creators.
So we're trying to come up with new ways to fairly monetize content.
Makes sense. About Moltworker specifically, before we get into that, why don't you explain a bit of OpenClaw, what it does, and why it has become such a viral thing, created by a guy in Austria.
Actually, it started as Clawdbot, and then in the week we launched, it changed to Moltclaw, right?
Yeah, and now OpenClaw. No, Moltbot and now OpenClaw, exactly.
So the Moldworker is actually related to that, to the second name.
Yeah, it was funny that day, because I woke up in the morning and a friend of mine sent me a message saying, have you heard about this... what was the first name?
Clawdbot.
Clawdbot. And I told him, actually, no. It was trending on Hacker News, and I didn't know about it.
And like 30 minutes later, I'm not kidding, our CEO, Matthew Prince, told us in the chat, hey, this is becoming popular.
Everyone's talking about it. We could build this on top of Cloudflare.
Why don't we do it? And so I went from not knowing what Clawdbot was to becoming an expert and trying to port Clawdbot to work on top of Cloudflare, in a few hours.
So again, one of those magical moments where we came up with a couple of very talented engineers that were motivated to respond to our CEO.
And we did a bunch of glue.
So we didn't change Clawdbot; it's the same software package as the official open source project.
For those who don't know, what does it do?
It's an open source personal AI system that you control, right? Yeah. There was a lot of discussion that people need a Mac mini, an Apple Mac mini or a VPS to make it work inside a container.
Yeah. So I think it was the first project to explore this idea of having private AI personal agents, especially background agents that you can interact with, using your own resources and your own AI models.
It is your own personal AI instance.
And I think it also became very popular because they came up with hundreds of skills for the assistant that can do pretty much everything, like talk to chat applications.
WhatsApp, Discord. Social networks.
It can control the browser and browse the Internet. And so it had an exponential community effect.
And it became very popular very fast. So what we did was basically we built a bunch of glue around the official open source package and we made it possible to run on top of our network instead of you having to buy dedicated hardware or a Mac mini.
Everyone was talking about buying Mac minis to run Clawdbot.
And so we made a version of that where you didn't need your own personal computer to run it.
You could just run that on top of Cloudflare. And less expensive as well.
Yeah. But it is important to say that it was more of a proof of concept that this was possible and that now we have such a complete stack of APIs and products that make things like this possible.
But at the same time, if you ask me if this is the ideal way of running a background personal assistant on top of a network like ours, I would say it's not.
I think the solution will be to use more of our native APIs or to rely on a framework like Agents SDK, which is something that we're actively working on.
And I think in the future that will provide the necessary tools and APIs to build something like, what's the name now?
OpenClaw.
OpenClaw. But even more efficiently and cost-effectively. And there's a security side as well, because it's a very raw tool in terms of vulnerabilities.
Some were discovered. So having some security in place matters. Yeah. I mean, I'm not going to discuss the team's security decisions.
It is important to say that this runs on a sandbox.
So we're discussing security inside a sandbox. It's contained.
It's contained. It's not, in my opinion, the ideal model. But again, it runs inside a sandbox.
So it offers some protection. If the end user of the AI assistant really knows what they're doing and understands the technical trade-offs, it should be fine.
If that's not the case, it can be a problem. In terms of practical use, do you have any examples from the blog that you can show about Moltworker?
Yeah. I mean, I can show you the blog. It has some examples. So we tell people how we built it, which was the main goal of the blog post.
So the architecture is basically, we use a bunch of products, including Zero Trust for authentication.
Then we have a worker that simulates a bunch of things that the official OpenClaw package needs.
For instance, OpenClaw needs a browser to browse the Internet. And what we built was a proxy between the CDP protocol, the Chrome DevTools Protocol that OpenClaw requires to operate the browser, and our Browser Rendering product.
So when OpenClaw needs to use a browser, what it's actually using is our Browser Rendering APIs.
And so this is just one example of the things that we ported to use Cloudflare.
Another thing is that OpenClaw expects storage, like the hard disk in your Mac mini.
And so we use R2 to emulate that storage device.
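The idea of mapping an agent's disk expectations onto object storage can be sketched as a minimal file-like interface over a key-value store. The `ObjectStore` and `MapStore` names below are illustrative stand-ins (a `Map` plays the role of the bucket), not Cloudflare's R2 API:

```typescript
// A minimal "disk over object storage" sketch: file paths become
// object keys. Illustrative only; the real Moltworker glue targets
// R2, and an in-memory Map stands in for the bucket here.
interface ObjectStore {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | null>;
}

class MapStore implements ObjectStore {
  private data = new Map<string, string>();
  async put(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
  async get(key: string): Promise<string | null> {
    return this.data.get(key) ?? null;
  }
}

// File-system-style operations expressed as object-store operations.
async function writeFile(store: ObjectStore, path: string, contents: string): Promise<void> {
  await store.put(path, contents);
}

async function readFile(store: ObjectStore, path: string): Promise<string> {
  const value = await store.get(path);
  if (value === null) throw new Error(`No such file: ${path}`);
  return value;
}
```

The agent keeps calling what looks like file reads and writes, while each "file" actually lives as an object under its path-shaped key.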
Another example. So, things like that. And for AI, for instance, these agents typically connect to an AI provider like Anthropic's Claude or OpenAI.
And what we did was we added support for AI Gateway, which is another Cloudflare product.
And through AI Gateway, you can say what AI providers you want to use.
And then you can have more sophisticated logic on top of that. And you can have like fallback models.
You can do caching. You have better observability on how the model is being used.
So there's a bunch of benefits from using AI Gateway that now you can take advantage of.
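The fallback behavior can be pictured as trying an ordered list of providers until one succeeds. AI Gateway handles this (along with caching and observability) as managed configuration; this generic sketch with hypothetical provider functions only shows the core idea:

```typescript
// Generic fallback across AI providers: try each in order until one
// succeeds. Illustrative sketch, not the AI Gateway API.
type Provider = (prompt: string) => Promise<string>;

async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt); // First success wins.
    } catch (err) {
      lastError = err; // Remember the failure, try the next provider.
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

In practice the entries in `providers` would wrap calls to different hosted models, so an outage at the primary provider degrades to a secondary one instead of failing the agent outright.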
It was not done for this, but it's actually perfect.
It showcases how powerful and flexible our platform has become. And that was the main goal.
This is not a Cloudflare product. This is not a Cloudflare feature.
It was also important for us to make that clear. But it was an excellent opportunity for us to show that now you can run very complex and demanding applications on top of our stack.
The blog post actually also talks about the sandbox container there, and about some of the applications, Discord, Telegram, in terms of writing things for you.
You can give them like a prompt. Yeah, we put a few demos here.
We connected it to a Slack instance. And so you can go into the chatroom and start talking to it.
One of the more complex things we did was we...
Where is it? It's here. So we told Moltbot, at the time that was the name, to go to our developer documentation website and just browse around, but to record a video of that.
And so what Moltbot did was it started browsing, using Browser Rendering.
The Browser Rendering API is one of our products. But then, at the same time, it downloaded FFmpeg, which is a very popular open source software package, into the sandbox.
It took frames of the developer docs pages, like screenshots, using our APIs.
It put the screenshots into files on the disk, through the R2 APIs, and then it used FFmpeg to convert the screenshots into a video.
And it gave us the video and it posted the video on Slack.
And it took it like 30 seconds to a minute to do this. That's quite impressive.
And you just asked for a video and it got to understand what it needed to do to achieve that.
Yeah, so I want a short video where you browse through Cloudflare documentation, send it to me as an attachment, feel free to download Chrome and FFmpeg.
By the way, when we said download Chrome, that's where we tricked Moltworker into thinking that Chrome is Browser Rendering.
So behind the scenes, that's the glue we did.
It's quite amazing how it actually sorted things out. It's like having an intern that is actually very capable and it does things for you, especially if it's on the web.
That's kind of an insult to interns, but yes, I understand the metaphor.
A very good intern. Anything more? No, that's it. I mean, Moltworker also became hugely popular.
We now have like 10,000 stars on our GitHub repo.
We made it open source, and we're still supporting the project, out of respect for everyone who decided to try and experiment with it.
So we're still fixing bugs and improving things.
But again, I want to say this twice.
This is not a Cloudflare product. It's a proof of concept. And people should keep an eye on Agents SDK because it's going to be amazing to build things like this in the near future.
Makes sense. Regarding the examples you've seen that were more mind-boggling, be it Moltworker or even other agents:
What are the use cases you have seen?
I mean, ever since the year started, I'm constantly being surprised, on a day-to-day basis, at how people are using AI and how sophisticated AI coding and personal assistant tools have become.
It's just unbelievable. And I think I'm not alone.
You probably feel the same. I feel the same, yeah. Everyone's using AI and I don't think AI is taking anyone's jobs.
It's intensifying everyone's jobs in a good way.
And I think that if we stop being skeptical about AI and we just embrace it and know how to use it, it can really become a very powerful tool for our professional lives, yes.
Makes sense. So you don't think it's overhyped? I think we've passed that phase.
I think now we're realizing how useful it can be. It's a long discussion, but I'm more on the side of embracing now than being skeptical.
And that's it. Thank you for inviting me. Thank you for being here. And that's a wrap for this week.
Visit thisweekinad.com to subscribe to the podcast, wherever you listen to podcasts, and explore nearly four years of conversations about how the Internet works and evolves.
For deeper technical dives, check out the Cloudflare blog.
See you next week.