This Week in Net: Developer Week (AI included) special edition
Welcome to our weekly review of stories from our blog and other sources, covering a range of topics from product announcements, tools and features to disruptions on the Internet. João Tomé is joined by our CTO, John Graham-Cumming.
In this week's program, we go over some of the announcements we’ve made during Cloudflare's Developer Week 2023. We had several AI-related announcements for developers, but also new products and features in our developer platform related to data storage, speed, and how we're building Cloudflare on Cloudflare. In general, we show where we’re headed in the future, powering and powered by AI. Cloudflare is providing essential infrastructure for leading generative artificial intelligence (AI) companies, with products such as R2 Storage. It’s an incredibly exciting time to be a developer.
At the end, we have in our "Around NET" short segment, the participation of Gift Egwuenu (based in Amsterdam), from our Developer Relations team.
Hello and welcome to This Week in Net, everyone. It's the May 19th, 2023 edition, and it's also our episode number 15 and a special edition related to one of our Innovation Weeks, Developer Week 2023.
I'm João Tomé, based in Lisbon, Portugal, and with me I have, as usual, our CTO, John Graham-Cumming.
Hello, John. How are you? Hello.
I'm also in Lisbon. Exactly. Two people in Lisbon, as usual. I have, actually, before we start, a fun fact, given that I know you like numbers, mathematics, and computing, given your experience.
So it's a number 15 fun fact, given that this is episode 15.
And in this case it's that in binary, the decimal number 15 is represented as 1111, so four ones.
And that is also the address of our resolver app, 1.1.1.1, which I found was an interesting fun fact to put out there.
Well, it's a way to remind everybody of 1.1.1.1, so you've done that, so good job.
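The fun fact is easy to verify in a couple of lines of Python:

```python
# The decimal number 15 in binary is 1111: four ones,
# echoing the four ones in the resolver address 1.1.1.1.
print(format(15, "b"))    # "1111", the binary representation of 15
print(".".join("1" * 4))  # "1.1.1.1", the resolver address built from four ones
```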
Exactly, exactly. Last week, we skipped This Week in Net, because it was a bit of a quiet week.
And I think karma came back for us, right?
Because we've had Developer Week this week, and there are blog posts. Well, 31, 32, something like that.
I think it's something like that. And more coming on Monday.
So it's still a little bit ongoing. It's that thing of one more thing, in a sense.
Can you summarize a little bit what our developer week 2023 is all about?
I would say AI, but what would you say? I think also developer experience, probably.
So AI and DX, if we're going to use abbreviations for everything.
There's a lot of announcements around AI, about protecting your business if they're using AI, about building AI applications, about Cloudflare and AI, and all that kind of stuff.
We can talk about some of those. But then there's a bunch of other blog posts which are about developer experience or improvements to the workers platform, around D1, the database, around a really cool thing called smart placement, which is going to help run your code in the right place.
Upgrades to Pages, upgrades to Wrangler, and upgrades to the runtime.
I mean, there's just a ton of stuff for developers.
And I think it's worth looking at.
There'll be a summary blog post out on Monday, which will list them all, everything we released this week.
And if people want to get a sense of everything, or you can go to the blog right now and start reading.
Exactly. Right now I'm showing, in this case, the blog, but also our page.
We have a specific page for Developer Week, cloudflare.com/developer-week, with all of the updates and announcements.
We started the week before the proper innovation week with a blog post related, in this case, to how we built a ChatGPT plugin with Cloudflare Workers, right?
That was last Friday, so a week ago today. OpenAI released ChatGPT plugins, and we show you how to build one using Cloudflare Workers.
So if you have something you want to plug into ChatGPT, you can do it on Cloudflare Workers.
This is a whole how-to, and we provide an open source ChatGPT plugin, so you can then go off and build whatever you want to plug into ChatGPT, which we have done ourselves.
We have some other blog posts about that. We've got a ChatGPT plugin for our documentation, which will help you write code using ChatGPT, and also a ChatGPT plugin for Cloudflare Radar, where you can look at all the data that we publish and query it through ChatGPT.
So pretty exciting.
And this post actually contains a couple of really cool examples of how to build things.
So this is a, you know, what's-popular-on-GitHub example.
It connects to GitHub to get that information, and we publish the code for that.
That's interesting for developers in general also, right?
Yeah. Although I really liked this one. I've been having a lot of fun with the Pirate Weather API, which is a replacement for the Dark Sky API.
And I asked the author of this blog post if he would write me a ChatGPT plugin.
And it was done in like five minutes. What's really cool is this example where he says, what's the weather like in Seattle, Washington?
Well, the fun part is that the Pirate Weather API needs to know a latitude and longitude, not a place name.
And so ChatGPT manages to figure out that it needs to work out the latitude and longitude of Seattle, Washington, and then plug that into the API.
It's pretty cool. Take a look at the code.
If you want to build a plugin for ChatGPT, the code is there. Exactly. You can use it and make it work.
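To make the latitude/longitude detail concrete, here is a minimal Python sketch of the kind of helper such a plugin backend would use. The URL shape below follows Pirate Weather's Dark Sky-style format, but treat the exact path as an assumption and check the Pirate Weather docs and the open source plugin code for the real thing:

```python
# Hypothetical helper for a weather plugin backend: ChatGPT supplies the
# latitude/longitude it derived from a place name, and we build the
# upstream request URL for the Pirate Weather API.
# NOTE: the URL format is an assumption based on Pirate Weather's
# Dark Sky-compatible design; verify against the official docs.

def forecast_url(api_key: str, lat: float, lon: float) -> str:
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError("latitude/longitude out of range")
    return f"https://api.pirateweather.net/forecast/{api_key}/{lat},{lon}"

# Seattle, WA is roughly 47.61 N, 122.33 W: the coordinates ChatGPT
# would resolve from the place name before calling the plugin.
print(forecast_url("API_KEY", 47.61, -122.33))
```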
And you were giving the example of our docs, which has a fun name, Cursor.
Well, this is slightly different, right?
So Cursor is not a ChatGPT plugin. Cursor is something we built ourselves.
It's embedded in our docs, right? Yeah, this is an experimental AI assistant that we've trained to answer questions about our documentation.
And so this is very much an experimental preview.
But the idea here is that we're building and have built an AI assistant into Cloudflare, which will allow you to look at our documentation and get answers in a chat kind of fashion.
So it should be able to show you how to use our system and point you to the right places.
So please try it out.
That's something that we've built ourselves. Exactly. This is a good thing to try out also.
Again, it's also related to ChatGPT and OpenAI in a sense, but in an external plugins type of way.
Well, it's our own thing.
But there are some other blog posts which are about specific ChatGPT plugins, to query Cloudflare Radar and our docs.
So you can use Cursor, which is our assistant, or you can go to ChatGPT and use a plugin.
And those plugins will give you access to Radar.
So it's an example of finding data through Cloudflare Radar or our documentation.
In fact, for the plugins, if you want the Radar or documentation one, you must have ChatGPT Plus.
Those are the accounts that currently have access to plugins in a more generalized way.
And then you can install the plugin and just start asking questions. There are a few examples here, in terms of HTTP protocol usage.
How about the US in the last six months, for example, or broken down by country.
There's a bunch of things here.
And also, of course, the docs, which in this case is also interesting for those who are trying to build stuff and trying to go over the docs.
There's a few examples also here of what is possible, right?
Yep, absolutely. I mean, there's a lot you can do here with ChatGPT, or with our Cursor assistant, or by writing your own ChatGPT plugin.
Exactly. More things, where should we go next in terms of updates and announcements?
Well, we should stay on the AI theme, don't you think? There's been a lot of AI.
So let's start with Zero Trust security for AI. We know from statistics that about 50% of employees at companies have used an AI tool at work.
And of those, about two thirds have not told their boss.
And so the problem this creates for organizations (we just saw today, I read that Apple has banned the use of ChatGPT and similar tools internally) is the leaking of information, right?
So you're taking potentially proprietary information and plugging it into some external system.
And this has always been a problem for IT departments: people signing up for services that the company hasn't approved, because it's so easy to sign up for a SaaS service and use a credit card to pay for it.
ChatGPT just happens to be an example, and, as someone was saying, maybe the fastest product launch ever, right?
It's still not a year old, in a sense.
It's not even, yeah. Anyway, so we've rolled out a suite of tools that allow you to use our Zero Trust solutions for data loss prevention, for shadow IT.
And so it allows you to control who's using what in your organization, and get a handle on it and use our data loss prevention tools to prevent data being uploaded.
You may want your employees to use ChatGPT, but you might not want them to upload certain types of data.
So this package allows you to do that.
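To give a feel for what data loss prevention means in practice, pattern-based DLP boils down to scanning outbound content for sensitive shapes before it leaves the network. This is only a conceptual sketch, not Cloudflare's DLP engine; real DLP profiles are far more sophisticated:

```python
# Conceptual illustration of pattern-based DLP (NOT Cloudflare's engine):
# flag an upload if it appears to contain something sensitive, such as
# a credit-card-like number that passes the Luhn checksum.
import re

def luhn_ok(digits: str) -> bool:
    # Standard Luhn check: double every second digit, sum, mod 10.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_sensitive(text: str) -> bool:
    # Find 13-16 digit runs (allowing spaces/dashes) and Luhn-check them.
    for match in re.finditer(r"(?:\d[ -]?){13,16}", text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            return True
    return False

print(looks_sensitive("please summarize card 4111 1111 1111 1111"))  # True
print(looks_sensitive("our Q3 roadmap looks great"))                 # False
```

A DLP gateway sits inline on uploads and applies checks like this (plus many more profiles) before traffic reaches an external AI service.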
So this is really about getting a handle on AI in general within an organization.
I think it's a good way also of not banning completely something which is relevant, can be really helpful for those who work at a company, but also take security into account and be prepared for that, in a sense.
Yeah. I mean, I think every company is now dealing with a shadow AI problem, right?
Where there's AI being used, and they need to manage it like they need to manage anything else.
And so we have a set of tools for that.
Exactly. And again, in a sense, it shows different products from Cloudflare, Zero Trust, workers.
So it's all connected in a sense, part of the ecosystem.
Where should we go next? Well, there's more AI, right? There's more AI.
There's a lot more AI. So let's talk about, keep going down. I think there's a post about R2 and MosaicML.
So that's actually kind of an interesting partnership.
So MosaicML allows you to essentially choose which AI training system you're using.
And what we're doing is we have an integration with Cloudflare R2.
And the reason this is significant, if you want to switch from one training infrastructure to another, that means moving your data around.
And because Cloudflare doesn't charge egress fees for R2, and because the training data and the models and checkpoints tend to be very large, egress can be a real cost for training.
And so this allows you then to pick and choose between the different providers that might be out there without being stung on pricing, on getting access to the data.
And there's a lovely little graph in here where, if you scroll down, where they actually do a test where they're going from one provider to another.
So I think they start out using Oracle, they switch to AWS and switch to Google Cloud, running a training job across three different providers, but zero egress fees with R2.
You can do that dead easy.
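Since R2 exposes an S3-compatible API, pointing a training pipeline at it is mostly a matter of swapping the endpoint URL. A small sketch; the account-scoped endpoint format matches R2's documented S3 API, but the account ID and bucket below are placeholders:

```python
# R2 speaks the S3 API, so most S3 clients (boto3, MosaicML's streaming
# loaders, etc.) can read training data from it by overriding the
# endpoint URL. The account ID here is a placeholder value.

def r2_s3_endpoint(account_id: str) -> str:
    # R2's S3-compatible endpoint is scoped to your Cloudflare account.
    return f"https://{account_id}.r2.cloudflarestorage.com"

endpoint = r2_s3_endpoint("abc123")
print(endpoint)

# With boto3 (not run here), usage would look roughly like:
#   s3 = boto3.client("s3", endpoint_url=endpoint,
#                     aws_access_key_id=..., aws_secret_access_key=...)
#   s3.download_file("training-data", "shard-0001.tar", "/tmp/shard.tar")
# Because R2 charges no egress, pulling shards from whichever GPU
# provider you choose doesn't add a data-transfer bill.
```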
Exactly. And with growing data needs, given what these large language models usually require, as we've discussed before, those egress fees take an even bigger toll in terms of what you spend working on all of this, right?
Well, I mean, the thing is that with any of the training that's going on, you inherently have to get the data out.
Often people use these object stores to put things that may rarely get accessed again, so a lot of that data is just staying at rest.
But fundamentally with training, checkpoints, models, all that data is getting accessed.
And so you really can't get stung on egress when you're trying to move stuff around.
The lock-in those platforms have is ridiculous for this example. It's ridiculous in general, but I think it really holds back AI if people are locked into a particular platform.
So hopefully things like this will enable people to store their models and use them wherever they want.
True, true. And we actually have a press release related to that, which also goes over the fact that, because of our position, we are providing a lot of infrastructure for AI companies.
Yes, yes, it's absolutely the case. There's a very large number of AI companies who are using Cloudflare's infrastructure for storage, for compute, and also frankly to protect them.
And there's a blog post actually about protecting generative AI applications.
So we had a blog about going off and protecting your business when your users are using AI, but there's another blog post from the last couple of days, which is about how do you protect a generative AI application if you're building one?
There it is, how to secure generative AI is on the right there.
And the reason generative AI applications are a little bit tricky to secure is that unlike something like an API, which has a defined schema, I mean, fundamentally people are typing in or pasting in absolutely anything, right?
And that makes it very challenging to control what's getting input and what might be threats.
You're seeing tremendous amounts of load because of the interest everybody has in these LLMs.
And you're seeing people who are essentially trying to make a business off of someone else's business.
They're using one of these chat systems and reselling it, essentially, without having a business relationship.
So there's a lot of things that generative AI companies need to do to control and protect their applications.
And this blog post gives you a bunch of ideas about how to do that using our tool suite.
These types of models put a lot on companies' plates in terms of what they need to take into account to be secure.
And things are moving so fast, right?
You really need to be prepared for success, because being prepared for success is being prepared for more traffic.
And all of the problems that may occur when you have more traffic, more clients.
So being prepared for that will definitely make or break it in some cases, right?
Very, very important to get that sort of protection in place and we can definitely help with that.
There's a lot of advice here in terms of steps. For those who are really interested in this area, I'd advise going through all of it here.
Absolutely. And there's this comparison a lot of people have made to the first iPhone.
What that sparked, what that brought in terms of innovation and changing how we view and do commerce on the Internet.
And there's a lot of hope that this will do the same thing in terms of changing things.
But again, being prepared will definitely be important.
Yeah, absolutely. And then we should go back and we should really talk about constellation, right?
So we've talked about people protecting their generative AI applications.
We've talked about companies protecting their employees.
We've talked about using Cloudflare's own AI, things like Cursor and the ChatGPT plugins.
Constellation. Now constellation is an enhancement to our developer suite, which allows you to run AI models and machine learning models on our network.
And so this allows somebody to go in and take a model and incorporate it into an application they're building.
And so this is a realization of quite a long dream for Cloudflare, which was to make machine learning and AI available to our customers directly.
And so if you're building something on Cloudflare Workers, you can now run all sorts of applications, the machine learning and AI models on our network as part of your application.
You can download a pre-trained model from Hugging Face or the ONNX Model Zoo or something like that, or you can train something yourself.
And we've got a bunch of examples in here of how you do that running on our network.
So this is a big announcement around AI on Cloudflare, as well as all of the other AI announcements we made this week.
There's a few examples here in terms of even code.
A lot of things to explore in all of these changing times.
Absolutely. And it all interoperates with workers. You can just build in workers and then you can call out to some inference job and there you go.
So this is in private beta now.
If people want access to it, click the button, we'll get you on the list.
But this is a very, very exciting piece of work. And around most of these things there's ongoing conversation, because we are still improving.
There's a lot of discussion.
So we also have, for example, this frequent mention in this week's blog posts that we have our developer Discord, where people can go and share their experiences, which I think is important given that this is a changing area.
So feedback is important, right? Yeah. I mean, we use Discord a lot with our developers.
So if you are a developer and you want to communicate with us directly, the Discord is kind of real time.
There's a community forum as well.
The team is very, very happy to chat with you. You can see it if you go to discord.cloudflare.com directly.
So yeah, there's that. More things. Well, I think we've covered most of the AI stuff this week.
So let's talk about smart placement, because smart placement is one of those really neat ideas.
So some of you may have read a thing I wrote in the last developer week, which was about this idea of a super cloud.
And the idea of the super cloud was that you would have some globe spanning network, like Cloudflare, and you put your data on it, you put your code on it.
And the network is smart enough to figure out where everything should be.
So it would move code around to be near the end user or near data or move data around to be in the right place with the goal of optimizing performance.
And performance for us is latency.
Obviously, internally, we care about CPU utilization and disk space and all that stuff, because we need to optimize our business.
But you shouldn't.
What you should care about is how fast does your application run? And there's a simple measurement of that, which is the latency for an API call or a request or whatever.
And so that's the idea of the super cloud. And the vision was that you would not worry about regions.
You wouldn't even know where your code was running.
We would measure it and figure out where it needed to be. Well, smart placement is our building block in that, which is that we will move code around to get the best latency for the end user by making a decision about where it should be.
So should the code be near the end user? That's one latency optimization. And that's what people call the edge, right?
These are edge networks and edge networks are near the end user.
But there's another whole optimization, which is maybe your code talks to backend systems, maybe SaaS services on the Internet.
And it will be better if your code was near those services, you'd actually get a better experience for the end user.
And so the super cloud comes to life with smart placement, where we are measuring the actual performance of requests as seen by your end users and figuring out where your code ought to be.
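The trade-off John describes can be sketched with back-of-the-envelope arithmetic. Assume one round trip from the user to the Worker, plus several round trips from the Worker to a backend; the numbers below are made up purely for illustration:

```python
# Back-of-the-envelope latency model for code placement (illustrative
# numbers only). A request costs one user<->worker round trip plus
# N worker<->backend round trips, so placement changes the total.

def total_latency_ms(user_to_worker_rtt: float,
                     worker_to_backend_rtt: float,
                     backend_calls: int) -> float:
    return user_to_worker_rtt + backend_calls * worker_to_backend_rtt

# A user in Lisbon, a backend on the US East coast, 5 backend calls.
# Placement A: Worker at the edge, near the user.
edge = total_latency_ms(user_to_worker_rtt=5,
                        worker_to_backend_rtt=90, backend_calls=5)
# Placement B: Worker moved next to the backend.
near_backend = total_latency_ms(user_to_worker_rtt=95,
                                worker_to_backend_rtt=2, backend_calls=5)

print(edge)          # 5 + 5*90 = 455.0 ms
print(near_backend)  # 95 + 5*2 = 105.0 ms
```

With many backend calls per request, placing the code near the backend wins even though the user is farther away, which is exactly the decision smart placement automates from real measurements.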
And so this is a really, really, really smart piece of work.
We've done a lot of optimization work over time on the network side of things: we have a thing called Argo Smart Routing, where we measure the performance of the Internet constantly.
And we go around the Internet and we know exactly where the latency problems are, packet loss and stuff like that.
So we will reroute traffic to be the fastest across the Internet.
And now with smart placement, we will move code to the right place so that the end user gets the best experience.
And the other thing is, of course, Cloudflare has data centers all over the world, 295 cities.
And so we can make decisions in a different way, depending on the city, right?
It might make sense for it to move for users here, but we can have another copy of your code running somewhere else.
And so this is a big building block of the super cloud.
There's this smart placement demo I really enjoy, where you can select your backend.
It's not only helpful, I think, but also fun, to be honest.
You can actually see for yourself what we're discussing, what the blog post is discussing.
So how it works, in a sense. Yeah, exactly. It's like we're here in Lisbon and, where should the code be if the backend for the system is, you know, in Asia, for example, or on the US East coast, and we should probably move the code around.
Exactly. Less travel in a sense. So there's also what's next for smart placement in this case.
And again, our Discord channel is always open for feedback.
Yeah, just like more and more optimization of how we make the optimization decision about where stuff goes.
So working out what the optimal placement is.
A lot of interesting things. Where do you think we should go next?
I'd like to go to the Goodbye, section 2.8 post. It's right there. That is a popular blog post, right?
Yeah. And the reason for this is if you go right back to the beginning of Cloudflare, before even I was there, Cloudflare was operating a CDN.
And the CDN was intended to be used by websites to deliver websites, right?
Some mixture of HTML and CSS and images and stuff like that.
But it wasn't intended to be used as a file locker or, you know, for building the next YouTube or delivering a whole bunch of video files; all of these things tend to end up being very high bandwidth.
So in our self-service subscription agreement, there was the section 2.8, which said, if you deliver a disproportionate amount of stuff that isn't HTML, we may terminate service.
And occasionally we did it. And two problems occurred.
One is we mistakenly terminated somebody fairly recently under this provision when they weren't really doing that.
There's a blog post about that, right?
There is a blog post about that, which I believe is linked from this one. And the point being that our service is much bigger than a CDN with security features.
Now, there is all sorts of things. There's R2, there's D1, there's all sorts of storage functionality, there's pages.
And this section 2.8 shouldn't apply to those other services.
It should really apply to the old style of you're hosting something yourself.
You put Cloudflare in front of it to be the CDN with some security services.
And that's just fine by us. But if you start delivering a disproportionate amount of, you know, streaming video or a huge number of images, if you try to build the next Imgur on us with the objects you're delivering, then you need different terms of service and you need to use one of our paid services for that.
So this blog post is about how we rewrote our terms of service and broke out the terms into different areas.
Because the big problem was that the self-service subscription agreement, which was the overriding thing everybody had to sign up for, included section 2.8, even if you were using workers or stream or images and stuff like that.
So we've actually broken it out so it's really, really clear, you know, what's allowed.
So as long as you're using one of those paid things, deliver video through us if you're using stream, deliver a large number of images if you're using images, deliver big objects if you're using R2.
But if you're using us for just a website, that's great. But there's a point in which if what you built was, let's say, an anonymous file locker on Cloudflare, then that would violate section 2.8 if it's on the CDN, unless you used us as the storage for that.
Exactly, there's a bunch of specific examples, customer A, B, C, of what different customers may need given the different services we have, which I think is also helpful as a guide to understanding the changes.
And the actual terms of service, right, if you go through to Cloudflare terms of service, you can see, okay, I'm using this, this is what applies to me, these are what we call the service specific terms.
So there's a new page, you can look at what you're using and you can figure that out.
The other thing that's buried in here is that way back in the beginning, like 12, 13 years ago, we were worried about somebody benchmarking our service, and a no-benchmarking rule was in there.
That's gone as well. We're happy for people to benchmark our service.
Back in the day, that was something that scared us because we thought people would do so much benchmarking, they would cause us problems running our network.
Well, that's not true anymore. Exactly, things change, new services, new things.
Actually, you had a sentence in the super cloud blog post from last year's Developer Week, which is: the Internet was not prepared for what it became.
But in a sense a company also evolves, new services, and the Internet evolves.
So there are changes being made, and that need to be made, to the terms, right? Exactly.
Where should we go next? We still have a few more minutes. We also have a blog post related to building Cloudflare on Cloudflare.
Yes, that's a really good one.
So building Cloudflare on Cloudflare. This is by someone who manages a team of people who work on one of the core components of Cloudflare, called FL, for Front Line, which is currently based on Nginx and Lua.
And over time, we've been replacing various things in Cloudflare with other things: Oxy, Pingora.
I worked on the WAF, which was a separate thing.
There's all these different things being written in Rust and other languages.
But FL had stayed around for a really long time and it had become a bit of a scary, big thing.
And we want to build on the stuff that our customers use.
So this long blog post is about how we're moving FL, the frontline service that handles all HTTP traffic at Cloudflare, onto Workers.
So we're actually going to build on the same platform that our users build on.
We obviously do that for lots of products today. Tons of our products are built on workers, but this core component had hung around for 12 or 13 years.
Now it's time to move it onto workers.
And so we're going to do that. And actually, it's been a really interesting journey.
And the blog post talks about that and talks about how doing so, we ended up improving workers because there were things we were missing or problems we had with it.
And yet another good reason to eat your own dog food and actually use your own tools.
So we're on this long process of getting rid of a very old component of Cloudflare.
I've been at Cloudflare long enough to remember when this was actually the new thing, because originally we used PHP, not Lua with Nginx.
And now we're actually going to get rid of the Lua and we're going to get rid of Nginx as well for this particular use.
And then I believe that is almost the last use of Nginx at Cloudflare.
And we'll be on our own platform almost complete.
There's one other Nginx instance that needs to disappear and then we'll be finished.
Nginx has been tremendous, but we have outgrown it.
Our complexity has outgrown it. And it's time to move this last component over.
Exactly. And in a sense we're building it on something that was created at the company in 2017, right?
Workers. So it's recent, but it's already so crucial for the company, on top of all the developers that also build on Workers, right?
Exactly. Exactly. Before we go, just today, this Friday, there was also this blog post about D1, right?
Turning it up to 11. Well, this is a really cool blog post about all the work we've done on D1.
So D1 is our serverless SQLite based database that we replicate across our network.
And we launched it, we launched it into alpha last November.
And since then, there's been a tremendous amount of work done to make it faster.
And the reality is that it was not performing as well as we wanted it to.
And so actually the Workers team have gone off and written their own code using SQLite.
And it's actually now built on Durable Objects underneath.
So there's massive change and it's incredibly fast. It gets a bunch of extra features.
And I'd urge everybody to try it out now because D1 is just blazing fast.
And there'll be a companion blog post to this, which talks in detail about how we did the work that makes this happen.
But it's a really fascinating journey to take something and make it scale and make it automatic at Cloudflare scale.
And stay tuned for the companion, read this blog post, try out D1, and you'll see it's really fast.
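Since D1 is SQLite-based, the SQL it runs is ordinary SQLite SQL. Here's a tiny local sketch using Python's built-in sqlite3 module just to show the flavor of the queries; D1 itself is accessed through a Workers binding, whose exact API (prepare, bind, and so on) is documented in the Cloudflare docs rather than reproduced here:

```python
# D1 is built on SQLite, so the SQL you send it is plain SQLite SQL.
# This local sketch uses Python's stdlib sqlite3 module to show the
# shape of the queries; in a Worker you'd go through the D1 binding.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
db.executemany("INSERT INTO posts (title) VALUES (?)",
               [("Smart placement",), ("D1 turned up to 11",)])

rows = db.execute("SELECT title FROM posts ORDER BY id").fetchall()
print(rows)  # [('Smart placement',), ('D1 turned up to 11',)]
```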
So just to mention, there are a lot more announcements and blog posts this week, which you can browse on the main page we discussed at the beginning.
Also Cloudflare TV segments, and even past Developer Weeks, if you want to go over those.
Just a reminder of that. Before we go, John, what do you think in terms of the full week?
A lot of AI announcements, data announcements, how Cloudflare builds on Cloudflare announcements. In a sense, it's an exciting time to be a developer, right?
Well, yeah. And we didn't even talk about all the stuff that's been done on the core platform, right?
Which is the improvements to Wrangler, the fact that we're going to merge Workers and Pages together, and all of the work that's been done around Node compatibility, with the additional stuff.
I mean, there's a ton of stuff. So whatever you're doing on Cloudflare Workers, it's worth checking in with developer week because there's so much happening.
Exactly. And again, exciting time to build stuff, to be in this area, for sure.
Yeah. Yeah, absolutely. Whether it's AI or other applications, whatever you're building, come build it on us and we'll move it to the right place in the world to make it fast and we'll make it scale for you.
Just before we go, would you have liked to have all of these tools back when you started as a developer?
Well, the problem is back in the day, for me, it was like 4K of memory and a 40 by 25 screen.
So it's unimaginable, all this stuff that exists today.
What I think is you can get up and running with something that scales to the Internet really fast.
I think that's really exciting. Thank you so much, John. And see you next week.
Yeah. See you next week. Bye.
That's a wrap. Before we go, it's time for our Around the Net short segment.
This week, we're going to travel to Amsterdam in the Netherlands.
Here's Gift Egwuenu from our Developer Advocates team.
Hi, I'm Gift Egwuenu.
I work as a Developer Advocate on the Workers Developer Community team here at Cloudflare.
I was born and raised in Lagos, Nigeria, and now I live in Amsterdam, the Netherlands.
And I think this area is important because building monolithic applications can quickly become a strain, especially for large enterprise-scale applications.
That's why building micro-frontends on the edge is a great way to solve this problem, especially because it improves performance and scalability for the team utilizing micro-frontends in their applications.
Recently, I've been watching a show called Ted Lasso.
I really love this show because of all the characters in it. It blends well, and it has a lot of humor in it.
And right now, I just love it. You should check it out if you haven't.
One thing that I love to do when I'm working is listening to lo-fi music.
I think it's because it has so much calm to it, and it helps me focus while I'm doing deep work.