Cloudflare TV

💻 What Launched Today - Thursday, April 4

Presented by Adam Murray, Brendan Irvine-Broque, Jacob Bednarz, Tanushree Sharma, Taylor Smith

Welcome to Cloudflare Developer Week 2024!

Cloudflare Developer Week runs April 1-5: our week-long series of new product announcements and events dedicated to enhancing the developer experience and fueling productivity!

Tune in all week for more news, announcements, and thought-provoking discussions!


Visit the Developer Week Hub for every announcement and CFTV episode — check back all week for more!


Transcript (Beta)

Hey everyone, welcome to day four of Developer Week. We are kicking it off and the fun does not stop here.

We have a lot of great announcements that came out today from across our developer platform that we're going to chat through, that we're really excited to talk to you about.

But before we dive into things, let's do a quick round of intros from folks on this call.

I can kick things off. Hey everyone, my name is Tanushree.

I'm a product manager on the workers team and really excited to talk to you about some of the new features that my team launched today.

I'll kick it off to Taylor to go next.

Hey there, my name is Taylor Smith. I'm the product manager for Stream in Cloudflare Media Platforms.

We've got a bunch of announcements today that I'm excited to share with you.

Adam, you want to go?

Yeah, hey, I'm Adam Murray. I'm a senior product manager on the workers team, specifically focused on testing and automation and authoring.

Thanks.

Hey, I'm Brendan. I lead part of our Workers product team here at Cloudflare. And then Jacob, you want to go next?

Yeah, so I'm Jacob, the odd one out here. I'm one of the engineers on the control plane side, looking after the API and the SDKs.

Awesome.

Yeah, I'm thinking let's have Taylor kick it off with some of the new announcements that came out around our media platform.

I hear there are some good things around Calls, Stream, and Images.

Taylor, tell us a little bit about some of the things that you've been up to.

Sounds good, thanks. We have so many things coming out of media platforms today, I had to write them down.

Let's see. So today, Cloudflare Calls went into open beta.

So we announced this as a closed beta last year, it's open beta now.

Cloudflare Calls is our WebRTC real-time calling product. And as part of our release of the open beta for Calls, we've also open sourced OrangeMeets, which is an internal tool that we've been using for a lot of our company meetings across many departments. Specifically, almost all meetings in Media Platforms for at least the better part of a year have been on OrangeMeets.

So that's also open source. So if you check out the blog post for Cloudflare Calls, you can see information about WebRTC and OrangeMeets.

And then on the Stream and Images side: for Stream, my team has launched live instant clipping for live broadcasts.

We previously had not had the ability to make clips of those. But one of the things that we know is that in long live broadcasts, what really matters are the exciting moments that end users want to be able to share with each other, both within the application that some of our customers have developed and also on other channels.

So live instant clipping allows end viewers, with no authenticated API calls needed, to request a preview manifest of the most recent portion of the broadcast and then make a clip of the specific seconds of that video that they want to share.

And that'll either be produced as an HLS manifest that can be added into the application player that our customers have built, or downloaded as an MP4 so they can share it off on another channel.
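
For readers following along, here is a rough sketch of what requesting a clip might look like from the viewer's side. The hostname, UID, paths, and query parameters are illustrative assumptions based on how the clipping endpoints are described above, not the definitive API, so check the Stream docs for the real shapes.

```ts
// Hypothetical sketch of live instant clipping from the viewer's side.
// The subdomain, UID, and query parameter names are assumptions for
// illustration -- consult the Stream docs for the actual endpoints.
const CUSTOMER_DOMAIN = "customer-example.cloudflarestream.com"; // assumed
const LIVE_INPUT_UID = "abc123";                                  // assumed

// An HLS manifest for a 30-second clip starting 600 seconds into the
// broadcast, which can be handed to the player already embedded in the app.
const clipManifestUrl =
  `https://${CUSTOMER_DOMAIN}/${LIVE_INPUT_UID}/clip.m3u8?time=600s&duration=30s`;

// The same clip as an MP4 download, for sharing on other channels.
const clipMp4Url =
  `https://${CUSTOMER_DOMAIN}/${LIVE_INPUT_UID}/clip.mp4?time=600s&duration=30s`;

// No authenticated API call is needed; the end viewer's client fetches the
// clip directly.
const mp4 = await (await fetch(clipMp4Url)).blob();
console.log(`Clip is ${mp4.size} bytes`);
```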

And it's also been a lot of fun to work with the engineering team really closely on live clipping.

It sounds like a very simple problem, but it got really technically complex to meet the needs of re-encoding and packaging that video very quickly.

Because with live clipping, like when the moment passes, it's over.

So being able to deliver it very quickly is what makes that feature really important.

And that's also what made it really hard, but also fun to build.

Let's see, number three: the Images team has introduced the media manager for easy uploads.

One of the things that we know for Cloudflare images is we take in a lot of data from our customers and users, and building the infrastructure to be able to pull in direct creator uploads, get those authenticated, and get them uploaded from users directly to Cloudflare images involved a lot of steps that our customers were having to go build themselves over and over again.

And we saw a really great opportunity to provide both the sample worker and a front-end UI to be able to make that happen very seamlessly and very easily.

So there's an easy drag-and-drop framework to add to your application, or an embed code that's just an iframe, to let users drag and drop into the media manager and automatically get their uploads authenticated into our customers' accounts.

So if you're, like, a front-end dev, you can really easily jump in and you don't have to deal with upload protocols or multi-part uploads; you just drop a little JavaScript into your app, right?

Yes. And, you know, adding more support for that.

And, you know, in the long term, what we want to do is add more sources to be able to pull from different places, even if it's not on someone's computer, and also for different outputs.

I'm looking forward to working very closely with the images team to figure out how we can also make that upload widget work for stream.

So that's not part of it yet, but that's what I've got my hopes on. But it's a really great win for the images team today for some huge inbound migrations.

And then last up for the Images team, we've got face cropping. Using machine learning models, specifically RetinaFace, and the Workers AI integration, we can use faces as a weighted center to automatically crop images. So you can take an image of somebody, and when you're making dynamic resizing variants of it, no matter how you have to crop or resize it, the subject of the photo stays centered in those other previews.
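
As a hedged illustration, here is what requesting a face-centered variant might look like through Cloudflare's image transformation URLs. The /cdn-cgi/image URL format and the width, height, and fit options are standard; the gravity=face value is an assumption based on this announcement, so verify the exact option name in the Images docs.

```ts
// Sketch of a face-centered crop via Cloudflare image transformation URLs.
// "gravity=face" is an assumed option name; the rest of the URL format is
// the standard /cdn-cgi/image transformation syntax.
const origin = "https://example.com";      // your zone (assumed)
const source = "/uploads/team-photo.jpg";  // original image path (assumed)

// A 200x200 avatar variant where the detected face stays centered no matter
// how the original has to be cropped.
const avatarUrl =
  `${origin}/cdn-cgi/image/width=200,height=200,fit=crop,gravity=face${source}`;
```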

That's very cool. Also, you're using Workers AI, and we love to dogfood at Cloudflare, so it's really, really cool to hear that.

Absolutely.

Cloudflare stream and images both are, you know, case studies in Cloudflare builds Cloudflare on Cloudflare.

And it's a lot of fun to be able to use all of these tools as customer zero.

I'm curious with Cloudflare calls. We use it a ton internally for our meetings.

I really love the simplicity of it.

You just go to a URL. You don't have to sign in anywhere. It's so easy to use. Don't need a client installed.

Are you able to share any customer use cases or interesting ways that you're hearing users using calls?

So we've got a few customers that have been using calls or our WebRTC support, which is the thing that underpins the calls platform.

Like, we've got a couple of customers using it for AI-generated video.

We've got some customers that have built, I'm thinking specifically of one customer that is in the project management business, and they wanted to be able to make sort of an easy huddle within their project tracker.

So they've been experimenting with that.

But we've had some really close relationships with some early beta customers, for sure.

Got it. Yeah. Super nice to get internal feedback, but also to have close relationships with people trying things out.

That's great to hear.

Cool.

I want to turn it over to Adam. Let's hear about some of the new things that have come out with Pages.

I think there's new things, there's long-awaited things, things we've been teasing for a while.

So Adam, can you tell us a little bit about that?

Yeah. So we have some massive Pages announcements. If you've used Pages before, it's already a really great product.

It helps you get from zero to deployed super fast, and you can use all of the Cloudflare developer platform with it to create your applications.

It's got CI/CD, it's got preview deploys, lots of great functionality.

There have been a few longstanding requests, and we wanted to address those.

So the first one of those would be monorepo support.

Lots of users use Turborepo, Nx, Lerna, these kinds of monorepo tools with their projects.

And in the past, you've been limited to kind of one Pages project per Git repo.

And that is okay, but we want to encourage micro-frontends, splitting up your applications into different pieces.

And that's just kind of the way of modern application development.

So what's really cool is, with this monorepo support now, you can actually have a bunch of different projects in these monorepos and, through the build config, exclude the paths you don't want to trigger rebuilds.

So you're not rebuilding all that.

I mean, you can rebuild all the projects on every change if you want to, but you can also say, for one directory, anytime it gets changed, only rebuild this specific project.

And so therefore, we're able to support monorepos now, which has been a longstanding request.

And it was going to help a lot of users that leverage monorepos for their development workflows.

Yeah, I think something we're hearing across the platform, it's not just what you can deploy, but it's also how you do it.

And people have had to in the past change how they're deploying to Cloudflare to be able to basically manage what we support.

But now we're kind of meeting more people where they're at.

So that's great to hear. Yeah, that's huge. Like that, meeting people where they're at is really the biggest thing, because we definitely want to reduce the friction of getting on the platform and the friction of continuing development.

So this is just another way, like you said, we're meeting people where they're at.

Another way towards that is wrangler.toml support for Pages.

So if you've ever used Workers before, you know that you can use wrangler.toml to configure your Workers project.

And it's really powerful, you can leverage it for bindings, you can leverage it for environment variables, you can leverage it for all sorts of other configuration.

And what was really great, we kept hearing, you know, we'd love to get that support in pages somehow.

And so we were able to actually get that in for full stack pages projects.

So if you're using Pages with a web framework that does full-stack development, or you're using Pages Functions, you're now able to leverage wrangler.toml in those projects for file-based configuration, which is great, because it means you don't have to leave your IDE, you don't have to leave your CLI. You can literally create this file and configure it locally, configure your preview deployments, configure your production environment, all of those things right from wrangler.toml.

And you can play around with it, change your config, you know, enable this, you know, change your binding here to test this D1 database and this KV namespace and whatever you want to, it's a very flexible setup.

And there's a lot there, actually, it's a pretty large feature.

But I'm really excited to get that in the hands of our users.

Yeah, is there anything missing? Are all bindings, all the Wrangler config, supported?

Or is there anything that we scoped down on? Yeah, so there definitely are some differences.

It doesn't work exactly the same way. All the Pages bindings that were previously supported are still supported now, as well.

So that's one thing that we kept, the Pages bindings, but I think there might be some other differences in the configuration.

They're very subtle.

So the biggest thing is, you've got to actually specify your build output directory in your wrangler.toml, because that's how Pages knows what your build output is; that is what you're actually pushing up and deploying.

And so you specify the build output directory, you specify the other configuration fields, and then Wrangler knows, oh, this is a Pages project with a valid wrangler.toml.

And I want to develop against it.

I'm trying to think, but yeah, it's pretty close. The environment configuration is a little bit different as well.

In the past, if you've used Wrangler environments, Wrangler environments allow like named environments, they allow a little bit more flexibility in that respect.

With this specific wrangler.toml file, we're limiting it to local, preview, and production.

And we're really flexible. So if you have just top-level configuration, you can use that for local; if you deploy it to your preview, that'll become your preview configuration.

If you deploy it to your production, it'll become your production configuration.

But we also allow you to override production and preview, which is really nice.

Because again, we're trying to give users the tools they need to meet them where they're at.

And this allows a lot of flexibility in the way that you actually configure your project.

So you could have the same preview and local configuration and just override the production configuration.

You can have the same across all of your project. Or you could keep local and production the same and just override preview if you want.

However you want to do it.
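
To pull the pieces Adam describes into one place, here is a minimal sketch of what a Pages project's wrangler.toml might look like. The project name, binding names, and IDs are made up, and the exact keys may differ slightly from the current docs.

```toml
# Minimal sketch of a Pages project's wrangler.toml. Names and IDs below are
# made up; check the Pages docs for the exact keys your project needs.
name = "my-pages-app"
compatibility_date = "2024-04-04"

# Tells Wrangler this is a Pages project and where the build output lives.
pages_build_output_dir = "./dist"

# Top-level config is used for local dev, and applies to preview and
# production unless overridden below.
[[kv_namespaces]]
binding = "CACHE"
id = "<local-or-default-kv-id>"

# Override only what differs per environment.
[[env.preview.kv_namespaces]]
binding = "CACHE"
id = "<preview-kv-id>"

[[env.production.kv_namespaces]]
binding = "CACHE"
id = "<production-kv-id>"
```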

We just want to give users flexibility. Got it. Yeah, that's great that we let users pick.

And we have opinionated workflows, but also you're able to override that and configure based on what works best for your development workflow.

Yeah, another really great thing about this, too, is that it now allows a little bit better access control.

Because also, you don't have to give users access to your dashboard to make these changes anymore.

You can literally just, you know, if a user has access to your Git repo, they have access to make these changes in a file-based way in your source control, and then you've locked it down.

And you don't have to literally give them access to say your whole account, you know, or anything that you wouldn't want to give them access to.

Got it. That makes a lot of sense. I'm curious, with bringing wrangler.toml to Pages, what was the hardest part of this project?

I think from an outside perspective, it's sort of like, we have wrangler.toml for Workers, it should just work for Pages.

But I know there's a lot of things that you've been thinking through and that you've invested a lot of time in.

Can you share some of the inner workings there?

Yeah, so we did a lot of research at the beginning of this, because like you said, when you have something existing, you know, the first question was, do we just use wrangler.toml?

Or do we create a new configuration file in a whole new format? And we didn't want to do that.

We wanted to stick with something users were already familiar with, we thought there's a lot of value in that.

But when you stick with something users are already familiar with, you also have to be careful about the things you change and the things you don't change.

And so that's where we really wanted to dive into user expectations, both new users that have never used Workers before and users that have used Workers before and already have some kind of mental model and concept.

And so as we did that, we obviously started around this idea of simplicity; we wanted to give a very simple, elegant solution that users could just dive right into and get to work with.

We also are constantly thinking about convergence.

I know we've talked about that in multiple blog posts, how ultimately we'd like to see Workers and Pages come together more.

And so you're seeing us bring more functionality that's been existing in workers to pages.

And so in that vein, too, we're kind of thinking, how do we get the best of both worlds and start moving towards that converged world.

One of the really cool things we did to help users is we actually created a migration command.

So you can look at the documentation that we put out today, and you can see a brand new Wrangler command that'll actually let you download your config.

So if you've been configuring your Pages projects via the dashboard already, and you've got, you know, maybe a complicated setup, you can just pull that down directly into a new wrangler.toml file that we give to you out of the box.

And so it should help you migrate really easily to this new file-based config.
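
As best I recall from the announcement, the migration command looks roughly like this; the exact name and arguments may have shifted since launch, so check `wrangler pages --help`.

```sh
# Pull a Pages project's dashboard configuration down into a wrangler.toml
# (command name approximate; verify against the current Wrangler docs).
npx wrangler pages download config <PROJECT_NAME>
```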

Oh, that's great. Yeah, that was one of the things I kind of realized early on. I thought, if we don't provide users the migration path, it's going to be really annoying to have to go in and, like, look at all your configuration.

And so I thought, how do we help users just run a command, get a file and push it up?

Good to go. Yeah, it's always the little things with DX that end up tripping people up.

So it's great when we think ahead and can kind of get people to a point where it's so easy to do.

That's awesome to hear. Cool. Anything else around Pages to share?

Yes, absolutely. So we've had workers integrations for a while.

Well, good news: there are now Pages integrations.

So in the same way, you can take many third-party database providers, you know, including our own D1, and with just one click get that integrated into your project; the environment variables get populated for you, secrets get populated for you.

So if you like that kind of no code, just click it and forget it kind of set up.

We provide that now through pages, which is fantastic.

And again, you see the thread through all of this: with Pages, we've done the SPA thing, the single-page app treatment.

What we're doing now with Pages is moving towards this world where full-stack Pages projects, full-stack application development, is possible.

And it's not just possible, it's actually easy and enjoyable.

And it has a good developer experience. So that focus on full stack, you're going to see that more and more throughout the Pages framework and Pages product.

Yeah, yeah. And all these improvements always set us up towards being able to have a converged world where users don't have to pick between workers and pages, you just start building, and then we have all the options available for you.

So that's huge progress to see. Cool. Let's kick it off to Jacob.

So I know that there were some announcements around production safety released today.

And you have a huge, huge project across Cloudflare to build the Cloudflare SDK.

Jacob, we'd love to hear a little bit about how this idea came about.

And also, you were just telling us before the call about how you really championed this internally.

Tell us a little bit about how this kicked off. Yeah, so this has been something that's been in the works for a long time.

And something I didn't mention previously was that like, when I was a customer prior to joining Cloudflare, this was one of the real sticking points for us to the point where we got in touch with the PM of the API team at the time.

It was like, hey, is this something you'd be interested in if we start contributing back?

And, as all good things come about, I ended up joining Cloudflare with a focus on the API and the SDK side of things.

And sort of brought a lot of that knowledge and a lot of that history that comes along with it.

Up until, I would say, about 12 months ago, maintaining the libraries was a very, very challenging process.

Not just internally as a Cloudflare employee, but as someone who was using the various languages.

And the reason for that was that along the way, there was a lot of context and opinionated baggage that had sort of come along.

So in Go, it looks like this.

But then in Python, it looks like something different. And all those language nuances are the things that make it great from a developer experience, but make it horrible to maintain internally.

And it's kind of funny, when you look at the internal teams and what libraries they've contributed to, you can tell which ones they're very comfortable and familiar with because you'll have some that'll be like, oh, we really want to go big with Cloudflare Go or Cloudflare PHP or something like that.

And that's just where their hearts at. So we sort of started this project, I would say, about two years ago.

Internally, we used to use JSON Hyperschema.

And it's just a way of describing your endpoints and services in a way that you can plug into documentation and other tools to do that sort of thing with.

JSON Hyperschema is great. It's kind of lacking in a lot of spots. So we sort of said, hey, this isn't working for us anymore.

We had a lot of custom extensions, a lot of custom annotations to get basically the output that we wanted and to be able to use them in the way that we wanted.

So we wanted this big migration to get over to OpenAPI.

And that in itself was a massive, massive task because there's no one-to-one comparison with them.

And at the same time, we kind of had to find a way of taking what we depended on and relied on for our external services and bringing it over to OpenAPI.

So yeah, we did that migration, which was really, really cool.

We ended up getting the new API docs site out with it. And that was a huge thing, because we built these foundations where we can now say, hey, we have described these services in such a way that we can put them in other places.

And a lot of people sort of think about OpenAPI and they're like, oh, it's really great.

You can put it into like Postman and do some requests or like you can view what the documentation is.

And that's a lot of the use case. But where we're starting to head with this is like, we want it to generate everything that we start interacting with.

So the first step in that was the SDKs. And initially it was kind of like, oh, this is going to be like a fairly straightforward process.

We had a bit of an idea on what we were going to do, put together like some language design documents and we're like, hey, in Go, it's going to look like this.

In PHP, it's going to look like this and so on and so forth.

And then you start getting into like those language nuances, like pointers in Go and like PHP not caring what certain types are in certain versions and all those good little things.

So you start walking down that very, very treacherous route of trying to make things look nice and usable, but at the same time, be consistent.

So did all that sort of thing, put together these design documents and then try to work out like, are we going to generate these in-house?

Are we going to speak to someone else about doing this? And we kept hitting problems with the size of our schemas.

And to put things into perspective, Cloudflare at the moment has about 1300 endpoints.

That's not like methods.

That's not like the parameters that you can supply to them. That's just endpoints.

So like slash account, slash zones, that's two. There's 1,298 others that we have documented and plenty more that we don't.

So yeah, we hit that early on and it was like, well, maybe we're going to have to build this ourselves and go down that route.

Luckily, we ended up finding a vendor, Stainless API, who have solved a lot of these problems.

So we were able to just focus on getting that schema really high quality and really specific for what we wanted to use, which was like a huge win for us.

And the good thing about it is we've been able to launch these SDKs and have them feel really nice.

You don't know what tool generated them. There are obviously little annotations in the files and that sort of thing about them being auto-generated and whatever else.

But yeah, a lot of the off the shelf tools are like, oh, you can tell this was generated by Java because it's Java flavored Ruby and all that sort of thing, which no one actually wants to work with.

So yeah. And the thing about the SDKs now is it's just building on those foundations that we already have.

And going forward, we're doing the same thing for our Terraform provider.

We're investing in it so that when the access team or the pages team add one new attribute in the API, they don't need to make six different GitHub repository PRs to have this functionality available to everyone.

At the moment, I'm waking up and there's like 10 to 15 updates in these API schemas that just automatically go out.

That's new attributes, that's new descriptions. And there's even been a couple of new products that have just snuck in and you don't even really realize.

We have a little bit of work to do with the mappings and some of the namings and that sort of thing.

But this is really starting to pave the way for us to, as I think Adam mentioned earlier, really meet customers at their entry point.

So if you're using Ruby internally, you don't care if we have a Go SDK.

You need a Ruby SDK to integrate with your application. So that's a huge, huge thing for us.

And yeah, as I said, there's just more cases where we're going to start expanding on these schemas and the integrations that we can build from it.
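
For a sense of what the generated SDKs feel like, here is a short sketch using the TypeScript one (the cloudflare npm package). The client option and method names are from memory of the generated v3+ SDK and may differ slightly, so treat them as approximate.

```ts
// Sketch of the generated TypeScript SDK in use. Option and method names are
// approximate recollections of the Stainless-generated `cloudflare` package.
import Cloudflare from "cloudflare";

const client = new Cloudflare({
  apiToken: process.env.CLOUDFLARE_API_TOKEN, // scoped API token (assumed env var)
});

// Every endpoint described in the OpenAPI schemas gets a generated method,
// so new attributes reach the SDKs without hand-written PRs.
for await (const zone of client.zones.list()) {
  console.log(zone.id, zone.name);
}
```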

Yeah, that's really awesome to hear.

This is so powerful and a huge initiative across Cloudflare.

So kudos to you, Jacob. It solves a lot of problems, even just from enforcing OpenAPI schemas within Cloudflare, which we know has been a challenge for every team trying to get Terraform support in before a product goes GA.

So even, like, Adam and I have been talking a little bit about using the Cloudflare SDK in Workers as well.

So lots of things that can be built on top of it. So super powerful to hear.

I want to turn it over to Brendan. Tell us about rate limiting in workers.

I think this is one of those features where we hear on the Discord and we hear on Twitter, rate limiting when?

And it's finally come out. So we'd love to hear a little bit about it.

Yeah, so it's great to have Jacob on this call because Jacob has been in my ear about rate limiting for a year now, because riffing on that Cloudflare API example, we have this huge API and we need to set different rate limits for different endpoints ourselves and for different customers.

And it becomes a little bit tricky or in some cases, not even possible to configure it all up front via some rules.

You kind of need to access rate limits in your own code.

Maybe you need to rate limit something only after you've gotten to a certain point, or you need to set a rate limit in a kind of per-customer, per-endpoint, or other per-resource way.

And so what we did was kind of took the existing rate limiting rules system and infrastructure that's built into Cloudflare's web application firewall.

And we made a way to add a binding to your worker that lets you communicate with that and enforce rate limits from the code within your worker.

And so it's pretty simple. You kind of add a binding to your worker, you add three or four lines of code, you can add multiple different rate limits to your worker and kind of mix and match and combine different pieces.
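
Here is a minimal sketch of what those few lines look like inside a Worker. The binding itself is declared in wrangler.toml (at launch it sat under an unsafe bindings section with a limit and a period); the worker-side shape below reflects the docs as I recall them, so double-check the exact types and config keys.

```ts
// Sketch of a Worker using the rate limiting binding. The binding named
// MY_RATE_LIMITER is assumed to be declared in wrangler.toml with a limit
// and period; the limit() shape below is from the docs as recalled.
interface Env {
  MY_RATE_LIMITER: {
    limit(options: { key: string }): Promise<{ success: boolean }>;
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Rate limit per endpoint. The key could just as easily be a customer ID
    // pulled from a header or token, which is the point of doing it in code.
    const { success } = await env.MY_RATE_LIMITER.limit({ key: url.pathname });

    if (!success) {
      return new Response("Too many requests", { status: 429 });
    }
    return new Response("OK");
  },
};
```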

But it really is what's going to let us do this even with our own API. We've kind of built this for ourselves and now we're bringing it out to customers externally.

Yeah, it was definitely a huge thing to give a little bit of insight to our internal rate limiting story.

It's not just one thing in one place. We have layered rate limiting; some of it's contextual, some of it's not.

For some of it, you need to know what the requester's credentials are, or you need to know something from another data source.

And even though we've been able to do that in our core infrastructure for many, many years, as we move parts of the control plane closer to our edge, we needed to take that context with it, and there wasn't a great way of doing it.

And yeah, very, very excited to finally get our hands on this and be able to throw all of our traffic at it and the contextual rules that we have today.

That's really cool. We have a few more minutes and then there was one more along this line of production safety and confidence.

There was one more announcement I think we had towards that. Tanushree, right?

Yeah, yeah. So one big thing we announced around this is gradual deployments for workers.

So one of the problems that some of our customers are facing, especially when you have production applications that handle a lot of traffic, is that workers is very fast.

You click deploy and your code is propagated everywhere within a matter of milliseconds.

But when you have an application that has a ton of traffic, you can't always catch bugs or edge cases locally.

So you want to be able to gradually deploy out your code, maybe let's say to 1% or 0.1% of your traffic before going out fully.

And we have a lot of customers, both internal teams that use workers and external customers that were sort of building around this problem.

So they were maybe using a worker in front of their worker that acted as a gateway and was able to basically do that split of traffic.

But we have a lot of customers that are building on workers and we always want to offer native tools to them.

And so we built gradual deployments for both workers and durable objects.

They're super simple. Today they work on a percentage basis.

So you give us two versions of a worker, tell us the percentages that you'd like for them, and then you're able to kind of roll out changes accordingly.
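
From the CLI side, the flow looks roughly like the sketch below. The subcommand names come from the gradual deployments announcement; at launch they sat behind experimental flags, so check `wrangler versions --help` for the current invocation.

```sh
# Upload a new version of the Worker without routing any traffic to it yet
# (at launch this required an experimental flag).
npx wrangler versions upload

# Then start a gradual deployment; the command walks you through choosing
# the versions and the percentage split, e.g. 99% old / 1% new.
npx wrangler versions deploy
```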

And we also support showing you which versions are actually running in metrics as well.

So if you've just made a release, you want to be able to see that release, see the status codes that are coming out of your worker.

In real-time logs, you have the ability to filter by worker version.

All of our analytics and logging also include the worker version.

And on the dashboard, we show you, it's really cool.

Brendan was just sharing a screenshot earlier, but we show you every time you deploy a new version of your worker, you can actually see the traffic shifting from one to the other.

So it's a really, really great way to get a sense of like, hey, what is my worker doing?

Is this acting as expected? If there's something wrong, I can roll back those changes.

So some powerful tools that we've launched.

We know there's a lot of stuff to do here as well. Some of the things we've been talking about are the ability to hit a specific version of a worker that you have in a gradual deployment.

If you want to be able to test specific behavior.

Also coming soon is the ability for us to monitor your metrics and maybe do things on behalf of users.

So auto rollout or rollback based on certain metrics that users provide us and that we have visibility into.

So yeah, lots of exciting things to come around production readiness.

But really excited to get all of these features out and get feedback from users and see what some of the usage looks like.

Cool. With that, we are wrapped up for the day.

So everyone watching, stay tuned for tomorrow's announcements.

We're not done yet. And yeah, excited to, I think some of us are on tomorrow again.

So excited to see you back here tomorrow. Thank you. Thanks. Bye.

Bye. Thanks, everybody.
