💻 What Launched Today at Full Stack Week
Presented by: Kabir Sikand, Albert Zhao, Kristian Freeman
Originally aired on November 19, 2021 @ 8:30 PM - 9:00 PM EST
Join our product and engineering teams as they discuss what products have shipped today!
Read the blog posts:
- The Cloudflare Developer Expert Program
- Workers Now Even More Unbound
- Developer Spotlight: Automating Workflows with Airtable and Cloudflare Workers
Visit the Full Stack Week Hub for every announcement and CFTV episode — check back all week for more!
Full Stack Week
Transcript (Beta)
Welcome back to another edition of What Launched Today at Full Stack Week. It's day four of us doing this, announcing all kinds of full stack things for developers.
So I am here with Kabir and Albert who will introduce themselves in just a sec.
I'm Kristian. I manage our developer advocacy team at Cloudflare. And you've probably seen a bunch of great blog posts from people on my team shipping all kinds of open source stuff.
So this has been a really exciting week to see all of my developer advocates shine.
Very proud of them. But yeah, there's all kinds of other amazing teams shipping here as well.
And Kabir has been doing a lot of great blog posts.
You want to introduce yourself Kabir? Yeah, I'm Kabir Sikand. I'm a product manager on the Cloudflare team.
I'm working with the workers product specifically.
Great, great week ahead and kind of behind us as well. Yeah, we're still in the midst of it.
It seems like we've done so much already. So, okay, Albert, how's it going?
What's up? I'm Albert. I'm the Workers community manager, soon probably to be a broader community manager, because we're expanding the Discord too.
A lot of exciting announcements. Kabir has had a very busy week.
And we all had a diverse conversation. Yeah. Yeah, totally. Well, nice to see you both.
Kabir and I have done a couple of these this week and they're always really interesting to kind of hear the perspective of the different authors as we publish things on the blog.
If somehow you have not seen any of the blog posts but have seen the Cloudflare TV segments, you can go to blog.cloudflare.com and catch up.
We've been publishing stuff all throughout the week.
And if you go on the Cloudflare TV schedule, you can also go back and rewatch a lot of the segments where the different PMs and authors cover what we've been shipping, all kinds of stuff, workers and pages, and really just across the entire Cloudflare developer ecosystem.
So yeah, we have a short amount of time, and so we're going to try and cover the stuff that we released and talked about today.
And we're going to start with our developer spotlight. So over the last couple days, we've been showcasing different developers in our community, some working at big companies and places, you know, and others who are just doing cool and interesting stuff as like solo developers or small teams.
It's really interesting; kind of, I don't know, it expands things. I'm trying to think of the right words for it.
There are, you know, developers of all different sizes and stuff like that.
So today is focusing on Jacob Hands. And you can find this blog post.
It's called Developer Spotlight: Automating Workflows with Airtable and Cloudflare Workers.
Erwin, who's a PM on our team, helped write this up with Jacob. Jacob is also a great community member of our Discord who helps out there a lot and shows all kinds of interesting projects he's working on.
And this one is near and dear to my heart as an Airtable fan; it's basically about how he uses Cloudflare Workers and Airtable to manage an online store.
Specifically, I like the way that's talked about in the blog post.
It says it's an online store for meat, a very perishable good.
So there's time constraints and all kinds of things that need to kind of happen quickly, or you have a lot of, like, literally rotten food, which sounds horrible.
But Jacob has been using Workers and Airtable to build basically this entire workflow tool for managing, I would assume, how they process orders, how they ship orders, and things like that.
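To make that concrete, here is a minimal sketch (not Jacob's actual code) of what a Workers-to-Airtable order intake could look like. The AIRTABLE_API_KEY and AIRTABLE_BASE_ID bindings, the "Orders" table, and the form field names are all hypothetical.

```ts
// Hypothetical order-intake Worker: accepts a form POST and creates a
// record in an Airtable base through Airtable's REST API.

export interface Env {
  AIRTABLE_API_KEY: string; // secret binding (hypothetical name)
  AIRTABLE_BASE_ID: string; // plain-text var (hypothetical name)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }

    const form = await request.formData();
    const fields = {
      Name: form.get("name"),
      Email: form.get("email"),
      Item: form.get("item"),
      Quantity: Number(form.get("quantity") ?? 1),
    };

    // Create one record in the (hypothetical) "Orders" table.
    const res = await fetch(
      `https://api.airtable.com/v0/${env.AIRTABLE_BASE_ID}/Orders`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${env.AIRTABLE_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ records: [{ fields }] }),
      }
    );

    return res.ok
      ? new Response("Order received", { status: 201 })
      : new Response("Failed to record order", { status: 502 });
  },
};
```

A real workflow would layer on validation, notifications, and order-status updates, but the Worker-to-Airtable call keeps the same shape.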
So it's a really interesting read.
Albert, I know you know Jacob as well. Did you get a chance to read this today?
Any thoughts on it? Yeah. Jacob is a developer in the middle of nowhere, Texas.
He has multiple fridges he has to manage. Not like as an industrial worker, more as like their IT person.
And workers keeps things fresh. He's able to keep meat in the middle of nowhere fresh.
One thing he did with workers that was really interesting, I don't think it was mentioned in the blog, was he needs to have, like, a dashboard of the different fridge temperatures where they store stuff.
And he just uses a worker on a cron trigger that just grabs that fridge data.
Because fridges are now computers too. That's neat. It goes to something I think we've covered a couple of times in the developer spotlight, which is this idea that when people start building with Workers, they tend to reach for it for everything.
That's what that reminds me of. Like, he used workers and used Airtable for this tool.
And then, you know, there's all this other stuff that he would like to do.
And if you need to deploy just really fast functions, great developer experience, it makes sense to reach for workers.
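As a rough illustration of the cron-triggered fridge dashboard Albert described (not Jacob's actual code), a scheduled Worker could poll a sensor endpoint and cache the latest readings in KV for a dashboard to read. The sensor URL and the FRIDGE_KV binding here are made up.

```ts
// Hypothetical scheduled Worker: runs on a cron trigger, fetches fridge
// temperatures from a sensor endpoint, and stores the latest reading per
// fridge in Workers KV.

export interface Env {
  FRIDGE_KV: KVNamespace;
}

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    // Hypothetical endpoint exposed by the fridge sensors.
    const res = await fetch("https://sensors.example.com/fridges");
    if (!res.ok) return;

    const readings = (await res.json()) as { id: string; tempF: number }[];

    // One KV entry per fridge; a dashboard can read these back on request.
    await Promise.all(
      readings.map((r) =>
        env.FRIDGE_KV.put(
          `fridge:${r.id}`,
          JSON.stringify({ tempF: r.tempF, at: controller.scheduledTime })
        )
      )
    );
  },
};
```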
Yeah. And I was going to say, when you mentioned that he lived in the middle of nowhere, Texas, I remember now, I literally learned what the panhandle of Texas is from Jacob.
He was like, yeah, I live in the panhandle.
And I was like, what's that mean?
I live in Texas. I should know that. But it's okay. I'm a Texas transplant. So, I think I get away with it.
But I learned that from him. We used to have, well, it still exists: there's a place called the Panhandle in San Francisco, which is where I used to live.
So, that's what comes to mind when I think panhandle. It's just this giant park in the middle of the city.
It's going to be my new excuse for not knowing.
I was like, oh, you know, I thought you were talking about the San Francisco Panhandle.
And it's like, I've never lived there. I've been there like three times.
So, anyway, I think this is a great blog post. And also, like I said, there are a lot of developer spotlights; we've been publishing these every day this week.
And we're also planning more after full stack week. And we have a specific page for them.
If you go to the blog post and scroll down to the tag section and click Developer Spotlight, we have a kind of landing page for all the different developer spotlights.
We did one with Chris Coyier from CodePen and CSS-Tricks, which was really interesting.
And then we did one with James Ross, who's the CTO at NodeCraft, which is a games hosting company.
And then our third one, or I guess our fourth one that we published this week, is a Durable Objects-driven multiplayer game.
I think it's multiplayer.
At least it's an online game. I don't know if it's actually multiplayer, but it uses Durable Objects to do some really crazy data management, like from a phone gyroscope, and all kinds of really interesting stuff.
So definitely, yeah, definitely check all of those out.
You can find those if you just look for developer spotlight on our blog.
And like I said, we'll be publishing more. I guess if I'm going to be a good host, I should also say: if you feel like you should be in the developer spotlight, if you're building cool stuff, you can reach out to Albert or me in our Discord, and we'd love to hear from you and feature you.
So yeah.
Yeah. I'll put another note on that one, just a plug for the Discord. If you have anything, large or small, complex or not, that you want to share with the community, it's a really great place and everyone's very supportive.
And there's also really great feedback, even if you're lost.
It's a really great place with a strong community that I've definitely found as I've kind of been developing some stuff on the side too.
Yeah. Yeah, absolutely. Workers.community to get your invite to that.
Cool. Okay. Well, moving on to... I already forgot which one we're going to cover next.
I'm going to be honest with you; a little behind the scenes there.
I forgot which one we're doing next. Yeah. Let's talk a little bit about some of the announcements that we had today. One of them was with regards to cron triggers, maybe a good segue from Jacob's Airtable integration.
So we've kind of been working on this, and really the whole idea of this week has been around: what does the future of compute look like?
Where is it headed? The "network is the computer" idea is getting a little stronger, especially with Cloudflare's Workers product.
So the topic of today's featured post was really about Workers removing more and more of the limits and pushing the boundaries there.
So the first thing I'll mention on the cron side is we've really been trying to make sure that it's not just your latency-sensitive workloads that you can run on Cloudflare Workers.
It's everything. If you have something that's data-intensive, if you have something that is CPU-intensive, we can run it on Workers.
So the first step there, for long-running CPU jobs, is that for your scheduled Workers you can run up to 15 minutes of execution time, which is a lot.
What we're doing right now is, if you set your cron trigger to be hourly or greater, you're going to be able to use up to 15 minutes of CPU time.
That means you can do something like taking large amounts of data and maybe backing it up.
Someone brought up internally, I might be able to take something from KV if I needed redundancy on some of that data and put it into a Postgres database using the database connectors that we announced earlier this week.
I can do that every hour and take up to 15 minutes to do it if I need to, because it's data intensive and I might need to batch my jobs into that Postgres database.
So that's the type of stuff that can now be done with the 15 minutes of cron execution time.
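Here's a sketch of what that hourly backup could look like as a scheduled Worker, assuming the Unbound usage model and an hourly cron trigger (for example "0 * * * *"). Instead of the database connectors Kabir mentions, this simplified version just pages through a KV namespace and ships each batch to a hypothetical HTTP backup endpoint.

```ts
// Hypothetical hourly backup job: walks a KV namespace page by page and
// POSTs each batch of values to an external backup service.

export interface Env {
  SOURCE_KV: KVNamespace; // hypothetical binding for the data to back up
  BACKUP_URL: string;     // hypothetical endpoint that receives batches
}

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    let cursor: string | undefined;

    do {
      // KV list() is paginated; walk it one page at a time.
      const page = await env.SOURCE_KV.list({ cursor, limit: 1000 });

      // Read the values for this page (serially, for simplicity) and batch them.
      const batch: Record<string, string | null> = {};
      for (const { name } of page.keys) {
        batch[name] = await env.SOURCE_KV.get(name);
      }

      await fetch(env.BACKUP_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ scheduledTime: controller.scheduledTime, batch }),
      });

      cursor = page.list_complete ? undefined : page.cursor;
    } while (cursor);
  },
};
```

With up to 15 minutes of CPU time on an hourly trigger, a job like this can afford to batch fairly large amounts of data per run.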
And that'll be rolled out to more and more of our products over time as well.
But if you're curious about that, you just head over to your settings tab and select the Unbound usage model.
And then whenever you add a cron trigger, starting tomorrow I think, you'll see a duration indicator.
So it'll tell you how much time you have for that trigger.
I'm excited to see what you want to build with that.
If you guys have ideas about it, or you don't have the time to build something, but you really, really want to, again, shoot us a message in the Discord, on Twitter, whatever it might be.
We're curious to hear what you want to build with this.
This automated tooling is just really, really cool. Even externally, I remember having a conversation with a developer who wanted to talk to IMDb's API to pull the freshest data for these movie titles.
He was building an app with us and he couldn't do it at the time.
But now with 15-minute triggers, this would definitely be an unblocker.
And just because we live in a world of APIs now, being able to pull that much data in an automated way, the way you want to pull it, using the Worker to parse it in transit, is just amazing.
Yeah, certainly. It's kind of timely that Jacob wrote his post today, because it is all about kind of talking with all these different services and APIs and bringing the data together into one place.
And workers is a really powerful way to do that, whether it's on a request response model or based off of a trigger, like a cron trigger.
And kind of tied into that: if you do have data-intensive workloads and you are using the Unbound model, previously we had charged egress fees for those.
And that's kind of where we've come from. It's an industry standard.
It's something that everyone else does in the space. But Cloudflare has always been a little bit more about flipping the script, preventing vendor lock-in.
If you guys are curious, some of our leaders wrote a post earlier this year around egress fees on other providers, specifically the large cloud provider that everyone tends to use, AWS.
There was an analysis done on what those egress fees are and what they actually mean.
One of the key takeaways there is a lot of infrastructure providers tend to pay for bandwidth based on capacity, but they'll bill their customers based off the amount of data they're transferring.
Sometimes a little bit of an unfair model. That tends to lend itself towards vendor lock-in when the fees are really high.
So depending on where your data lies, you might end up choosing a cloud provider for its compute resources when it's not necessarily the best compute for the job you want to get done, but it happens to be a lot cheaper because it's in-network.
What we're trying to do at Cloudflare, we don't want you to feel that way when you're using the workers platform.
So we've dropped Unbound egress fees. I know previously, in a blog post where we talked about Durable Objects, we even talked about egress fees there.
That's not happening. We want to make sure that this is the most flexible and affordable platform on the market, and we're moving in that direction.
In that vein, R2 storage was announced with no data egress fees.
And there's the Bandwidth Alliance, which covers not just data coming from Cloudflare, but data coming from other cloud providers to Cloudflare.
Partners in our Bandwidth Alliance often offer very discounted or even free egress to Cloudflare.
So it's kind of a big point of conversation, and it was just a very, very obvious change for us, and very welcome, in my opinion, on the platform.
It's really cool.
We are removing egress from our pricing. It's a very strong stance to take.
I did get an interesting question, though, about the 15-minute cron triggers.
On Unbound you do have to factor in duration: how much does it cost if you run a 15-minute cron trigger?
If you run a 15-minute cron trigger, the duration would be 15 minutes.
I don't know what that is in milliseconds.
I'd have to do some back-of-the-napkin math there. But the multiple is 0.0125, I think.
I'll need to take a look at that; let me double check. But basically, we're allocating 128 megabytes, and duration is calculated in gigabyte-seconds.
So you're going to look at that 128 megabytes as a fraction of one gigabyte, and then look at our gigabyte-second pricing.
So let me do some back of the napkin math on the side here, and I'll get back to you on that question.
But effectively, the answer is, if you're using an Unbound cron trigger, and even just the Unbound model in general, it starts to get a lot more affordable here.
If you have data-intensive workloads, it makes a lot of sense to use that model.
The invocation charge is, I think, 15 cents per million, plus the duration charge.
So it's a very, very competitive, very affordable model. And you don't have to worry, really, about CPU time.
Most use cases are going to fall well within any of the limits that we have.
Kristian, a fun project we could do is build an Unbound calculator down the road.
Yeah. Yeah, we definitely could. But it is cheaper, yeah, than what else is out there.
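As a rough sketch of what that calculator might look like, following the 128-megabyte, gigabyte-second model Kabir describes: the per-million-request and per-gigabyte-second rates below are placeholders, so check the current Workers Unbound pricing for real numbers.

```ts
// Back-of-the-napkin Unbound cost estimator. Rates are placeholders, not
// official pricing; the model is: duration billed in GB-seconds with 128 MB
// (0.125 GB) allocated per invocation, plus a per-million-requests charge.

const GB_ALLOCATED = 128 / 1024;          // 0.125 GB per invocation
const PRICE_PER_MILLION_REQUESTS = 0.15;  // placeholder, USD
const PRICE_PER_GB_SECOND = 0.0000125;    // placeholder, USD

function estimateMonthlyCost(invocationsPerMonth: number, secondsPerInvocation: number): number {
  const gbSeconds = invocationsPerMonth * secondsPerInvocation * GB_ALLOCATED;
  const requestCost = (invocationsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const durationCost = gbSeconds * PRICE_PER_GB_SECOND;
  return requestCost + durationCost;
}

// Example: an hourly cron trigger (~720 runs a month) using the full
// 15 minutes (900 seconds) each run is 720 * 900 * 0.125 = 81,000 GB-s,
// before subtracting any GB-s included in the paid plan.
console.log(estimateMonthlyCost(720, 900).toFixed(2));
```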
Yeah. And we do offer some just within the paid plans that we have.
I think there's an amount of gigabyte-seconds that you're able to use on a monthly basis.
It's something like 400,000. One other thing that we announced alongside dropping egress fees, again in the vein of removing limits on the Workers platform, concerns a limit we've had for a long time: the idea that you're limited to 30 scripts.
We're starting to move in the direction of really lifting that limit. So as of today, you're able to use 100 scripts on the Workers platform.
If you need more, feel free to reach out.
We're always happy to be flexible. The other thing that that kind of means with the advent of services and environments is that those scripts are actually equivalent to 100 environments.
They're not necessarily, they don't equate to deployments.
And just a little bit of a refresher on what that hierarchy looks like now, instead of thinking of your scripts as a single worker, think of them as a service.
And then within each service, you can have multiple environments.
Those environments can be kind of the classic thing that you'd expect out of sets of environments, maybe canary, prod, staging, dev, maybe some other branch that's related to your deployment flow.
And then within each of those environments, there's multiple deployments, one of which is active.
And so effectively, each of those equate to what used to be called a script.
If you have four environments on every service, or three environments on every service, you'll have somewhere between 25 and 33 services that you're going to be able to create on Cloudflare just out of the box.
And alongside that change, because it's very welcome for the new environments and services model that we've built out, we're also announcing that we're going to be allowing larger script sizes as well.
And there's a whole list of reasons why you might want to do that.
If you're using WebAssembly libraries, like if you want to compile Golang in, you can't really do that within a one-megabyte script size.
So we want to be able to increase that so we can allow for more and more use cases on the platform.
And an interesting thing that I've heard from just some folks today is, if you're porting something over from another serverless platform, you kind of have to rethink a little bit of it to fit within that one megabyte limit.
Or maybe you're using libraries that are standard in Node that you want to bundle into your worker.
Those are things that we want to allow you to do in the future.
Yeah, as someone who ran into the script count limit either before I started working here or very quickly after, because I was just obsessed with Workers, this is very needed.
All of this stuff is, but that one for me is like, because I just use a worker for everything, I'm like, oh, okay, nice.
So yeah, that'll be great.
I think a really cool thing in the blog post that kind of blew my mind is the esbuild example, where, because the script sizes are larger, someone is able to take esbuild and run it entirely inside of a Worker.
And so there's a little example where you can literally import packages off of the unpkg CDN and then see them get bundled and run inside of a Worker in real time in this VS Code-like editor, which is just amazing.
That's esbuild.developers.workers.dev.
That was the mind blowing thing to me. Yeah, it was a really, really cool example that one of our colleagues on the product team here made.
It's amazing what you can do with just a little bit more space on the Workers edge.
Yeah, and I know that people in our Discord constantly ask about the number of scripts and script size and things like that.
So I think this will be a thing that people are really excited about.
So yeah, Albert, I feel like that is one of the top three questions we get, maybe.
Would you say that's an exaggeration or is that like accurate?
Definitely a top three or four. Yeah. Yeah. We want to make sure we're listening to what the folks in the community have to say and what kind of the needs are.
So if you do have feedback, again, get engaged with the community and join the Discord.
And if you are curious about that esbuild example, you can check out the blog post.
There's a link to that example in the second to last paragraph. And there's also a form you can fill out if you do need larger script sizes while we're in early access to that.
And the last thing about the script quantity: it's good to remind everyone that this also allows people to break up their application into different parts a lot more easily and cleanly.
It doesn't solve something we occasionally get requests for when it comes to an increase, which is wanting to build a specific config for, let's say, each customer.
You're still going to hit 100 real quickly that way.
So we still recommend you store your config stuff in KV and have a general API worker.
But yeah. Yeah. It makes a little more sense, right?
So if you wanted to bundle a specific Worker script for a customer, having a template, working off of that template, and putting the customer-related specifics somewhere else makes a lot of sense.
Or even doing the conditional routing somewhere up top within that worker.
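A minimal sketch of that pattern: one API Worker that loads per-customer config from KV (keyed by hostname here, as one possible choice) and does the routing up top. The CUSTOMER_CONFIG binding and the config shape are hypothetical.

```ts
// Hypothetical single API Worker serving many customers: per-customer
// settings live in KV instead of in separately deployed scripts.

export interface Env {
  CUSTOMER_CONFIG: KVNamespace;
}

interface CustomerConfig {
  upstream: string;       // where to send this customer's traffic
  featureFlags: string[]; // arbitrary per-customer settings
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const host = new URL(request.url).hostname;

    // One KV entry per customer instead of one deployed script per customer.
    const config = await env.CUSTOMER_CONFIG.get<CustomerConfig>(host, "json");
    if (!config) {
      return new Response("Unknown customer", { status: 404 });
    }

    // "Conditional routing up top": forward to the customer's upstream.
    const upstreamUrl = new URL(request.url);
    upstreamUrl.hostname = config.upstream;
    return fetch(new Request(upstreamUrl.toString(), request));
  },
};
```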
But yeah. A little bit just on the concept of getting engaged with the community.
Another big announcement that we made today was the Developer Expert Program.
Albert, you want to talk a little bit about that? Yeah. Totally. Nice.
Very smooth transition. The Developer Expert Program is a way we're rewarding and recognizing our power users who have been building with us and giving us feedback.
We already sent a first batch of invites. It's good to know what you get as a developer expert.
You get frequent meetings with us, people on this call, for example.
And we also give you access to early betas because we really care about how you feel in early development.
The idea, by the way, sort of came from the fact that we ship a lot here.
And some people kind of wonder why we ship in open betas all the time.
It's only because we never want to lock people into an unhappy path when they use the product.
If we build the whole API for everything and then there's a sort of usage mental model we weren't aware of, that person might be in some trouble.
And so, yeah, over this past year, we just started realizing the community was a huge source not just of squashing bugs, but also of giving us direction on what should be prioritized as features first, and what would be better or more inclusive happy paths through a product.
So, yeah, you can apply to this Developer Expert Program, by the way. I think you'll find all the details on the Cloudflare blog.
But, yeah, it's hopefully a nice program.
It feels a bit VIP, but at the same time, you get to be a part of how the future of products we build get shaped.
Yeah. I want to kind of double tap on that point there.
The way that we like to build products here, it's very, very important, especially for a developer-focused product, to have our ears to the ground.
What is the community thinking? Where are their heads at? With developer-focused products, it's hard to get that feedback sometimes.
I remember back in the day when I was first kind of getting my hands dirty with Node, you kind of get into what we used to call dependency hell, right?
And you go down this weird rabbit hole, and eventually you just give up, and no one ever hears that that happens to you.
They probably never even knew that you were signing up for that product or trying to use their tool in the first place.
So, having programs like this helps us a lot.
Having an engaged community, having feedback, and we do take it seriously.
I like to chat with folks on the Discord every so often when I'm thinking about something and they've kind of had an idea about it and they've brought it up.
So, it's a great way to get feedback from the community and make sure we're building the right thing at the right time.
Also, Kabir, because you're a product manager and you talk to the largest customers, the big-name customers, it sounds like it's sometimes a challenge as a PM to balance that with the everyday developer, who we don't want to exclude if we're building the next thing for Workers.
Yeah, exactly. One of the things that our leadership often talks about and we within the workers team often talk about is how do we make it easy for the next really, really big social platform or really, really big startup to be built on workers?
And the answer is not necessarily listening to the already large customers.
You have to also listen to the folks who are just sitting in their dorm room or kind of the classic tropes of where do startups come from?
It's a small team or a single person.
And we need to be able to build tools that are effective for both them and the large distributed teams of hundreds of developers with lots of process.
So it's a kind of fine line to walk and having folks in our developer experts program helps us kind of walk that line a lot more effectively.
I think a big thing also is the hype around Cloudflare developer tools; people are really excited about them, which is great, and obviously we're excited too.
But we're starting to build out a formalized program where people can get the support to go out and write content; you see other cloud providers have a similar kind of thing.
And we want to be able to help people out and signal-boost what they're doing.
Even in the last couple of days, I'll see someone who is writing really amazing stuff about Pages or about Workers or things like that.
And I was just waiting for today, like, oh, that person is a perfect fit for this program.
Not only do we want to get their feedback on stuff, we want them to talk to people like Kabir and Nevi on the Pages team.
But we also just want to support them and show off the amazing stuff they're doing.
And, you know, hopefully send them free shirts and coffee mugs and whatever else we can do too.
And Albert and I have been talking about stuff even further down the line that I think is super exciting, like maybe events and exclusive mini conferences.
There are a lot of really cool ideas down the line that we are going to continue to flesh out.
Because there's just so much exciting stuff at Cloudflare all the time.
And getting our developers involved in it, I think, is going to be a huge win for everyone.
Kristian raised a really good point I should have added: we're looking to also sponsor open source software work.
And like Kristian says, other cloud providers do this, you know, like investing in frameworks.
And we know that Workers makes it really easy to just develop the next thing you want in JavaScript.
And that's something we're hoping people apply for in this program too.
We don't have everything formalized yet, but we are growing as a resource.
And I should also add, this whole program was also Kristian's brainchild.
I just sort of carried out all the recipes.
And, you know, we're almost at time, but a quick review of what you get: you have early access to features, you meet with some of the people that could be here on this call, and you get admission to a private community of power users.
We'll routinely stay in touch, and down the road there will be sponsorships for OSS work, and swag, of course.
And I know we only have a few seconds left here, but we did get a question come in.
What are your favorite tools and IDEs for developing worker scripts?
I recommend Wrangler and Wrangler 2.0, if you're curious about the next version of that.
I have a lot more that I could say on that, but you can find me, KabirCF, in the Discord and in the chat.
Workers.community to get your invite.
Workers.community.