🚀 Launch Day @Birthday Week
Presented by: Jen Vaccaro, Nancy Gao, Greg McKeon, Aaron Lisman, Steven Raden
Originally aired on September 28, 2020 @ 12:30 PM - 1:00 PM EDT
Join leading product experts as they share the new products being launched each day of the week as part of Cloudflare's 10th Birthday Week celebrations. Watch product demos, and submit your questions live on the air.
Call in with your questions: (380)333-5273
English
Birthday Week
Product Launch
Transcript (Beta)
Hey everyone, thanks so much for tuning in. This is our first announcement for birthday week this week and we're very excited to have it be workers focused.
So we're here today with Nancy Gao who's one of our product managers focusing on cron triggers and also Greg McKeon who's one of our product managers focusing on durable objects.
And then we've got Steven Raden and Aaron Lisman here on the engineering side to walk through a little bit on some of these announcements.
So thanks everyone for joining here today.
We're really excited to share what we have with workers.
So we thought we could split the time here. We'll spend the first about 15 minutes with Nancy, Steven and Aaron talking through cron triggers.
We'll go through a little bit of a demo and review some of the benefits and use cases with cron triggers.
And then we'll spend about the last 15 minutes with Greg and talk through durable objects.
All right. So if we can get started, Nancy, maybe you can give us a quick intro on what are we announcing today with cron triggers.
Yeah, thanks for the intro Jen.
Today we're super excited to announce cron triggers. This is a highly requested developer feature that I think a lot of people will find useful.
I mean, we've gotten a lot of requests for it, not just inside Cloudflare but also from developers; we've seen your tweets and everything.
So what a cron trigger is, is the ability to set a worker on a recurring schedule.
In the past, we only supported workers being triggered by HTTP requests.
But now we've introduced cron support, so workers can also run on a schedule.
So we think that it'll be super useful for developers who need to do regular maintenance or recurring jobs or any of that type of events.
Great. And then could you walk us through some of the benefits that cron triggers will have for our customers?
I think one of the coolest benefits of this feature is that it comes at no additional cost.
So a lot of our competitors will charge per trigger or have a very limited free tier or restricted somehow, but we're not adding any additional per trigger or per schedule costs for this feature.
So you can use it freely, as much as you like.
It's still subject to the typical request limits that you have on your workers account, but we're not charging extra for it.
That's great.
And then I also know we've talked about the smart infrastructure that cron triggers will run on, placing them in "cold" colos with low traffic, which is kind of unique to us, right?
Yeah, I think it's really exciting because one of the reasons that we're able to offer this feature for free is that we're running it super efficiently.
So for example, if your schedule fires at, say, 9:30 in the morning San Francisco time, the run can go to the other side of the world, to a city where it's currently nighttime and traffic is low in that area.
So in that way, we're able to use our infrastructure in a really efficient and cost effective way.
So that's great.
And then, Steven, I think you were going to share a little bit with us.
What new use cases does this enable for our customers? Yeah, overall, to echo Nancy's enthusiasm, we're very excited about this functionality.
And I think some of the things cron triggers bring to the Workers table is giving our users back a lot of time, as well as confidence in the recurring, scheduled behavior of these workers.
Some of the common use cases we can think of are around automation: saving people time on repetitive tasks.
That can include weekly backups of databases, renewing SSL certs, keeping their sites' cache fresh, pinging sites to check for uptime, or queuing things like emails to batch and send off.
One other use case we're really excited about, again because of this consistency and accuracy, is that there are a lot of sites trying to make sure they have up-to-date state and hospital recommendations for things like COVID-19.
And so we're hoping that this can really enable those kinds of products and make sure that people are getting the information they need when they need it, and that it's accurate.
So overall, beyond all these specific use cases, people have built really cool stuff on Workers already.
And so I think this just adds another dimension of capabilities of the product.
So we're excited to see what people can do with it. That's awesome.
Yeah, it'll be great to see. From our customers, and even internally when we've passed around feedback, we've gotten a lot of people asking for these periodic, recurring events and backups, things that used to require some hacks or an alternative service.
So that's really exciting to see, both for dogfooding purposes and for our customers as well.
Exactly. Great. And Aaron, I was hoping that you could share a little bit about the architecture and go through a bit of a demo.
Sure. Let me share my screen real quick.
So you should be seeing the architecture diagram from our blog post.
And it's just a quick overview of how we get these schedules to the edge and how we run them.
It starts, of course, with an API request. This could be from Dash or eventually Wrangler.
We hope to have Wrangler support for this soon. And then our API stores the schedule information into Postgres where our scheduler picks it up.
It's just running continuously in a loop checking for new schedules or schedules that might be out of date and need to go to a new colo.
And this is pretty simple right now.
As Nancy said, we just choose a colo that has low activity. But we can add logic to this and do a lot of really cool things like check the latency on sub requests from your worker and place it where it will have the lowest latency to any APIs you might be calling.
And so once we've decided which colo it should go on, we push that to the edge using Quicksilver, which is our distributed data store.
And then we have a new service running on edge nodes that checks these schedules and calls your worker when it needs to go.
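For readers who want to script this themselves, a schedule can also be set through the Cloudflare API. The sketch below is only illustrative: the endpoint path and payload shape are assumptions about the Workers cron trigger API, so check the current API documentation for the authoritative format.

```js
// Hedged sketch: updating a Worker script's cron triggers via the Cloudflare API.
// Endpoint path and body shape are assumptions; consult the API docs before use.
const ACCOUNT_ID = "<your-account-id>";
const SCRIPT_NAME = "<your-script-name>";
const API_TOKEN = "<api-token-with-workers-edit-permission>";

async function setCronTriggers(crons) {
  const url =
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}` +
    `/workers/scripts/${SCRIPT_NAME}/schedules`;
  const resp = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    // e.g. setCronTriggers(["*/30 * * * *"]) -- one entry per schedule
    body: JSON.stringify(crons.map((cron) => ({ cron }))),
  });
  return resp.json();
}
```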
And then I have a quick demo set up to show too.
This is our new schedule tab in our workers editor.
It's got a little snippet here for you. And you can see we have a new event listener.
So instead of handling fetch events, you're handling scheduled events that just have a scheduled time on them.
There's no HTTP request here. And then you can call a function like normal.
Here we're just sending a simple message to a Slack room.
So we're using a Wrangler secret here that has our Slack hook URL in it and we're just posting the message.
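For those following along, the handler being described looks roughly like the sketch below. It assumes a secret named SLACK_WEBHOOK_URL has been added with Wrangler (the name is just an example) and a cron expression such as "* * * * *" configured to fire every minute.

```js
// Minimal sketch of a scheduled Worker that posts to Slack on each trigger.
// Assumes a Wrangler secret named SLACK_WEBHOOK_URL
// (e.g. `wrangler secret put SLACK_WEBHOOK_URL`).
addEventListener("scheduled", (event) => {
  // event.scheduledTime is the time the cron trigger was scheduled to fire (ms since epoch).
  event.waitUntil(postToSlack(event.scheduledTime));
});

async function postToSlack(scheduledTime) {
  // Slack incoming webhooks accept a JSON body with a "text" field.
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `Cron trigger fired at ${new Date(scheduledTime).toISOString()}`,
    }),
  });
}
```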
And I have this configured again in our new triggers tab.
I added a cron trigger to run this every minute and just post a message to the Slack room.
You can see it's been running for a while every minute.
And you can do things with this. Like Steven said, check the status of your website.
Make sure you're not down and send an alert if you are and that sort of thing.
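As a rough illustration of that uptime-check idea, a scheduled Worker along these lines could ping a site and alert when it's down; the ALERT_WEBHOOK_URL secret and the target URL here are just placeholders.

```js
// Sketch of an uptime check run on a cron trigger. Placeholder names throughout.
addEventListener("scheduled", (event) => {
  event.waitUntil(checkUptime("https://example.com/health"));
});

async function checkUptime(target) {
  let healthy = false;
  try {
    const resp = await fetch(target);
    healthy = resp.ok; // 2xx status codes count as up
  } catch (err) {
    healthy = false; // network errors count as down
  }

  if (!healthy) {
    // Send the alert wherever you like: Slack, PagerDuty, an email API, etc.
    await fetch(ALERT_WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: `Health check failed for ${target}` }),
    });
  }
}
```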
You know, Aaron, this is Steven again. Another use that came up in a Twitter post recently was someone planning to use this to send messages to their girlfriend every morning and evening.
Oh, I saw that. That's a great demo.
Yeah. You can outsource the good morning and good night to your significant others.
It's definitely one we hadn't thought of. And certainly it's not in our docs, but I thought that was pretty funny too when I saw that in the Twitter thread.
Thanks, Aaron, for walking us through that a little bit.
I see that we're getting a question coming in, particularly for Steven. The question asks about Steven's comment on consistency and accuracy.
Can you talk more about that?
What if my worker runs every minute, but an outage causes it to be missed for 10 minutes?
Well, I'll open this up to the team too, but issues like that are hard to avoid entirely; we're certainly building in redundancy and things like that to make sure the system works.
But what I meant by consistency and accuracy is that, before we released cron triggers, the way we heard people trying to achieve the same functionality was with third-party services, and I don't think those operate as well as this built-in system.
So as long as the system works, and ours works really, really well, it will provide that consistency of scheduling, and people will be able to trust that their programs are running as expected.
And I can add to consistency and reliability is definitely a goal and something we're still working on.
So the scheduler right now, when it picks which colo your workers should run on, it gets a list of disabled colos that our SREs know are having problems and it won't put it there.
And that can move within a minute. And we have services running on the edge to make sure that the schedule runner is up all the time.
If the node it's on goes away for whatever reason, it'll bring it up on another.
So we're doing our best to provide at-least-once execution.
So your worker may run more than once, but we're doing our best to make sure it runs at least once on its schedule.
Great.
Thanks, Aaron. And are there any limitations? I know in Nancy's blog there was a limitation on the year field, like you couldn't schedule something a year out, but are there any other limitations?
Yeah, right now we have a limit of three schedules per script.
So if you need to call it more times, you'll need to use another script for that.
And we don't let you schedule with seconds.
Maybe we could do that someday, but it's limited to per-minute granularity at most for now.
Okay, per minute. Great. Is there anything more, Nancy, Steven, Aaron, that we want to share on scheduled workers, or future ideas this might lead to for triggers, before we go on to talk about durable objects?
Yeah, we designed cron triggers very intentionally to be flexible and hopefully extensible for the future.
So I think it's really exciting that we will probably be releasing new types of triggers in the future.
Like I can imagine use cases for event-based triggers or database-trigger-related events.
We haven't started working on that yet, but please stay tuned for next year, when those things will be coming out.
Great, great. So it sounds like this is just getting us started on the world of what we can be triggering.
That's great.
All right. Well, thanks, Nancy, Steven, and Aaron. Definitely stay on the line.
And then we'll go to Greg to talk a little bit about durable objects.
I do just want to say really quick that for folks tuning in, feel free to send us in questions.
I think there should be a link on the Cloudflare TV live page where you're viewing this.
So feel free to send those questions in or dial in for those questions and we can share them here.
And great. So Greg, let's hand it over to you and talk about durable objects.
And can you get us started by explaining a little bit what are durable objects?
What are we announcing here? Yeah, definitely.
So we're super excited about durable objects because this is sort of our answer to consistent state and coordination at the edge, which is something that people have been asking for in the workers platform basically since we launched.
And one of the things we really focused on here is how easy we've made it to use, and how we've tried to match the solution to the way developers already write their programs.
And so this is one of the hardest problems to solve, which is how you globally distribute data in your applications.
And we're just getting started here.
So this is sort of the initial implementation we pushed out.
That's great. And if we haven't clarified, durable objects will be in a beta program, while cron triggers are available today with Workers, in GA.
Yeah, exactly. Great. And Greg, can you talk us through a little bit more details on some of the benefits that this will provide our customers?
Yeah. So as I kind of mentioned, this is all about strongly consistent state and coordination across your workers.
So the great thing about Workers today is that you can click deploy once and your application code runs across all 200-plus points of presence just as easily as if you were deploying an application to a single data center on another provider today.
And this has sort of been a challenge for state though, right?
Because now you have your application running in 200 different data centers.
And if one of your workers wants to make a change, or you want to coordinate some change across those running workers, there hasn't really been a great way to do that.
We've had Workers KV, which has been good for storing state globally.
But that's got some challenges around eventual consistency of data and the fact that it uses last-write-wins semantics.
That means if one worker in one data center makes a change, and another worker in a different data center makes another change,
The one that arrives last will be the only one that's reflected in the database.
And they might not know about each other.
So this has been a problem for building applications that need strongly consistent semantics.
So now with durable objects, what you can do in your worker is essentially direct all of your requests to a specific object that's running in a single Cloudflare point of presence.
And this gives you a coordination point, which means you can do in-memory coordination among those requests, but you can also synchronize reads and writes to storage.
So you get access to a transactional storage API where you can access strongly consistent storage.
And this makes it super simple to implement these sorts of complex applications that would be really hard to build on a different serverless provider.
So as an example, you know, we built an IP address rate limiter in about 20 lines of code, right?
Because Cloudflare gives us the incoming IP address, and we can coordinate checking the number of incoming requests on a specific durable object.
And because all those requests typically land in the same place, we can run all that logic without the coordination piece being abstracted away from us.
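The roughly 20-line rate limiter mentioned here isn't reproduced in this session, but a sketch of the general shape might look like the following. The class name, binding name (RATE_LIMITER), the 100-requests-per-minute threshold, and the exact constructor signature are all assumptions, not the actual implementation.

```js
// Hedged sketch of an IP-based rate limiter with Durable Objects.
// Names (RateLimiter, RATE_LIMITER) and limits are illustrative only.

export class RateLimiter {
  constructor(state, env) {
    this.state = state; // state.storage is the object's transactional storage
  }

  async fetch(request) {
    // Every request for a given IP is routed to this one object, so the
    // read-modify-write below is naturally serialized.
    const now = Date.now();
    let window = (await this.state.storage.get("window")) || { start: now, count: 0 };
    if (now - window.start > 60_000) {
      window = { start: now, count: 0 }; // start a fresh one-minute window
    }
    window.count += 1;
    await this.state.storage.put("window", window);

    const allowed = window.count <= 100; // assumed limit: 100 requests/minute
    return new Response(JSON.stringify({ allowed }), {
      headers: { "Content-Type": "application/json" },
    });
  }
}

// The ordinary Worker in front: route each request to the object for its client IP.
export default {
  async fetch(request, env) {
    const ip = request.headers.get("CF-Connecting-IP") || "unknown";
    const id = env.RATE_LIMITER.idFromName(ip);
    const stub = env.RATE_LIMITER.get(id);
    const decision = await (await stub.fetch(request.url)).json();
    return decision.allowed
      ? new Response("OK")
      : new Response("Too many requests", { status: 429 });
  },
};
```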
Yeah. That's great.
Thanks for walking us through some of that. You know, one thing that I think would be interesting is you've talked about the strong consistency and the single point of coordination.
It would be great if we can talk through a little bit some of the example use cases we're seeing.
And something that really struck me when I was reading through the blog that we announced today was the sentence that Kenton wrote saying, durable objects are the missing piece in the worker stack that makes it possible for whole applications to run entirely on the edge with no centralized origin server at all.
So this idea that like, you know, all applications can run on the edge through workers is pretty significant.
So maybe you can talk us through that a little bit what it means, and what kind of use cases we could be seeing that were not available before.
Yeah, I mean, I think the best way to think about this is sort of as a coordination primitive.
So you can start from thinking about it and what it would enable for an application that's running today.
And then from there, you can sort of see what would happen if you build your entire application on top of this.
So imagine you have a document editor today that doesn't include live collaboration; you have some application that needs to write to some back-end service, and then it can present the document back once it's done that.
But there's no real like Google Docs style live collaboration in your browser.
What durable objects will let you do is basically provide that experience, right, to layer that on top of your existing application and have all the coordination run through durable objects.
Yeah, so that's the incremental use case. When you build your entire application on it, you also make use of the actual storage we're giving you.
So you can start sort of in memory, and then have these deferred calls to your own back end server.
And then when you're ready, you can move your entire application over and have the durable objects store the full state on top of that consistent storage.
So, as Kenton's line says, it really is a missing piece; the things you can build with these are sort of limitless.
And yeah, it really is a great step forward for the platform.
And could you explain some of the examples Kenton had written about? I know you had also given the bank example, where you're overdrawing: your server gets hit with a $40 charge, and then a $20 charge right after, and you only have, let's say, $60 in the bank.
I think it would help visualize and explain things a bit if you could walk through that piece.
So we can tie together the consistency piece with how KV works today.
Workers KV is globally distributed, and what Workers KV really does is cache your data in each of Cloudflare's points of presence and then write your data back to a central store that is the actual backing store for KV.
And the problem is that those caches refresh on a schedule, about once a minute.
So it's possible for a worker running in a Cloudflare data center to see stale data in KV, and then make a change based on that stale data.
The textbook example is that if I were building a bank account on top of KV, double spends would be possible: someone could access the bank account in Portland, say, while some e-commerce site accesses it from Atlanta at the same time.
And because Portland and Atlanta don't know about each other's requests, they can both make a request, both update your balance, and basically allow a charge that should have been declined.
It's important to note that those semantics are also why KV is great for globally distributing data, and great for read-mostly use cases.
Those are the fundamental downsides; it's not a problem unique to KV, it's true of any sort of globally distributed database.
And it means that KV still has its place, but when you're implementing something like that toy bank account example, where you want to synchronize the operations and guarantee that every operation knows the results of the other operations before it proceeds, you need something like durable objects to provide that.
Why don't you talk through the beta a bit? Yeah, we can talk about what we're doing.
So this is sort of a new muscle for Cloudflare. We have KV, obviously, but hosting data and running a storage system at this scale is sort of a new muscle.
So we have Kenton's blog, which is out today, and we have the docs; both of those link to a form where you can tell us a little bit about your use case and sign up.
And we're going to be launching into a limited beta, where we work pretty closely with the people who are actually using the product.
Just to see, you know, it's a bit complex to explain.
And just to see how people actually engage with it, what they build, and what they need from us to make it even easier to build on top of durable objects.
So the beta will be limited, and we're going to start to open it up more.
So make sure you tell us about your use case. Yeah, yeah.
And for those of you who have read the blog, there is a link right in there to sign up.
And also, even if you just Google durable objects, it's one of the first results that comes up, and you can go and fill out the form.
And then our team will start reviewing all of the use cases that come through and all of the requests.
And Greg, do we know how many we'll be targeting initially? Like what number of initial users we'll be looking at?
Yeah, we're not really sure.
I'd say it depends on the use case. We're going to look at the ones that we've thought through initially and that make sense.
So I think it'll be more use-case-based than any sort of set number.
But it'll be pretty small, honestly.
So don't give up hope if you're not in, you know, in the first week.
Yeah, I saw this morning, several hours ago now, so I'm sure it's only increased, but we had about 45 signups within the first 30 or so minutes.
So we're already seeing some interest, which is really exciting.
Yeah, I think that speaks a bit to the fact that if you've used Workers before, you've probably run into this issue of "where do I actually put the data that my users have?" Workers has been great for use cases that don't require state.
It's been great if you can store your state in KV, like if you have configuration data for the worker or something like that. But when it comes to actually building your whole application on top of Workers, it's been a bit difficult.
It's been more about bolting features onto an existing application.
And I think it's interesting that you can use this, you know, sort of as a coordination primitive to make your applications better today.
But then you can also actually go and build a full, complete application with that state.
That's great. And it looks like I can share my screen now.
And I do think it helps visualize things for folks on the line, since it is new.
And I think just articulating it, and having everyone kind of understand it, the visual seems to help a lot.
Can you see my screen? Yeah. Maybe you can walk us through it. The key thing this slide is trying to show is the consistency you get using durable objects.
The idea being, like the example I gave before, you have a worker running in Portland and another one running in Atlanta.
And they're both trying to decrement Alice's balance, and she doesn't have that much money in her bank account.
She has about 70 bucks, and they're both trying to charge her for a total that adds up to more than that.
And the real point to take away here is that these operations coming in from the worker are synchronized in a single durable object.
And this slide cheats a bit because it doesn't show you where the durable object runs.
We'll actually run it nearby the requests that are coming in today.
But you know, this durable object runs in one single location, it's guaranteed to be globally unique.
So then it's possible for you to do the actual charging logic, the logic that decides whether you should allow the charge or not in this one specific place.
And so by doing that, you know, you can guarantee that your updates to the storage are consistent, and you can guarantee that you don't incorrectly return success for both charges as they come in.
Thanks, Greg. And what would happen if both went through at like the exact same time?
They'd still be synchronized; you're guaranteed to have some ordering across those.
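A toy sketch of the bank-account scenario on the slide might look like this: one object per account, and every charge runs through a storage transaction so it always sees the latest balance. The class name, binding, and the $70 starting balance are illustrative, not the actual demo code.

```js
// Hedged sketch of the slide's bank-account example with Durable Objects.
export class BankAccount {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    const { amount } = await request.json(); // e.g. { "amount": 40 }

    // The transaction makes the balance read and the conditional write atomic,
    // so two charges arriving "at the same time" are still applied in some order.
    let approved = false;
    await this.state.storage.transaction(async (txn) => {
      const balance = (await txn.get("balance")) ?? 70; // Alice starts with ~$70
      if (balance >= amount) {
        await txn.put("balance", balance - amount);
        approved = true;
      }
    });

    return new Response(JSON.stringify({ approved }), {
      headers: { "Content-Type": "application/json" },
    });
  }
}
```

With this sketch, a $40 charge and then a $20 charge against a $70 balance both succeed and leave $10, and any later charge that exceeds the remaining balance comes back declined, regardless of which colo the requests entered through.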
One other thing that's interesting to talk about here is how you actually talk to these durable objects.
So the idea is, you know, you as a developer will go out and write a class.
That class exposes a handler that's basically just a fetch handler, just like any other worker would export.
And then you can actually go and make a request for the object with a specific ID.
So I would have a general class named something like BankAccount, and then I'd be able to request a particular bank account; it probably wouldn't have the ID of "Alice," it would probably have a unique ID.
But I could go out and actually request the ID associated with that durable object to speak specifically to that object.
And what this means is that durable objects scale really well across IDs, right?
Because they're essentially built on top of isolates, they have almost no overhead.
So there's really no problem scaling to millions or even billions of durable objects in your application.
It's really within a single durable object that you have potential performance issues.
And so you want to scope those durable objects as small as possible, right?
That's why in the banking example we've given here, and in the chat example, it's scoped to a single user or a single chat room, because that's really where you see the power of durable objects: how they scale out.
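On the addressing piece described above, the Worker-side routing might look something like this sketch, with a hypothetical CHAT_ROOM binding: deriving an ID from a name means the same name reaches the same single object from anywhere in the world.

```js
// Sketch of routing requests to a Durable Object by name. Binding name is assumed.
export default {
  async fetch(request, env) {
    // Use the URL path as the room name, e.g. /general -> the "general" room's object.
    const room = new URL(request.url).pathname.slice(1) || "lobby";
    const id = env.CHAT_ROOM.idFromName(room); // same name -> same globally unique object
    const stub = env.CHAT_ROOM.get(id);
    // Forward the request; the object's own fetch() handler does the real work.
    return stub.fetch(request);
  },
};
```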
Great, thanks for talking through that a little bit.
I'm going to stop sharing. Great. And so Greg, a couple of questions we had from folks on the timeline: when will we be reviewing use cases, and when will people be able to start using it?
Do we have any indication of what we might be doing there, as far as a timeline for beta and for GA?
Yeah, I think it's going to be a collaborative process. As I mentioned, we're going into a pretty limited beta at the beginning.
We're really confident in the system that we built.
But you always find out things, right? We'll spend some time figuring out what people need, what they need us to build.
And where they want to see the product scale, what's appealing to them, and what they need to really build a full application on top of Workers.
And those are the things we're going to build, right?
We're going to prioritize those and keep building from there.
So yeah, as I mentioned at the beginning, this is sort of day one; there are a lot of different directions to take this.
And we're going to kind of leave it up to our customers to tell us what they need.
And another question that I'd received was around limits.
Do we know what at least for the beta our limits will be?
Yeah, so we're going to have pretty limited sizes. The idea being, if you need to store large values, that's probably a case better suited to Workers KV.
So relatively smaller on the value size and smaller amounts of storage, at least to start, just as we're, you know, trying to get a lot of people on the platform at once, and we're continuing to build out the storage capacity we have.
Yeah, and going back to the scaling piece again: you get great scaling across different instances of durable objects, but you need to be a bit mindful of access to a single object.
But we're still pretty happy with the performance we're seeing there.
And you know, tons for us to do on the performance side, as always.
Great. And one other question. So this is going to be available, at least with beta, of course, and then early GA just for workers, right?
But maybe we could be extending this for outside of workers users.
Yeah, so on extending this outside of Workers users: using durable objects requires that you're already using Workers today.
It's not going to be its own standalone product if you're not.
Yeah, the way you access a durable object is through an edge worker still.
So your request comes in and is handled by a regular worker, and then it's routed to a durable object.
There's no real way to access a durable object without that worker today.
It's something we're looking into for the future; maybe you'll be able to do bulk import or export of data from another system, or push it to another system, something like that.
But yeah, today, you kind of start with workers.
Great. Cool. So we have only one more minute, and I do want to give a sneak peek to those on the line at what we have coming up for Birthday Week.
We have some exciting sessions; I just wanted to share for a second about the fireside chats we have.
We have Eric Schmidt, former CEO of Google, who's going to be on discussing Birthday Week at 12:30 Pacific time, and we have Eric Yuan coming on later today at 3:30.
So stay tuned on Cloudflare TV for these upcoming sessions.
And I think we're probably just about out of time. So thank you, Greg and Nancy, Aaron and Steven for sharing about these exciting announcements.
And we look forward to seeing what our customers will develop with these new features and platforms.
And we look forward to seeing what additional features we continue to add to Workers.
So thanks for tuning in today. Thank you.
Bye. Transcribed by https://otter.ai