Talking Serverless with CodePen Founders
Presented by: Rita Kozlov, Chris Coyier, Alex Vazquez
Originally aired on December 5, 2021 @ 6:00 PM - 6:30 PM EST
CodePen Co-founders Chris Coyier and Alex Vazquez talk with Workers PM Rita Kozlov about serverless trends and how CodePen is using Cloudflare Workers.
English
Serverless
Cloudflare Workers
Transcript (Beta)
Hello everyone, my name is Rita. I'm a product manager working on developer productivity at Cloudflare and today with me I have Chris and Alex.
Thanks for having us Rita.
Yeah, thanks. Yeah, super excited too. I think a ton of people are familiar with you guys either through CodePen or through CSS-Tricks, especially since that's how so many people get started with web development these days.
So I would love to, I've definitely been a fan for a long time and would love to hear a little bit about how you guys got started even working on such a cool project that I think has enabled so many developers that are just starting out.
Yeah, cheers.
I mean the product part of that is CodePen, like if you use our thing to build stuff and that's what Alex and I work on together.
We've been working together for a hot while.
Yeah, we're three minutes in. Yeah, and CodePen, you know, we were just chatting about this.
It was fun. CodePen started as kind of a weekend project, or at least that's what I convinced Alex it was.
And it was a way to put demos on CSS-Tricks.
Yeah, do you remember those days? Yeah, I mean, you know, I have my developer hubris and I was pretty sure we'd finish this thing in three or four months.
Six months tops. And, you know, we're going on eight years and still not quite done.
So it's fun. Yeah, it's been a bit.
Just a little bit of scope creep. We're talking about CodePen and it's like, you know, it's an editor in the browser with kind of a social layer to it.
So people can go there and write HTML, CSS, and JavaScript and see what you're doing real time-ish, you know, kind of stop typing for a second and see the results of that.
And that's been really fun for a lot of people because it's browser based, so you can't lose it.
You know, you get a URL to it. Other developers can follow you.
They can heart it and comment on it. It's got this kind of element of fun to it, which is great.
But of course, you know, we have millions of users who've created tens of millions of pens.
So now it's become this resource that's like, if you're looking for a way to do something in front end, it's there, you know, like probably 50 times over, which is kind of cool too.
So there's a lot of like lurker users too that just think of it as a place you come to look for 50 examples of tabs, you know, because we got them over there.
So that's been kind of fun. You know, in the early days, you know, so for one thing, it's not just HTML, CSS, and JavaScript.
There's more these days, but even in the early days you could like write Sass.
Sass is like a super popular pre-processing language for CSS.
And, you know, somebody, not me, somebody smart like Alex, would spin up a server to process Sass.
It would have one job. And we thought we were so smart for doing this, you know, let's go spin up an Amazon server somewhere that has this one job, which is to process our Sass.
So that's, you know, times are changing these days with how and why you would do that.
You know, that was kind of a softball to get us into serverless stuff, if you want to think of it that way.
Yeah, definitely. I mean, I'm actually curious, what was the itch that you were looking to scratch when you built CodePen?
Like what was it that wasn't out there? And there's certainly, I feel like nothing quite like CodePen, but from your perspective, what was the thing that was missing?
You're more in tune to your competitors when you're us, I think.
CodePen was not the first of its kind. There were other apps that were similar to it.
I think of JSBin and JSFiddle, the early ones, you know, very similar in its approach, you know, it's more different these days.
But, you know, and I loved it.
I had and still do have a JSFiddle account. And on CSS-Tricks specifically, I would make demos.
Here's how you do, you know, rounded corners that have rounded-out tabs on the bottom or something.
And then I just make an HTML file and a CSS file and FTP it up somewhere and just link people to that document.
Here's the demo, go look at it, view source if you want to see how it was done, you know, and then along comes a site like JSFiddle.
I'm like, this is a way better way to show somebody some code, way better.
So I had this moment, like, should I go back and take my hundreds or thousands of demos and port them all over to this app?
Well, you get this kind of weird feeling like, but I don't control that app.
They have no business plan. You know, like, what if I, I'm a publisher, too.
I run a business. What if I want to put an ad next to it? I should have the ability to do that, you know, if I want to kind of thing.
That was the early days. And so it evolved.
So I, that's how I roped Alex in, you know, help me out, man, you know, use your skills.
And what was the thing that got you? Yeah, I mean, honestly, I just thought it was a cool technical project.
You know, at the time, we were kind of working at a, at another company together, but we were, this was just kind of a side project.
And at the time, I remember a big part of it was processors themselves were like a new idea; the idea of transpiling from one language but then executing in another was fairly new, at least it was new to me.
So the whole idea of Sass and SCSS, and I don't even think Babel existed at the time, at least I wasn't aware of it.
So that was kind of interesting. I was like, Oh, variables in CSS, of course, that totally makes sense.
Who wants to rewrite the same color, you know, 10 times in the same CSS file?
And so at the time that that was kind of the interest, right.
And so AWS was new to us at the time, you know, I think EC2 launched in like 2009, or something like that.
And this was like 2012.
And so it was still new to developers, the idea that you could, you didn't have to buy a rack of servers dedicated to running your code, which was, you know, mind boggling.
So it's like, yeah, let's play with these APIs, let's play with this with the cloud, right?
Like there was the beginning of the cloud for us, in our world, I'm sure there was people really early adopters that had been doing it for years then.
Um, yeah. So, is it mostly still running on EC2? Is that how you guys got started building it?
Somewhat. I mean, yeah, I mean, these days, we do run plenty of like web servers and things like that for like, saving apps and stuff like that.
But the way we do processing, it's all handled through serverless.
These days, it's, we deploy our serverless processors on Lambda, and we shoot those things over.
So it gives us kind of like the security layer and isolation, but also the speed of being able to execute things rapidly.
And so that's how we've handled it today.
But we still have plenty of things running on EC2.
I remember the early days of it. First of all, these languages, some of them are written in back-end languages, or all of them are to some degree, because a lot of them are in Node.
But some of them are in Ruby, you know, Sass was originally Ruby.
And so you're doing the one thing you should never do on a web app, which is like, hey, come to our app and just write arbitrary code, and we'll execute it for you on our website. Which sounds incredibly foolish, just for security reasons, you know. Like, why couldn't somebody write code that asks the server for its security keys and emails them to me, and then I'll use it to Bitcoin mine or whatever, you know? Which obviously lots of people are trying to do.
I'm sure many people are trying to do that every day on CodePen.
So security is a big deal for us, you know, and I remember the early days of serverless, the promise of being like, well, why would we spin up a server to do this?
Why don't we have some little isolated server that has no abilities outside of itself to process this code?
So aside from the fact that it was cheaper and faster and easier and all that, it was more secure. Like, you go down the checklist of stuff that serverless brought for us, and all those things are amazing.
How could you not be interested in this? Right. I'm curious, were you, did you have any skepticism at all at first?
Like what was do you remember the first time that you heard the term serverless?
And what were the thoughts that came to mind for you?
Did you think it was going to be useful to you?
And were you instantly on board? Or did it take some warming up? Yeah, ironically, I'm a bit of a technophobe.
So the moment I hear about anything new, I'm like, that makes no sense.
First time I heard you could run a function.
I just did not get it. I felt so old. I don't know how old I was at the time.
But I was like, I think this industry is passing me up. It makes no sense.
You're going to execute one function. I upload a zip file. And I just didn't understand it.
Ironically, once I started seeing the use cases, and this is someone who literally runs code on behalf of other people, which I feel is the perfect use case for serverless functions.
I didn't get it. I didn't understand how it would apply.
It didn't seem like you could do that much. But then, when you realize that you can connect to all these events that are happening in the cloud and react to them and take action intelligently, things started to click.
And I kind of feel the same way about Cloudflare Workers in the sense that when I heard about them, too, I was like, okay, well, I'd use something like Lambda@Edge, and that takes like 30 minutes to deploy.
And so, you're a little, you know, that's a very frustrating experience.
I've moved completely away from that at this point; if we ever need to run anything at the edge, we run it on Workers.
But initially, you don't, until you start seeing the use cases for why you use them and how you use them strategically. I've always felt like, whether it was Workers at the edge, Lambda@Edge, or serverless at a data center, neither one of them made any sense to me at first.
And today, we rely on them everywhere and are only investing more effort into building on top of them.
Yeah, I can't remember the last time you're like, let's spin up an old school web server.
That's not even in our mind bank anymore. It's only piecemeal moving stuff off of them when we can, you know. We got a database running out of a serverless function the other day.
Even that's gone serverless, which is still mind blowing that that's even possible, but it is.
Yeah, I mean, we realized almost as soon as we launched workers that state and compute got to go hand in hand.
So you've got to have one to have the other. That's why we released KV. Yeah, I was gonna say, that to me is like the peanut butter and jelly of serverless.
The fact that you released your workers with the KV store, that's been one of the biggest things that made it easy to adopt.
Because at some point, we needed to manage some kind of state.
And so you go, okay, well, what can we do that can manage that many connections?
And instantly, as a developer, you're like, well, maybe we could use Redis, maybe we could do this, but then you get into latency and how long you have to respond to this web request, right?
And so when Cloudflare provides you this KV store and says, hey, this is literally purpose-made for this environment.
So like, feel free to use it, because you'll be able to respond to requests in real time.
That was a game changer for understanding how to start relying on Workers for us at CodePen. That made a huge difference.
Do you remember what the first worker was that you deployed? That kind of made the whole thing click for you?
Yeah, I think the screenshots actually were the thing.
So there's this pattern that I've attempted to recreate at different levels of success.
And this is by far my most successful, but it was kind of like the third iteration, where the pattern is, I want to have a CDN, but I want to generate the content for that CDN dynamically, which feels like a bit of an oxymoron, to be like, I want to generate static content dynamically.
But that's kind of what we do, right?
So we take screenshots of our pens for, like, previews and unfurling and a bunch of different little reasons.
And so what we did was, when we send the request, we check the KV store to see if we've already taken that screenshot.
And if we have, we just allow the request to proceed because we know that it's going to be either in the cache or the origin.
But if it's not in the KV store, we know that we need to send a request to generate the screenshot, which takes quite a bit of time.
It takes like almost three seconds. But in that time, we're generating the screenshot.
And only the first person who's hitting that request is going to get that delay.
And then after that, no one else gets the delay because we've generated the screenshot, we just kind of respond with what's in cache or the origin.
And so I've tried to recreate that use case, that pattern, for all kinds of things. I kind of look at it as recreating what Cloudinary does for images, but for other purposes.
So like, we've built packages, which is kind of a feature we're working on and still have in development, but screenshots is the example; following that pattern and being able to generate that with our Workers has been great.
It's just like it hits this URL. And it's like, do I have it? Yes, serve it.
No, generate it. There's no like middleman. It's like, incredible. It's such a clever pattern.
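The check-then-generate pattern Alex describes can be sketched as follows. This is an illustrative sketch, not CodePen's actual code: the KV namespace is simulated with an in-memory Map, and `generateScreenshot` is a hypothetical stand-in for the slow screenshot service.

```javascript
// In-memory stand-in for a Workers KV namespace binding.
const kv = new Map();

// Stand-in for the slow (~3 second) screenshot generator; instant here.
async function generateScreenshot(penId) {
  return `png-bytes-for-${penId}`;
}

async function handleScreenshotRequest(penId) {
  // Already generated? Let the request fall through to cache/origin.
  if (kv.has(penId)) {
    return { from: "cache-or-origin", body: kv.get(penId) };
  }
  // First request pays the generation cost; everyone after skips it.
  const image = await generateScreenshot(penId);
  kv.set(penId, image);
  return { from: "generated", body: image };
}
```

In a real Worker, `kv` would be a KV namespace binding (`env.SCREENSHOTS.get(...)` / `.put(...)`), and the cache-or-origin branch would simply let the request proceed instead of returning the bytes directly.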
We think. Yeah, I love that pattern, because it adds a lot of simplicity for us.
You know, honestly, unless you have something working at the edge that has some form of state that manages that, it becomes difficult; you end up having to go all the way to the origin every single time to check it.
And that's where the KV store was huge for us, to be able to do that and respond quickly; we didn't want to have a huge delay for this stuff.
Otherwise, we're pre-generating a bunch of screenshots all the time.
Which is what we used to do, you know, we'd just constantly be churning through all these screenshots, and maybe no one ever saw those screenshots.
So you're doing a bunch of work for no reason, but, you know, you're prepared for that.
So I really love that pattern. I think that's such a cloud-based pattern; like six years ago, you couldn't do that.
You know, like, I feel like when you first hear serverless, you're required to go through this little existential clever-guy moment where you're like, but there are still servers. You have to go through that.
And then you have to get over it and move on with your life.
But now finally, it's starting to feel like maybe there aren't servers, you know. Like, Workers don't really feel like servers.
Right? Lambda does. Yeah. Yeah, I mean, it is. I think of it as my personal job, sometimes to make you think about servers as little as possible, or, you know, hopefully, the next generation of developers won't even know that they're there, won't think about them at all.
And that word will just really confuse them.
It's an admirable goal.
Right, right. I like it. Um, one thing that I thought was interesting that you mentioned, Alex, was using Lambdas and then transitioning to Lambda@Edge for certain things versus Workers.
For us, I think there's this perception generally in the industry that the edge is intended only for very specific purposes or for certain use cases. And more and more,
what we're seeing is actually you can do most things on the edge that you could do centrally.
And that was the intent behind some of our announcements this week as well; especially Workers Unbound allows you to run all these meaty workloads that I think people generally don't yet associate with running on the edge.
So I'm curious, to you, what's the distinction between something that you would try to put in a Worker versus not?
Yeah, so I feel like I'm still wrapping my mind around what to do in that instance.
So I guess I could start from the perspective of like some of the things that hold me back from putting certain logic on a worker.
And, you know, when you're in your server environment, usually you've come through an authorization layer, you've come through a security layer.
And so figuring out how to start doing the authorization on the Worker, and realizing, hey, I can manage my state on the Worker, and I can validate and authorize this request.
So for us, it's not like we're updating a database on the Worker.
We're literally doing work, because we're taking the user's code, processing it, transpiling it, whatever it is that they requested, and then returning it to the browser.
And a lot of that work ends up being fairly stateless, but we don't want to expose it to the world.
You know, we want there to be some form of authentication, that this person is coming from CodePen and has a valid session.
So starting to realize, okay, we can move a lot of that work closer to our users, like when they have a 20 millisecond latency, and realizing how powerful that is. You know, I think we've followed this interesting pattern where we live near our servers.
So most of CodePen's servers happen to be in Oregon, and, you know, Chris lives in Oregon, I live in Seattle.
And so we're like, Oh, my, I mean, CodePen's just as fast as it gets.
If you live in Oregon, or Virginia, the Internet is so fast.
Yeah, we're like, what's all this latency people talk about?
So I'm still wrapping my mind around the fact that that's not where the rest of the world lives.
And how amazing a service you can provide to your users based on that.
So now that we're wrapping our minds around the idea that, hey, we can do some light authorization with a JSON Web Token, we're making sure that, hey, you came from CodePen, you have a valid session.
And now I can start doing work on your behalf.
And you don't have to go all the way to Oregon if you're, say, in Australia, and deal with that latency; that's a really incredible service.
And so we're starting to kind of untangle the CodePen of today and start kind of bringing it out to the edge.
And that's, you know, because we're wrapped up in that world, and we've been working on this thing for eight years, it's taking a bit of time.
But we've got big plans to start pushing that out further and further, just because it makes sense.
And the experience on Cloudflare, from what I understand, it takes about 30 seconds to deploy. And, you know, as far as I'm concerned, it takes 30 milliseconds to get to Oregon.
So from my perspective, it's near instant.
And it's, it's been really great so far. So that's kind of like where we're, where we're at with that.
You got to think it through, like the architecture of them.
That's not straightforward. It's not like pick up some code and move it over.
Sometimes you got to think of what's going to work and what's not.
It occurred to me, like, we have one function for exporting. That's an interesting one, isn't it?
Like, I remember when we were architecting it, we went through a number of versions.
One of them would like pick up a bunch of files, send them across the network to the function, and it would ball them up as a zip and either save it to its temporary storage or to a bucket or something, and then give you a link to it real quick so that you could download it in the browser.
And then we're like, well, that was kind of a little network-heavy for the client.
So instead, we sent a request to the function, and the function would then ask for the files itself from our GraphQL API.
And then it would get them back and it would ball them up into a zip and download them.
And it was just a different architecture, you know, like have the function ask for the files or just give them the files.
And so, like, I don't know, what makes more sense on a Worker?
Well, when it's at the edge, it almost feels like if it just had the files to work with to begin with, maybe that's faster than having it have to come all the way back and get the files.
I don't know. You just got to think that stuff out and test it.
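The two export architectures Chris contrasts can be sketched side by side. This is an illustrative sketch: `bundle` is a trivial stand-in for real zip creation, and `fetchFromApi` is a hypothetical stand-in for CodePen's GraphQL API, not its actual interface.

```javascript
// Trivial stand-in for zipping: join name:content pairs. Not a real zip.
const bundle = (files) =>
  files.map((f) => `${f.name}:${f.content}`).join("|");

// Version 1: the client pushes every file's contents across the network
// to the function, which only has to bundle them.
function exportPush(files) {
  return bundle(files);
}

// Pretend GraphQL call that returns a pen's files by id.
async function fetchFromApi(penId) {
  return [{ name: "index.html", content: `<h1>${penId}</h1>` }];
}

// Version 2: the client sends only a pen id; the function pulls the
// files itself from the API, then bundles them.
async function exportPull(penId) {
  const files = await fetchFromApi(penId);
  return bundle(files);
}
```

The trade-off is exactly the one discussed above: push is simple but heavy for the client; pull keeps the client request tiny, at the cost of the function making its own round trip, which may or may not win once the function runs at the edge.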
It's not always straightforward. Yeah. I mean, especially for user interactions, you definitely want to think about what's the way that I can get this to run where the user is without making that round trip.
And yeah, what does the state look like in that moment? Yeah, it's interesting, because for us, you know, you have that dichotomy of, we could put more power on the client, and we could send more code and execute more on the client, but then you're overloading the client for simple things.
Maybe they're just there to read, you know, like Chris said, a lot of people come to CodePen just to learn.
They're not necessarily creating, you know, and most people kind of come there to read and see what people are creating and learn from them.
So we don't want to kill that experience. We don't want to burden it with a bunch of code that also processes and executes and things like that.
So I'm starting to wrap my mind around the idea that Workers are actually a really good intermediate solution, to not overburden the browser with a bunch of code that it might not use, but still give you that really, extremely fast experience.
And I know it's not offline, but it's really highly available.
It's resilient to downtime. Because you write them in JavaScript, a lot of these functions, they could be a service worker. But then it's like, do I really want to deal with service workers?
I mean, they're so cool, but that's tricky stuff.
You know, we have not gone down that road just yet.
It almost feels like a Worker is easier and safer.
It certainly is. It's a lot easier on, on the mind to understand, okay, I control this environment.
Yeah. I know what I'm getting, that it's the new version of it. Whereas with a service worker, you've got to, like, make sure it's de-cached properly.
Yeah.
Yeah. I think that control of it is really, yeah, definitely something that we've heard from customers many times: they get into this dilemma of, do I put this code on the client?
And yeah, then it's really fast, but I lose my control over it.
And how do I make sure that it's updated? And on the flip side, at the origin, like you said, if you live in Oregon or Virginia, things are smooth and zippy, but otherwise, yeah, for the rest of the world, it might not be the greatest experience.
I think the really interesting thing about the edge is like, how close can you get the code to the client without it being literally on the client?
It's been big.
I mean, these screenshot ideas now, you know, for the first time, poor Rach in Australia, she's our super developer on our team, has always had a slow life there.
You know, she's probably used to it in Australia, but now we can have her in mind; with every single change we make, it's closer to her, you know.
I think that's really beautiful that, yeah, you know, I mean, when you travel abroad, even you realize like, oh, things are slower here.
And that's just how most people experience the Internet.
And I think it's really cool that this new generation of applications can be built to be fast and accessible to everyone.
Right. Yeah.
And I think that's one of the game changers about Workers that I'm really excited about. For a long time, I've thought about, well, you know, we're not charged for Lambdas unless we're executing them.
So what difference would it make if I deployed them to, let's say, 12 regions, and just did routing based on latency and all this stuff? And that would be better, but it's more complex for us to manage.
And the thing about Workers is, there's no deploying to 12 regions. Even if we could do that, we'd still have to churn through and make sure that everyone's updated; all of a sudden you're managing five data centers or 12 data centers with VPCs or whatever, you know. Being able to push out to however many points of presence you guys have, and not caring, lets us just focus on what we're doing.
And so that is where like, you start to see us push more logic out to the edge because of that simplicity.
It's like, that's kind of a key component of it: it has to be manageable for our brains, to be able to continue to understand our own system.
You know, we're a small company, we're seven people.
And so being able to manage that and not get overwhelmed by all the infrastructure and network, the layout of the network is huge.
I can write these things, and I'm a developer, but very, very front-end focused on our team.
And so, you know, that's my dev experience for these Workers when I'm working on them.
They're perfect. I run Wrangler. It fires up a little tester thing.
It's at a URL, with, what do you call it, like a little Chrome.
I mean, to know that the code is running in, like, a super Chrome in the sky is already kind of cool, but then you've got the dev tools down there.
And then the code that I'm writing is Node.
So like as a front end developer who writes JavaScript, this is really comfortable.
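Chris's "hardly anything to learn" point holds up: a Worker is just a fetch handler. This is a minimal illustrative sketch, written as a plain object so it runs anywhere; in a real project you would `export default worker` and run it with `wrangler dev`.

```javascript
// A minimal Worker: an object with an async fetch handler that takes a
// Request and returns a Response, both standard Web APIs.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(`Hello from ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```

For a front-end developer, every API here (`Request`, `Response`, `URL`) is the same one the browser exposes, which is why the learning curve is so shallow.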
Like, you know, there's hardly anything to learn. Yeah, that was the beauty of it, that there's hardly anything to learn.
And you can just kind of focus on the job that you're doing. Once you get over the misconceptions and misunderstandings, like I was just telling you earlier, I was really confused about the idea of execution time versus how long this request has to respond.
And so I was like, I remember seeing that I had up to 50 milliseconds.
I was like, guys, seriously, my Lambdas don't even wake up in 50 milliseconds.
So I was just not gonna bother. I did not understand the technology.
And then when you realize, I think we run over 200 million Worker executions a month.
It might be more now. And I think the average execution time is like 1.9 milliseconds or something.
And you just kind of need to experience it to understand that it's a different piece of technology.
Fundamentally, it's a different technology.
What you can do with it is different; how you get charged for it is different.
And so that was my big barrier to entry. Once I tried, you know, the screenshot idea, I saw that it worked.
And I saw the timing on it, and you realize, okay, I can start to see that I can do real work on this Worker.
And so, you know, we're just kind of getting more invested as time goes, but it kind of took getting over that hump for us to be like, okay, I can see how it goes.
And like Chris says, almost everyone on the team is now touching the Workers.
I mean, if they haven't, they all have the skills to do so, which is really awesome.
And as you kind of mentally conceptualize it, all our traffic goes through Cloudflare anyway.
So if you want a Worker to touch a little piece of it, you just can. Like, most of it doesn't.
All of it goes through Cloudflare, but most of it doesn't necessarily touch a Worker, because it just doesn't need to.
Workers are scoped to a URL pattern.
And we use it more and more. We're like, what if, on this particular page, a Worker got involved?
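That scoping lives in the Worker's route configuration; a hypothetical `wrangler.toml` sketch (names and patterns are illustrative, not CodePen's actual config):

```toml
# Only requests matching a route pattern invoke the Worker;
# everything else passes straight through Cloudflare untouched.
name = "screenshot-worker"
main = "src/index.js"

routes = [
  { pattern = "example.com/screenshots/*", zone_name = "example.com" }
]
```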
Yeah, I know that flew by. So thank you both so much for joining me.
It was really great to hear about everything that you're building and your experiences as well.
Yeah, thanks. Thanks for having us, Rita. Really appreciate it.