Dogfooding Workers at Cloudflare
Presented by: Rita Kozlov
Originally aired on December 6, 2021 @ 3:00 PM - 3:30 PM EST
Best of: Cloudflare Connect 2019 - UK
Rita Kozlov, Product Manager for Cloudflare Workers, shares how Cloudflare leverages the Cloudflare Workers serverless development platform internally, for a broad range of use cases.
English
Cloudflare Connect
Transcript (Beta)
This episode is presented by Rita Kozlov.
Hello, everyone.
So today I'll be talking to you guys about dogfooding workers at Cloudflare.
My name is Rita Kozlov. I'm the product manager for workers.
So Cloudflare Workers, Stefan did a very nice intro for me.
And if you guys were here for the earlier sessions with JGC, you might already be familiar with it.
But Cloudflare Workers is Cloudflare's serverless platform offering.
So we've taken our edge and our growing network of 180 data centers.
And we figured out for you guys how you can run code out of every single one of them.
But this is a lot of new concepts for a lot of people, right? Serverless is a relatively new paradigm.
And it's the concept of developers not having to think about the servers where applications are running.
But instead just thinking about the code.
And this is fairly new because just not too long ago we were thinking about virtual machines and containers.
Serverless at global scale is even newer, right?
Some of you have wrapped your head around the idea of running code in a single location.
Some more advanced use cases call for load balancing between two origins.
But running code out of 180 data centers around the world is kind of tough to wrap your mind around.
So what I'm very, very often asked.
And I think that was a question that JGC, John Graham-Cumming, was asked earlier on stage today as well.
How do I get started with Cloudflare Workers? Whenever I think about this question, I think about Gall's Law.
Which is the idea that any complex system that works was invariably developed out of a very simple system that worked.
People are very bad at designing complex systems from the get-go.
Getting a lot of parts of a complex machinery right is very, very difficult. So what you have to do is start simple, right?
Start from components that work and then grow from there on out.
So today I'm going to tell you guys the story of dogfooding at Cloudflare with Cloudflare workers.
And how we've learned to adopt workers step-by-step at Cloudflare.
For those of you who aren't familiar with the concept of dogfooding.
Dogfooding is the idea of consuming your own products. So rather than there being a version of the product that we use, the human food,
And another version that our customers use, the dog food,
We get to consume our own products and feel the pain of our customers and develop empathy for them.
So just like we didn't send you guys to a separate line during lunch today.
We ate the same food as you guys.
We do the same thing for our products. So Cloudflare's journey into Workers also started in a few different pieces.
Where we started working on workers for some pretty simple use cases.
But over time they developed. And from each one we learned more and more to apply to the next.
And I'm hoping you guys will take some of our learnings with you today.
And think about how you can apply them within your own organization.
So I'll start with the first use case.
This was basically the very first worker that started running in production for Cloudflare.
For deprecating old TLS. So we had this requirement.
A lot of you guys are familiar with PCI. It's a standard for anyone who accepts payments on the web.
And old versions of TLS served us really, really well.
But at a certain point we decided that they were vulnerable and not secure enough.
So it became a requirement for us to be able to accept payments from you guys.
To only support versions of TLS that were 1.2 or above. The challenge for us is that Cloudflare offers many different services.
If you've logged into our dashboard before.
You've probably seen all of these icons. Each service is managed by an entirely different team.
And each team has full flexibility to run whatever tech stack that they want.
So what we end up with are lots and lots of services.
All running their own unique tech stacks. So one way that we could have gone about solving this problem.
Is to have gone to each and every single one of these teams.
And said, hey, we're sorry, but you have to put your current project on the backlog.
This is a high priority for us for PCI compliance. You need to upgrade your tech stack to decline older versions of TLS.
This of course would have been very challenging.
And required someone to constantly monitor and check in.
And it would have impeded our productivity. Since we would have been focusing on this.
Instead of building new features. The nice thing about workers. Is that workers sit between the eyeball and your origin.
Or in our case origins plural right.
So we're able to set up a proxy. And disable the old version of TLS at the worker level.
So instead of doing it at every single stage. We could do it just once at api.cloudflare.com.
And setting up a worker there. The worker itself is actually incredibly simple.
This is the whole worker. This is not a snippet of a worker.
So I'll walk you guys through it pretty quickly. We have the event listener that listens on a fetch.
So any HTTP request that comes in. We take a look at it and we respond with the SSL block function.
We look at the TLS version that's passed in on the request.cf object.
And if it's not 1.2 or 1.3. We respond with a 403. Otherwise we fetch the request from the origin.
And continue business as usual. This is all of like eight lines of code without comments.
So this is super, super simple.
Any of you guys can do this. And it got us through the first hump of writing a worker.
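For reference, here is a minimal sketch of a worker along those lines, reconstructed from the description above rather than copied from the slide, so treat the exact wording and handler name as illustrative:

```javascript
// Sketch reconstructed from the talk: block any request negotiated over TLS below 1.2.
addEventListener('fetch', event => {
  event.respondWith(sslBlock(event))
})

async function sslBlock(event) {
  // request.cf.tlsVersion reports the TLS version the client connected with.
  const tlsVersion = event.request.cf.tlsVersion
  if (tlsVersion !== 'TLSv1.2' && tlsVersion !== 'TLSv1.3') {
    return new Response('Please use TLS version 1.2 or higher.', { status: 403 })
  }
  // Otherwise, continue to the origin as usual.
  return fetch(event.request)
}
```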
So this was a great use case for us. It helped unify many different services.
And we took no performance hit for it. Because we already had Cloudflare in front of our services.
And because Cloudflare runs in 180 data centers around the world.
If anything, it can actually accelerate the connection. If you're using things like Argo to route to your origins.
The next case study that I'm going to talk about is access on workers.
So Cloudflare access is one of our products.
Can I get a quick show of hands for how many of you are familiar with access.
Or have used access. Nice. So quite a few of you. So at Cloudflare, we practically don't use a VPN anymore.
Access is our gateway into most of our internal applications.
So every day when I go to the wiki. This is what it will look like for me.
Or something like this. Where I'll type in wiki.Cloudflare. Our internal URL. And it will take me to the sign in page through Google.
I'll select my account. And then I'm able to access the wiki.
So I'm going to dive a little bit deeper into what it looks like behind the scenes.
Step by step. So the first URL that I'm going to hit is wiki.mycompany.net.
And that will issue a 302 to the auth domain that you've selected during the sign up process.
Slash login. The login will then respond to you with a 200.
And a link to your identity provider. So let's say you choose Google.
You're then going to go to Google. Which will then redirect you yet again to the auth domain.
This time to the callback path. So now we're on the callback path.
And now we're finally able to get the auth token back from Cloudflare.
And set it as a cookie. So that we can finally access the website that we're going to in the first place.
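As a rough illustration of the first hop in that flow, an edge check along these lines could issue that initial 302 when no auth cookie is present. The auth domain, cookie name, and paths here are assumptions for the example, not the actual Access implementation:

```javascript
// Sketch only: redirect unauthenticated requests to the auth domain's /login endpoint.
const AUTH_DOMAIN = 'https://mycompany.cloudflareaccess.com' // assumed auth domain

addEventListener('fetch', event => {
  event.respondWith(checkAccess(event.request))
})

async function checkAccess(request) {
  const cookies = request.headers.get('Cookie') || ''
  if (!cookies.includes('CF_Authorization=')) {
    // No auth cookie yet: send the user off to log in, remembering where they were headed.
    const login = `${AUTH_DOMAIN}/login?redirect_url=${encodeURIComponent(request.url)}`
    return Response.redirect(login, 302)
  }
  // Cookie present: let the request through to the protected application.
  return fetch(request)
}
```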
The challenge here is that just for the login flow.
We have to go back and forth between our core service. And the user three different times.
So I just flew in from San Francisco yesterday. It took a long time.
Making that round trip three times, as you guys can imagine, adds quite a bit of lag.
So what we're able to do with workers.
Is when we're all here in London. We can actually connect to the London pop that's here.
And from the edge we can generate the token.
And start using the website right away. So these three very long round trips.
Have all of a sudden become three incredibly short round trips. That are literally within milliseconds of us currently.
Now the only bottleneck that we have.
Is the identity provider. And if you're using Google. They're probably in quite a few locations as well.
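To give a flavor of what generating the token at the edge can look like, here is a highly simplified sketch using the WebCrypto API available in Workers. The real Access token is a signed JWT and the secret handling is more involved; the signing secret, cookie name, and cookie attributes here are assumptions:

```javascript
// Sketch only: mint a short-lived, HMAC-signed token entirely at the edge,
// then hand it back to the browser as a cookie on the callback response.
async function issueToken(email, signingSecret) {
  const enc = new TextEncoder()
  const key = await crypto.subtle.importKey(
    'raw', enc.encode(signingSecret),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign'])
  const payload = JSON.stringify({ email, exp: Date.now() + 24 * 60 * 60 * 1000 })
  const sig = await crypto.subtle.sign('HMAC', key, enc.encode(payload))
  return btoa(payload) + '.' + btoa(String.fromCharCode(...new Uint8Array(sig)))
}

function callbackResponse(token, redirectUrl) {
  // Set the token as a cookie and send the user on to the app they originally wanted.
  return new Response(null, {
    status: 302,
    headers: {
      'Location': redirectUrl,
      'Set-Cookie': `CF_Authorization=${token}; Secure; HttpOnly; Path=/`
    }
  })
}
```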
One of the other options that you have. When you're using access.
Is to use a one-time nonce. So that was another source of bottleneck for us.
Because previously to get that nonce generated. Yet again you would go back to one of our core data centers.
Which are not the same as the edge data centers.
Right? There are just inevitably fewer of them. So what we were able to do.
Is actually yet again use workers. To generate the nonce on the fly. And store it in KV instead.
Just to give you guys some insight. Into how we've operated.
And worked on this iteratively too. Originally we used the cache API. Which is also a key value store.
But it's ephemeral. And it only lets you store data in the location that you accessed it from.
The reason that this approach was originally problematic for us.
Is because if you're traveling, you might hit one pop one time.
And then another time, instead of the London pop, you might connect to, let's say, the Frankfurt pop.
Right? So by using workers KV. Which is our distributed key value store.
We're able to propagate that nonce. To all of our data centers. And you're able to access it from any location.
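A sketch of that pattern, assuming a KV namespace bound to the worker as NONCES (the binding name and TTL are illustrative):

```javascript
// Sketch only: generate a one-time nonce at the edge and store it in Workers KV,
// which propagates it to every data center so any location can validate it later.
async function createNonce() {
  const bytes = crypto.getRandomValues(new Uint8Array(16))
  const nonce = [...bytes].map(b => b.toString(16).padStart(2, '0')).join('')
  await NONCES.put(nonce, 'valid', { expirationTtl: 300 }) // expire after five minutes
  return nonce
}

async function consumeNonce(nonce) {
  const value = await NONCES.get(nonce)
  if (value === null) return false // unknown or already used
  await NONCES.delete(nonce) // single use: delete on first successful check
  return true
}
```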
So one of the approaches that the access team took.
Is to split out the logic for each of the endpoints. So each endpoint gets its own worker.
This means lower risk deployments. Because you can operate on different parts.
Without necessarily impacting the rest of the service.
And it allowed individuals to work on different parts of the service. And kind of split out the work.
Without having to do all of this at once.
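As a sketch of what that split can look like, each worker only carries the logic for its own endpoint and is mapped to a matching route pattern; the route and handler below are hypothetical:

```javascript
// Sketch only: this worker would be deployed on a route like
// auth.example.com/login* and knows nothing about the other endpoints,
// which live in their own workers on their own routes.
addEventListener('fetch', event => {
  event.respondWith(handleLogin(event.request))
})

async function handleLogin(request) {
  // Login-only logic goes here; deployments to this script cannot break /callback.
  return new Response('login endpoint placeholder', { status: 200 })
}
```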
The other outstanding question, as the access team was doing this, was: what do we do about logging?
So you can use the waitUntil function. Which will basically call out asynchronously.
While your workers serve the response. So that way the end user is not having to wait.
On you to send the log line somewhere. While they're authenticating.
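A minimal sketch of that pattern, with a hypothetical LOG_ENDPOINT standing in for wherever the audit or Sentry events get shipped:

```javascript
// Sketch only: respond to the user immediately and let the log request
// finish in the background via event.waitUntil.
const LOG_ENDPOINT = 'https://logs.example.com/ingest' // hypothetical collector

addEventListener('fetch', event => {
  event.waitUntil(fetch(LOG_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: event.request.url, at: Date.now() })
  }))
  event.respondWith(fetch(event.request))
})
```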
So we're able to use this for audit logs. Which if you log in to your account in access.
You'll be able to see everyone who's used the service. And we're able to use it for our own debugging as well.
By sending them to our Sentry logs. So the result that we saw was improved performance.
Right? Because we're not making these insane round trips all the time.
Improved reliability by having fewer points of failure.
No matter how many core data centers we get, 180 edge data centers are obviously going to be much more resilient.
And we're able to try different approaches.
And quickly iterate. The third case study that I'm going to talk through.
Is building the reservation system for workers.dev.
So far we had a use case where we augmented traffic, and a use case where we enhanced the traffic.
But now we're going to build something new.
So I'm not sure how many of you guys have seen this page before. But in February we pre-announced the service.
To sign up and register for .workers.dev subdomains.
We had a few pretty simple requirements. You put in your subdomain. You put in your email.
You click reserve. And you expect it to work. We wanted to originally limit the reservations.
To one per email address. So we didn't want someone to ruin everyone's fun.
And steal every first name out there for themselves. The other requirement is that.
Only one person is allowed to grab a subdomain. We didn't want any unpleasant surprises.
Where two users were guaranteed the same thing.
But in the end only one of them was able to have it. And the other thing that was really important to us.
Was the ability to blacklist a few key subdomains. So we didn't want this to be used for phishing.
And we needed to be able to blacklist.
Things like admin.workers.dev or ssh. Some of the challenges that we're facing.
As we're thinking about this. We didn't know how many signups we were going to get.
Or what the traffic spike was going to look like. With services like this.
It's often a bit of a grab bag. And everyone's trying to register. The other thing was.
We knew that this was going to be a temporary service. Up until we launched.
The actual .workers.dev service. And so we didn't want something cumbersome.
Or that we would have to maintain long term. The other thing was. We wanted to launch this within a month.
And so we needed to get something working. And up and running very, very quickly.
So again. The flow looks like this. You reserve a subdomain.
You receive an email. You need to confirm it. And then once it's confirmed.
It's been reserved. So that you can later claim it. We used two workers for this.
One to reserve the actual subdomains. And the other to validate your email.
So the workers to reserve the subdomains look like this. When you clicked on reserve.
You hit a worker that was running on workers.dev. Slash reserve. And we then used Firestore to store the actual reservations.
As a quick note. We thought about using KV versus Firestore, since we already have an offering for this.
But because of the eventually consistent nature of KV,
It does allow for the possibility. That if I'm in London.
And someone else is in San Francisco. And we both register the same subdomain.
At the same time. That we'll both be guaranteed it originally. But in the end.
Obviously only one of us gets it. So we needed a centralized data store.
So we chose Firestore for this. And I think this actually really nicely tells the story.
Of workers being able to connect to any property on the Internet. Right?
You can use our products. And we tried to build them out. And make it as easy as possible for you to use them.
But you don't have to. So the worker would have to authenticate to GCP.
Because we didn't want anyone else to write or read from it.
It needed to check to make sure the subdomain wasn't taken. That it wasn't blacklisted.
And then save the reservation.
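A sketch of that check-and-save step against the Firestore REST API. The project ID, collection name, and blacklist contents are assumptions; the key point is that creating the document with an explicit documentId fails if it already exists, which is what rules out double bookings:

```javascript
// Sketch only: reserve a subdomain by creating a Firestore document named after it.
const PROJECT_ID = 'my-gcp-project' // assumed project
const BLACKLIST = new Set(['admin', 'ssh', 'www']) // illustrative, not the real list

async function reserveSubdomain(subdomain, email, accessToken) {
  if (BLACKLIST.has(subdomain)) {
    return new Response('That subdomain is not available.', { status: 400 })
  }
  // Passing documentId makes this a create rather than an upsert:
  // Firestore rejects the request if the document already exists.
  const url = `https://firestore.googleapis.com/v1/projects/${PROJECT_ID}` +
    `/databases/(default)/documents/reservations?documentId=${subdomain}`
  const resp = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${accessToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ fields: { email: { stringValue: email } } })
  })
  if (resp.status === 409) {
    return new Response('Sorry, that subdomain is already taken.', { status: 409 })
  }
  return new Response('Reserved!', { status: 200 })
}
```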
You can find the worker that we used for the GCP authentication part in the blog post that I've referenced. But as you can see.
It's pretty easy for us to assemble the data that we have into JSON.
And then we use the node-jose library to actually create and sign the JWT.
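The GCP authentication part boils down to signing a service-account JWT and exchanging it for an access token. Here is a sketch of the signing step with node-jose bundled into the worker; the scope and audience values are assumptions to check against Google's documentation:

```javascript
// Sketch only: build and sign a GCP service-account JWT with node-jose
// (bundled into the worker, e.g. with webpack).
import jose from 'node-jose'

async function createGcpJwt(serviceAccountEmail, privateKeyPem) {
  const now = Math.floor(Date.now() / 1000)
  const claims = {
    iss: serviceAccountEmail,
    scope: 'https://www.googleapis.com/auth/datastore', // Firestore scope (assumed)
    aud: 'https://oauth2.googleapis.com/token',         // token endpoint (assumed)
    iat: now,
    exp: now + 3600
  }
  const key = await jose.JWK.asKey(privateKeyPem, 'pem')
  return jose.JWS.createSign({ format: 'compact', fields: { typ: 'JWT' } }, key)
    .update(JSON.stringify(claims))
    .final()
}
// The signed JWT is then POSTed to the token endpoint to obtain the
// access token used for the Firestore requests above.
```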
The second worker that we used was the worker to verify emails. So, pretty similar setup.
We had to generate a verification token. And store it alongside the email in Firestore.
We then used the Mandrill API to send the email. When you clicked on the URL in the email.
It would then take you to this link that would confirm the token. Against the email that matched up.
And send it to a worker to confirm your registration.
And then we would hold it for you until you were ready to claim it.
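A sketch of the verification step. The Mandrill endpoint and payload shape are written from memory of its API, and the confirmation URL, sender address, and MANDRILL_KEY secret are assumptions:

```javascript
// Sketch only: generate a verification token and email a confirmation link via Mandrill.
async function sendVerificationEmail(email, subdomain) {
  const bytes = crypto.getRandomValues(new Uint8Array(16))
  const token = [...bytes].map(b => b.toString(16).padStart(2, '0')).join('')
  // The token would also be written to Firestore alongside the email, as described above.
  const confirmUrl =
    `https://reserve.example.com/confirm?email=${encodeURIComponent(email)}&token=${token}`
  await fetch('https://mandrillapp.com/api/1.0/messages/send.json', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      key: MANDRILL_KEY, // assumed secret binding
      message: {
        to: [{ email }],
        from_email: 'noreply@example.com',
        subject: 'Confirm your workers.dev reservation',
        text: `Click to confirm ${subdomain}.workers.dev: ${confirmUrl}`
      }
    })
  })
  return token
}
```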
So this led to a successful launch. It was seamlessly scalable, whether we were building this to serve a hundred users.
Or a million users. The code would look exactly the same.
We got no double bookings. We don't need to maintain it anymore. Now that we've launched.
This is something that was perfect for a single use. And it can be used with any APIs or cloud providers.
So we used the Mandrill API for the email. And Firestore for storing the records.
So what did we learn from this experience. Of using workers for these three cases.
As I mentioned earlier. Start small and move functionality to the edge deliberately.
I see customers try these big migration projects.
And it's really, really hard to keep the steam going. You need that little bit of dopamine along the way.
To make sure that you're making progress. But you also need to be able to learn quickly.
And apply what you've learned to the next iteration.
The other thing is. Break up endpoints into multiple workers. This will allow you to deploy frequently.
And work in small and independent groups. Workers have allowed us to move very, very quickly.
So we actually use workers a lot. Even outside of these three use cases.
Products such as KV itself were actually built with workers.
And it's really allowed our engineering velocity to increase. We're able to move much, much more quickly.
The last thing is. New projects are great for workers. It's obviously easy to fall back to technologies.
That you're familiar with. When you're trying to get things out the door.
But they're also a great opportunity to experiment.
So highly recommend it. Thank you.
BookMyShow has become India's largest ticketing platform, thanks to its commitment to the customer experience and technological innovation.
We are primarily a ticketing company.
The numbers are really big. We have more than 60 million customers. Who are registered with us.
We're on 5 billion screen views every month. 200 million tickets over the year.
We think about what is the best for the customer. If we do not handle the customer experience well.
Then they are not going to come back again. And BookMyShow is all about providing that experience.
As BookMyShow grew. So did the security threats it faced.
That's when it turned to Cloudflare. From a security point of view.
We use more or less all the products and features that Cloudflare has. Cloudflare today plays the first level of defense for us.
One of the most interesting and aha moments was.
When we actually got a DDoS. And we were seeing traffic burst up to 50 gigabits per second.
Usually we would go into panic mode and get downtime.
But then all we got was an alert. And then we just checked it out.
And then we didn't have to do anything. We just sat there and looked at the traffic peak.
And then being controlled. It just took less than a minute for Cloudflare to kind of start blocking that traffic.
Without Cloudflare we wouldn't have been able to easily manage this.
Because even at our data center level.
That's the kind of pipe that, you know, is not easily available. We started with Cloudflare for security.
And I think that was the aha moment. We actually get more sleep now.
Because a lot of the operational overhead is reduced. With the attack safely mitigated.
BookMyShow found more ways to harness Cloudflare. For better security, performance and operational efficiency.
Once we came on board on the platform.
We started seeing the advantage of the other functionalities and features.
It was really, really easy to implement HTTP/2. When we decided to move towards that.
With Cloudflare Workers, which is, you know, computing at the edge, we can move that business logic that we have written custom for our applications.
To the Cloudflare edge.
One of the most interesting things we liked about Cloudflare was.
Everything can be done by the API. Which makes almost zero manual work.
That helps my team a lot. Because they don't really have to worry about what they're running.
Because they can see. They can run the test. And then they know they're not going to break anything.
Our teams have been you know able to manage Cloudflare on their own.
For more or less anything and everything. Cloudflare also empowers BookMyShow.
To manage its traffic across a complex, highly performant global infrastructure.
We are not only running a hybrid strategy. We are running a hybrid and multi-cloud strategy.
Cloudflare is the entry point for our customers. Whether it is a cloud in the back end.
Or it is our own data center in the back end. Cloudflare is always the first point of contact.
We do load balancing as well. We have multiple data centers running.
Data center selection happens on Cloudflare. It also gives us fine-grained control.
On how much traffic we can push to which data center. Depending upon what you know is happening in that data center.
And what is the capacity of the data center.
We believe that you know our applications. And our data centers should be closest to the customers.
Cloudflare just provides us the right tools to do that.
With Cloudflare, BookMyShow has been able to improve its security.
Performance. Reliability. And operational efficiency. With customers like BookMyShow.
And over 20 million other domains that trust Cloudflare.
With their security and performance. We are making the Internet fast.
Secure. And reliable for everyone. Cloudflare. Helping build a better Internet.
No one likes being stuck in traffic.
In real life or on the Internet. Apps. APIs. Websites. They all need to be fast to delight customers.
What we need is a modern routing system for the Internet. One that takes current traffic conditions into account.
And makes the highest performing, lowest latency routing decision at any given time.
Cloudflare Argo does just that.
I don't think many people understand what Argo is. And how incredible the performance gains can be.
It's very easy to think that a request just gets routed a certain way on the Internet no matter what.
But that's not the case.
Like there's network congestion all over the place which slows down requests as they traverse the world.
And Cloudflare's Argo is unique in that it is actually polling what is the fastest way to get all across the world.
So when a request comes into Zendesk now it hits Cloudflare's POP.
And then it knows the fastest way to get to our data centers.
There's a lot of advanced machine learning and feedback happening in the background to make sure it's always performing at its best.
But what that means for you, the user, is that enabling it and configuring it is as simple as clicking a button.
Zendesk is all about building the best customer experiences.
And Cloudflare helps us do that. What is a bot?
A bot is a software application that operates on a network. Bots are programmed to automatically perform certain tasks.
Bots can be good or bad.
Good bots conduct useful tasks like indexing content for search engines, detecting copyright infringement, and providing customer service.
Bad bots conduct malicious tasks like generating fraudulent clicks, scraping content, spreading spam, and carrying out cyber attacks.
Whether they're helpful or harmful, most bots are automated to imitate and perform simple human behavior on the web at a much faster rate than an actual human user.
For example, search engines use bots to constantly crawl web pages and index content for search, a process that would take an astronomical amount of time for any human user to execute.
Hi, we're Cloudflare.
We're building one of the world's largest global cloud networks to help make the Internet faster, more secure, and more reliable.
Meet our customer, HubSpot.
They're building software products that transform the way businesses market and sell online.
My name is Carrie Muntz, and I'm the Director of Engineering for the Platform Infrastructure teams here at HubSpot.
Our customers are sales and marketing professionals.
They just need to know that we've got this.
We knew that the way that HubSpot was growing and scaling, we needed to be able to do this without having to hire an army of people to manage this.
That's why HubSpot turned to Cloudflare.
Our job was to make sure that HubSpot, and all of HubSpot's customers, could get the latest encryption quickly and easily.
We were trying to optimize SSL issuance and onboarding for tens of thousands of customer domains.
Previously, because of the difficulties we were having with our old process, we had about 5% of customers SSL enabled.
And with the release of version 68 of Chrome, it became quickly apparent that we needed to get more customers onto HTTPS very quickly to avoid insecure browsing warnings.
With Cloudflare, we just did it, and it was easier than we expected.
Performance is also crucial to HubSpot, which leverages the deep customization and technical capabilities enabled by Cloudflare.
What Cloudflare gives us is a lot of knobs and dials to configure exactly how we want to cache content at the edge.
And that results in a better experience, a faster experience for customers.
Cloudflare actually understands the Internet.
We were able to turn on TLS 1.3 with zero round-trip time with the click of a button.
There's a lot of technology behind that. Another pillar of HubSpot's experience with Cloudflare has been customer support.
The support with Cloudflare is great.
It feels like we're working with another HubSpot team.
They really seem to care. They take things seriously. I've filed cases and gotten responses back in under a minute.
The quality of the responses is just night and day difference.
Cloudflare has been fantastic. It was really an exciting, amazing time to see when you have teams working very closely together, HubSpot on one side and Cloudflare on the other side, on this mission to solve for your customers' problems, which is their businesses.
It really was magic. With customers like HubSpot and over 10 million other domains that trust Cloudflare with their security and performance, we're making the Internet fast, secure, and reliable for everyone.
Cloudflare, helping build a better Internet.