Using Cloudflare Workers with the Backblaze B2, S3-compatible API
Presented by: Tim Obezuk
Originally aired on August 28, 2021 @ 10:30 PM - 11:00 PM EDT
Learn how to use Cloudflare Workers alongside Backblaze B2 — its S3-compatible API.
English
Workers
Storage
APIs
Transcript (Beta)
Hello, Cloudflare TV viewers. Thanks for joining my session today. We're going to change the tone a little bit away from the legal content and for the next 30 minutes you're going to be stuck with me and we're going to be talking about a couple of my favorite things.
We're going to be talking about writing code and saving money, and how we can do that using Cloudflare Workers and one of the Bandwidth Alliance partners, Backblaze B2.
So a bit about myself. My name is Tim. I'm in Australia and I'm really excited to be talking today and working at Cloudflare because I feel like I have one of the best jobs in the world working on the solutions engineering team where we help all of Cloudflare's enterprise customers solve problems and make the most of their Cloudflare usage.
So today I'm going to talk about Cloudflare Workers and the Backblaze B2 API.
It's gonna be a bit of a technical talk which means we're going to be looking at a bit of code.
We're not going to really be writing code but we'll be using code and we're going to be using cloud storage as well.
So for anyone who's unaware about the Bandwidth Alliance, Bandwidth Alliance is Cloudflare's partnership with a range of different hosting providers and it covers an enormous number of different providers including DigitalOcean, Azure, GCP but one we're going to be talking about today is Backblaze.
The whole premise and the idea behind the Bandwidth Alliance is to reduce the cost of egress fees.
A lot of people think about the cost of storing data, but they forget about the first-mile costs of getting the data from their storage provider through to their content delivery network. This is where the Bandwidth Alliance comes in: we know that the cost of us running a small bit of fiber between one storage provider and our network is quite small, and we want to pass those savings on to you using the Cloudflare platform.
So today we're going to look at how we can combine one of the Bandwidth Alliance partners, Backblaze, with Cloudflare Workers which is our serverless compute platform.
For anyone who's unaware of Workers, Cloudflare Workers is essentially where you can write a small piece of JavaScript code and deploy it automatically to Cloudflare's global network, which today spans 200 cities in 90 countries around the world.
The great thing about writing this code is you don't need to think about what to do if you want to expand to a new city or a new region.
As Cloudflare grows its network and deploys more infrastructure your code automatically gets served closer to the users and delivers a really fast performing application.
Everyone can sign up for workers.dev for free and get their own mydomain.workers.dev subdomain to play with and you can get a hundred thousand free requests to use every single day.
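To make that concrete, here's about the smallest Worker you can write in the Service Worker style the talk uses. This is an illustrative sketch, not code from the session; the handler is pulled out into a plain function, and the listener registration is guarded so the snippet also loads outside the Workers runtime.

```javascript
// The fetch handler is a plain function so it can run (and be tested)
// outside the Workers runtime too.
function handleRequest(request) {
  return new Response("Hello from the edge!", {
    headers: { "content-type": "text/plain" },
  });
}

// Register the handler when the runtime provides addEventListener;
// in a Worker, this runs for every incoming request at the edge.
if (typeof addEventListener === "function") {
  addEventListener("fetch", (event) => {
    event.respondWith(handleRequest(event.request));
  });
}
```

Deploying this with Wrangler to a workers.dev subdomain is all it takes to have it served from every Cloudflare location.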
The reason we're talking about Cloudflare Workers and Backblaze today is because quite recently Backblaze introduced support for an API that's quite common across the storage industry: the S3 API.
If you're curious about learning more about this in detail I'd recommend having a look at the Cloudflare blog where we recently blogged about the history of the S3 API and some of the challenges that different businesses run into migrating storage providers.
Backblaze announcing support for the S3 API was really exciting because, previously, if you were using a storage provider like AWS S3 and you wanted to migrate to a more cost-effective alternative, you had to change the way your application was programmed to interact with the new storage provider, and that means running into both engineering costs and transition costs.
So Backblaze announcing the S3-compatible API enabled a raft of legacy applications and legacy code bases to support its storage automatically, and this is really exciting because Backblaze is one of the cheapest storage providers you can use.
It works out to be about five dollars a terabyte to store content which is one of the cheapest on the market.
So today we're going to look at how to integrate these two things together.
Backblaze's S3-compatible API is actually really easy to use with Cloudflare Workers.
Typically, when you're calling it from a Worker, you'll want to sign each API request.
So today we are going to be using the aws4fetch library, which is an open source library that's really lightweight and really easy to integrate into your Cloudflare Workers environment.
I'm just going to switch over to my editor here.
So today we're going to be using Wrangler.
The Cloudflare Workers SDK includes a command-line tool called Wrangler, and we'll use its generate function in a moment.
So today we're going to be using Wrangler to host a very basic static website, using Backblaze B2 as the storage provider and Cloudflare Workers as the web server to serve the content.
So if I go into Backblaze B2 I can show the kind of content I've got available.
The great thing about Backblaze is you can sign up for free and get 10 gigabytes' worth of storage out of the box.
So if you're a new developer wanting to experiment with this, you can use both Cloudflare Workers and Backblaze B2 for free just to get started and experiment.
I've created my own Backblaze B2 storage bucket.
It's really easy to do: go to their website, create your account, then head into the B2 section and create a new bucket.
The one I'm using is one I've already pre-rolled, but you can call yours anything you'd like; just create it as a private bucket.
Today we're going to be showing some content publicly but it's okay for it to be marked as private because Cloudflare Workers is going to securely sign requests to this content which will give us much more control over the way we're delivering it.
If I look inside my bucket the great thing about Backblaze is there's a web UI where you can go in and look at the kind of content that you've got.
So I'm going to jump in here and I can see I've got a bunch of files stored inside my Backblaze bucket.
These are secure. No one else can get into it without my credentials and I can store anything I'd like here.
So I've put in an index.html file and a couple of pieces of media content, and that is the same content that we've got here in this public HTML folder.
All of this has already been uploaded into Backblaze B2.
So how am I going to get this from Backblaze B2 into a functional website?
So I've already uploaded this content and created the web server, and I've got this website here which is running entirely in Cloudflare Workers.
It's running live, and Cloudflare Workers is actually signing a request back to B2 every single time I go to a URL here. I'll show you what this worker script looks like.
It's really quick and easy to connect Cloudflare Workers to B2.
So as I mentioned before we're using Wrangler which is the command-line SDK and to get started we're going to use the Wrangler generate command.
There's a range of different templates you can use with Wrangler, and I'm going to use one that I pre-created called the worker signed S3 template.
So now that I've created it I can see I've got a whole bunch of files created in my editor which I can use to manipulate and modify how I'm connecting to my storage provider.
The great thing about using an S3-compatible API like the one B2 provides is that it means your code is portable.
You can write code for one storage provider and then use it for another storage provider and move it around really easily.
So we've got this logic here, which was originally built for interacting directly with AWS S3, but today we're going to adapt it to work with B2's S3-compatible API.
As I mentioned before, we're using the aws4fetch library, a really lightweight library that you can use inside Cloudflare Workers to generate signed requests and securely fetch the content from the storage provider.
A lot of people, when they first start using Workers, try bundling the official AWS SDK directly into Cloudflare Workers, but they find that that library is quite heavyweight and it leaves you with a very large bundled worker script.
This file is only 18 lines long but if you bundle an external library it can get up to tens of thousands of lines long and many many megabytes in size.
So using a lightweight library is great in the Cloudflare Workers environment because when you run a Cloudflare worker you're actually deploying this code to Cloudflare's global network all over the world.
You don't want to be deploying very large files across the entire globe and that's why we chose to use the service workers framework in order to make them really lightweight, really fast and not deal with any sort of cold start times.
So this is the Cloudflare worker that we're going to work with.
It's a very basic script.
The way Cloudflare Workers function is they operate using the Service Workers API.
So if you're a traditional JavaScript front-end type developer this might look really familiar to you.
The only real difference is we've taken the service workers functionality and brought it to Cloudflare's global edge.
So we're going to include the AWS library and we're going to configure this with some environment variables.
These variables will be automatically populated by Cloudflare when the code runs live and I'll show you in a second about how we include those variables.
Every time a request is made we're going to figure out the URL, modify it to use the host name for our target bucket which I'll show you in a second.
Now we're going to generate a signed request using the AWS Signature Version 4 scheme for signing URLs.
And lastly we're going to fetch this request and return it back to the eyeball so that they can get the content.
This is really powerful because it means Cloudflare Workers can securely request the content and return it publicly to the user, while only your code running on Cloudflare is able to communicate with the storage bucket directly.
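The flow just described can be sketched roughly like this. It's a minimal sketch, not the template's exact code: the bucket host is a made-up example, and the aws4fetch signing step is shown as a comment so the URL-rewriting logic stands on its own.

```javascript
// Hypothetical bucket host (your-bucket-name.your-b2-s3-endpoint).
const BUCKET_HOST = "my-bucket.s3.us-west-002.backblazeb2.com";

// Rewrite an incoming request URL so its host points at the B2 bucket,
// keeping the path (the object key) intact.
function toBucketUrl(requestUrl, bucketHost) {
  const url = new URL(requestUrl);
  url.hostname = bucketHost;
  return url.toString();
}

// In the real worker, you would then sign and fetch, roughly:
//   const client = new AwsClient({ accessKeyId, secretAccessKey }); // aws4fetch
//   const signed = await client.sign(toBucketUrl(request.url, BUCKET_HOST));
//   return fetch(signed);

console.log(toBucketUrl("https://demo.workers.dev/video.mp4", BUCKET_HOST));
// → https://my-bucket.s3.us-west-002.backblazeb2.com/video.mp4
```

Because the signature is computed inside the Worker, the credentials never reach the visitor; they only ever see the public URL on your zone.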
And coming back to saving costs: if you're trying to save money, you don't want someone to be able to fetch content directly from your storage bucket, because that defeats the purpose of putting a content delivery network in front of it.
So this allows you to put your own logic in between the storage providers and ensure that it's securely delivered.
So I mentioned before we've got these variables here and we need to manipulate these too.
We need to inject the content that we have for our bucket that we've created.
Before I was showing you in Backblaze, we've got our bucket here so we've got a few parameters that we'll use to populate our configuration.
Within Wrangler you can include a wrangler.toml file.
This is used to populate environment specific variables as well as configure your project to deploy to particular routes.
So we can see here I've configured mine to point this worker script to everything for my particular subdomain but I'm also injecting a couple of environment variables here.
The first one I need is my S3 endpoint. The S3 endpoint when you're using B2 is the endpoint they provide here, as you can see, but you also want to prefix it with your bucket name, so it becomes your-bucket-name.your-s3-endpoint.
That will be your bucket host.
Then you also want your B2 secret key and your access key.
To do that you'll go into your app keys and generate a new key in here. So you create a new key, you might scope it to a particular bucket.
Say maybe you only want this to read, right?
That's a great way to apply security to these keys because it doesn't really make sense for Cloudflare to have write access to the storage bucket.
However if you are building a function where you might want users to upload directly to your storage bucket it's entirely possible that you do want to give it read and write keys because the worker might upload content directly to your bucket so you don't have to have them upload to an intermediate web server.
So you get your key and then you'll end up copying it in here.
You'll define these variables.
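Put together, a wrangler.toml for this kind of setup looks roughly like the following. All values are placeholders, and the variable names are illustrative examples rather than the template's exact ones:

```toml
# Illustrative sketch only; substitute your own values.
name = "b2-worker"
type = "webpack"
account_id = "<your-account-id>"

[env.dev]
workers_dev = true

[env.dev.vars]
AWS_ACCESS_KEY_ID = "<b2-application-key-id>"
AWS_SECRET_ACCESS_KEY = "<b2-application-key>"
AWS_S3_BUCKET = "<bucket-name>.<b2-s3-endpoint>"
```

Rather than committing keys like this, you can also store them with wrangler secret put, which keeps them out of source control entirely.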
Once they're defined, this is where you can start uploading the content. One thing to keep in mind: while I have copied and pasted my credentials here, it is possible to add keys to Cloudflare as secrets, so you don't need to include them in your source code at all.
Wrangler is a really great command-line tool for managing your work environments.
So if you're the kind of developer who never actually wants to log into a web UI and press any of the buttons you can do everything entirely on the command line.
I've already done this, but one of the first things you would do is configure Wrangler.
When you do that, it will take you down a path to generate your own Cloudflare API token and include that in your local Wrangler configuration.
Once you've done that you'll be able to run some functions like deploying the script to Cloudflare's network.
First, I'll move into my project directory.
When you're experimenting with code, it often makes sense to preview it.
You can use wrangler preview to open up a web browser with your worker code all bundled, ready for you to experiment with.
One thing to keep in mind as well: because this particular project bundles an external library, you aren't going to be able to just copy the code directly into Cloudflare Workers and run it.
You actually need to run a bundle step to combine the third-party S3 signing library into your Cloudflare worker, and then that final bundled script can be uploaded to the edge, where it functions as one combined package.
I'll show you a quick example for what that looks like.
There's the final bundled script that's all minified and prepared for being uploaded to the edge.
That includes both your code logic and also the S3 library.
We've got our scripts and we want to deploy it.
I'm going to manipulate this a little bit.
I'm going to be a bit brave and I'm going to deploy some serverless code.
The way we do that is with Wrangler. We're going to use wrangler publish --env dev, because when we looked at my wrangler.toml file before, we saw that we were using development environment variables.
So Wrangler is doing a few things here.
Wrangler is making sure it's all up-to-date.
It's also bundling our code and then it's uploading it and deploying it to the Cloudflare edge.
Now that this is done, I can go here and see "hello world", and that's exactly what I expected, because I have this worker code returning just a response that says hello world.
It's not very fun, though, so I'll comment that out and redeploy it.
This is where we can be a bit brave, deploying code live on TV.
We can see it's deployed again, and it will come through. So now, instead of the code just returning a response directly to the eyeball, it is going to proxy the request and sign a request back to Backblaze B2, fetch the content through Cloudflare's global content delivery network, and then serve it to the eyeball, who doesn't need credentials at all.
Just so anyone who's not aware: if you're looking for a video for testing, this is Big Buck Bunny, which is a Creative Commons-licensed video that you can use.
So for anyone who's new to Cloudflare, one of the things I'd really recommend is adding custom response header columns to your Chrome developer tools.
Here we can see the cf-ray header and also the cf-cache-status header.
You can see these requests going through my closest point of presence, and the cache status they came through with.
So we've written some code, we've signed some content and we've pulled some content from Backblaze B2.
We also deployed it globally to 200 data centers around the world in just a matter of seconds.
One of the great things about using Cloudflare Workers is if you're trying to build some advanced logic to control who should see your content, Cloudflare Workers is a place where you can do this.
Let's say you wanted to also validate that the user requesting your content should actually get it.
You can do that by adding your own custom logic in here to say: if the user has a cookie, you could allow them to receive the content.
But if they're not authorized, you could return a custom response to say please go away and come back when you have an authenticated request.
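As a hedged sketch of that idea, here's one way the check could look. The cookie name "session" and the messages are made up for illustration; a real deployment would validate the cookie's value, not just its presence.

```javascript
// Illustrative check: is there a "session" cookie on the request?
// (The cookie name is an assumption for this example.)
function isAuthorized(request) {
  const cookies = request.headers.get("Cookie") || "";
  return cookies.split(";").some((c) => c.trim().startsWith("session="));
}

async function handleRequest(request) {
  if (!isAuthorized(request)) {
    // Unauthorized visitors get a custom response instead of the content.
    return new Response("Please come back with an authenticated request.", {
      status: 403,
    });
  }
  // Otherwise, sign and proxy the request to the storage bucket as usual.
  // (Placeholder response here so the sketch stands alone.)
  return new Response("authorized content");
}
```

Because this logic runs at the edge, unauthorized requests are rejected before any signed fetch to the bucket ever happens.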
So there really is a lot of power and control over what you can do here.
So that was all I wanted to show today.
I have one question that I see has come through which I'll answer.
Great.
So the question is, if I wanted to have multiple redundant S3 storage providers, could I load balance across more than one of them using Workers for the purposes of having high availability and tolerance?
Is there a better way to get performance by using whichever backing S3 storage provider is closest to users?
And yes.
One of the great things about Cloudflare Workers is you're not limited to just doing one request.
In fact, a worker can do multiple requests. You can have a worker establish a request to multiple different storage providers and whichever one returns a successful response first, it could serve the content from that particular bucket.
This is a really great way to race the clouds and make sure that if you're going to pull content from a storage provider, the one that's the fastest is able to do it.
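A minimal sketch of that racing pattern, assuming modern runtime support for Promise.any (the origin URLs are hypothetical, and in a real worker the fetcher would be the signed-fetch wrapper for each provider):

```javascript
// Request the same object from several storage origins and serve
// whichever succeeds first.
async function raceOrigins(path, origins, fetcher = fetch) {
  const attempts = origins.map(async (origin) => {
    const res = await fetcher(origin + path);
    // Treat non-2xx responses as failures so a healthy origin can win.
    if (!res.ok) throw new Error(`${origin} returned ${res.status}`);
    return res;
  });
  // Promise.any resolves with the first fulfilled attempt and rejects
  // only if every origin fails.
  return Promise.any(attempts);
}
```

In the worker's fetch handler you'd return the result of raceOrigins directly, perhaps with a fallback Response if all origins fail.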
Right.
Those are all the questions we had today. So thank you everyone for watching. It's been a lot of fun and have fun playing with Workers.
The release of Workers Sites makes it super easy to deploy static applications to Cloudflare Workers.
In this example, I'll use create-react-app to quickly deploy a React application to Cloudflare Workers.
To start, I'll run npx create-react-app, passing in the name of my project.
Here, I'll call it my-react-app. Once create-react-app has finished setting up my project, we can go into the folder and run wrangler init --site.
This will set up some sane defaults that we can use to get started deploying our react app.
wrangler.toml, which we'll get to in a second, represents the configuration for my project, and workers-site is the default code needed to run it on the Workers platform.
If you're interested, you can look in the workers-site folder to understand how it works.
But for now, we'll just use the default configuration.
For now, I'll open up wrangler.toml and paste in a couple configuration keys.
I'll need my Cloudflare account ID to indicate to wrangler where I actually want to deploy my application.
So in the Cloudflare UI, I'll go to my account, go to workers.
And on the sidebar, I'll scroll down and find my account ID here and copy it to my clipboard.
Back in my wrangler.toml, I'll paste in my account ID.
And bucket is the location that my project will be built out to.
With create-react-app, this is the build folder. Once I've set those up, I'll save the file and run npm run build.
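The handful of wrangler.toml keys just described look roughly like this (a sketch with a placeholder account ID; the [site] keys mirror what wrangler init --site scaffolds):

```toml
name = "my-react-app"
type = "webpack"
account_id = "<your-account-id>"
workers_dev = true

[site]
bucket = "./build"           # create-react-app's build output folder
entry-point = "workers-site" # Worker code generated by --site
```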
create-react-app will build my project in just a couple of seconds.
And once it's done, I'm ready to deploy my project to Cloudflare Workers.
I'll run wrangler publish, which will take my project, build it, and upload all of the static assets to Workers KV, as well as the necessary script to serve those assets from KV to my users.
Opening up my new project in the browser, you can see that my React app is available at my workers.dev domain.
And with a couple minutes and just a brief amount of config, we've deployed an application that's automatically cached on Cloudflare servers, so it stays super fast.
If you're interested in learning more about Workers Sites, make sure to check out our docs, where we've added a new tutorial to go along with this video, as well as an entire new Workers Sites section to help you learn how to deploy other applications to Cloudflare Workers.
Zendesk is one of the world's premier customer service companies, providing its software suite to over 125,000 businesses around the globe.
My name is Jason Smale. I'm the vice president of engineering at Zendesk.
My name is Andrei Balkanashvili. I'm a technical lead in the foundation edge team at Zendesk.
Zendesk is a customer support platform that builds beautifully simple software for companies to have a better relationship with their own customers.
We have over 125,000 businesses around the world, all using Zendesk.
And then within those businesses, there's hundreds of people whose day job is to sit in front of Zendesk and use Zendesk.
For Zendesk, security is paramount. And when it came to safeguarding its network, Zendesk turned to Cloudflare.
Web security is very important to our business.
Our customers trust us with their information and their customers' information.
So we need to make sure that their information is safe, secure.
The initial need for Cloudflare came back a couple of years ago, when we suddenly started to see a lot of attacks coming towards us.
And all of a sudden, we'd get thousands of requests, hundreds of thousands, you know, like millions of requests coming at us from all over the place.
So we needed a way to be able to control what came into our infrastructure.
And Cloudflare were the only ones that could meet our requirements.
It's been really impressive to see how Cloudflare's DDoS mitigation continues to evolve and morph.
And it's definitely the best DDoS mitigation we've ever had.
I think Cloudflare just gets you that, and so much more.
And you don't have to pick and choose and layer on all these different providers, because it's just one.
And they're great at all of those things. It's easy.
It's a no-brainer. By tapping into Cloudflare's unique integrated security protection and performance acceleration, Zendesk has been able to leverage Cloudflare's global platform to enhance its experience for all of its customers.
Cloudflare is providing an incredible service to the world right now, because there's no other competitors who are close.
Cloudflare is our outer edge. It makes our application faster, more reliable, and allows us to respond with confidence to traffic spikes and make our customers happier.
Zendesk is all about building the best customer experiences.
And Cloudflare helps us do that. With customers like Zendesk, and over 10 million other domains that trust Cloudflare with their security and performance, we're making the Internet fast, secure, and reliable for everyone.
Cloudflare. Helping build a better Internet.