Event-driven applications with Cloudflare Queues and Dapr
Join Gift Egwuenu, developer advocate at Cloudflare, and Marc Duiker, developer advocate at Diagrid, as they walk through how to use Cloudflare Queues and Dapr to build an event-driven application.
Hi, everyone. Welcome to another segment here on Cloudflare TV. I'm Gift Egwuenu, developer advocate at Cloudflare, and with me here I have Marc.
Marc, hi, thanks for joining.
Thank you for inviting me. You're welcome. In this segment, we're going to be shedding light on Cloudflare Queues as well as Dapr.
We've seen developers utilizing our developer platform, which includes Workers, KV, R2, and now Cloudflare Queues.
Since Cloudflare Queues was released sometime last year, we've seen people using it.
I'm excited to have Marc show us how you can use Dapr and Cloudflare Queues to build event-driven applications.
Marc, do you want to introduce yourself and then share with us?
Thanks. As Gift mentioned, I'm a developer advocate at Diagrid, and we help developers build and run distributed applications based on open-source technologies such as Dapr.
I started there last January, so I'm still quite fresh there.
It's my job to spread the love about Dapr, the open-source project to build and run distributed applications across cloud and edge.
I think this is a good time now if I share my screen and show some slides to explain Dapr, because that's probably quite new to a lot of viewers here.
Let me share my screen. All right, here we go.
These days, a lot of organizations are building these distributed applications like you can see here.
It's just like a typical e-commerce system where you have a front end and there are some dedicated services for doing email or payment or a shopping cart and checkout and so on and so forth.
Each of these services has a very specific function, which makes each service easier to maintain and easier to upgrade.
That's quite powerful, and that's why all these organizations are doing this.
They're also event-driven, so nothing is polling and nothing relies on batch work, which makes the system quick, performant, and very scalable.
But if you start doing these kind of applications really at scale, then it's quite tricky.
There are quite a lot of developer challenges going on there: how do you do service-to-service invocation, and how do you do end-to-end tracing of your calls in case you want to do some debugging?
How do you handle failed calls, and how do you then apply some retry mechanism on top of that?
There are a lot of cross-cutting concerns that apply to all of these services, and it's very useful then if you can use something to actually take care of that.
So one of the things that you can use is Dapr, and Dapr stands for the Distributed Application Runtime.
It's a portable, event-driven runtime for building distributed applications across cloud and edge, and Dapr runs everywhere you can run Kubernetes or containers.
So it runs across many of the cloud providers or just your own virtual or physical machines even.
What I really like about Dapr is that it really speeds up microservice development, because it offers building block APIs, and this is a list of all the building blocks.
So with Dapr, you can do service-to-service invocation, you can handle your state management, you can do publish and subscribe, you can have input and output bindings, and in the demo we'll do soon, we'll use the Cloudflare Queues output binding.
You have observability built in, you can use the actor model, you can use different secret stores and configuration stores, and these days, these are the two latest ones.
You can even apply some architectural patterns such as distributed lock or long-running workflows.
So it's quite a broad spectrum that Dapr offers, and it's really focused on developers building distributed applications.
So how does Dapr actually work?
Well, it's built on a sidecar model, so if you're familiar with containers and Kubernetes, you probably know about this, but if you don't: Dapr runs in a separate process next to your application, and you communicate from your application with the Dapr API.
And here are just some examples that use the HTTP protocol, so in case you want to invoke a method on another application, you can use this invoke command, or in case you want to retrieve some state, publish a message, retrieve a secret from a vault, or start a workflow.
So these are typical things you can do with Dapr.
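Those HTTP examples can be sketched concretely. This is a hedged illustration, assuming Dapr's default sidecar port 3500; the app, store, and topic names below are made up, and the helpers only build the endpoint URLs an application would then call with fetch():

```typescript
// Sketch of the Dapr sidecar's HTTP API surface, assuming the default
// sidecar port 3500. Each helper builds the endpoint URL an app would
// call with fetch(); names like "checkout" below are illustrative.
const DAPR_HOST = "http://localhost:3500";

// Invoke a method on another Dapr application by its app id.
export const invokeUrl = (appId: string, method: string): string =>
  `${DAPR_HOST}/v1.0/invoke/${appId}/method/${method}`;

// Get or save state in a named state store component.
export const stateUrl = (store: string, key: string): string =>
  `${DAPR_HOST}/v1.0/state/${store}/${key}`;

// Publish a message to a topic via a pub/sub component.
export const publishUrl = (pubsub: string, topic: string): string =>
  `${DAPR_HOST}/v1.0/publish/${pubsub}/${topic}`;

// Read a secret from a configured secret store.
export const secretUrl = (store: string, key: string): string =>
  `${DAPR_HOST}/v1.0/secrets/${store}/${key}`;

// Example (needs a running sidecar, so it's only shown, not executed):
// await fetch(publishUrl("pubsub", "orders"), {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ orderId: 1 }),
// });
```

Your application only ever talks to localhost; the sidecar takes care of reaching the actual backing service.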
And you can use any language that supports HTTP or gRPC, so that's probably a lot of programming languages, but it's even easier if you use one of our client SDKs.
So Dapr is a CNCF project at the incubation maturity level, and it's actually the 10th largest CNCF project, which I think is quite an achievement, because there are over 200 CNCF projects.
It's also doing quite well in the community, so it's quite popular, with a lot of different contributors and quite a big Discord community as well, so in case you want to know more, or want some help, Discord is the place to be.
And there are a lot of contributing organizations for Dapr, so although it originated at Microsoft, these days a lot of big organizations are making contributions to Dapr.
And there's also a very big number of organizations that are using Dapr.
So talking about these building block APIs: from your code you call these APIs, and you can see these building block APIs as a sort of interface, but, for instance, when you want to save or retrieve some state, there needs to be an actual database or key-value store behind that API.
So those are called components, and there are literally dozens and dozens of components.
So for instance, like an example for the state management one, we have things for AWS, Azure, GCP, but there's also one for Cloudflare KV.
So the same goes for all of the different APIs, there are really a lot of components for all of these, and in this session we're going to talk about this binding for Cloudflare queues.
So just final few slides, just to explain how it works.
So say from your application you want to, for instance, store something in Cloudflare KV. Normally you would call the Cloudflare API directly, right, like any application, but in this case we're not doing that, because we are using the Dapr sidecar.
So your app in this case doesn't know anything about Cloudflare; it only knows about a component name, and that's it. But then Dapr needs to be aware that it needs to save the state in Cloudflare KV.
So how does it work? Well, there's actually a YAML file that contains the configuration.
So when your application is loaded and your sidecar is loaded, Dapr knows: whenever I see a component named my-storage, that means I'm using a specific storage type called state.cloudflare.workerskv, and that's how Dapr knows which storage to connect to.
So this model makes it actually very easy to switch the underlying storage.
So what you actually can do is provide a different YAML file and point it to a completely different storage such as Redis.
So your application code stays exactly the same, only this one YAML file changes that points to Redis.
So this makes it really powerful, for instance, to do development completely locally using a local store, maybe a local container running Redis or a SQLite instance on your local machine, but as soon as you deploy your workload to the cloud, you switch to the Cloudflare Workers KV component.
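As a sketch, that swap could look like the following pair of component files. The component types and metadata names follow the Dapr component reference; the component name and all values here are placeholders:

```yaml
# Component used in production: state goes to Cloudflare Workers KV.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mystorage
spec:
  type: state.cloudflare.workerskv
  version: v1
  metadata:
    - name: kvNamespaceID
      value: "<kv-namespace-id>"
    # ...plus worker and API-token settings, per the Dapr docs
---
# Drop-in replacement for local development: same component name,
# but backed by a local Redis container instead.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mystorage
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: "localhost:6379"
```

Because the application only refers to the component name `mystorage`, swapping which file the sidecar loads changes the backing store without touching application code.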
Interesting. All right. I'm curious: on the KV side, you typically need to set up, for example, your binding.
How do you connect that?
Because I see this state.cloudflare.workerskv type. Yes, there is some additional metadata, so I'm not showing it here, but there's a list of metadata, and some of the things in there are an API key and some URLs and things like that.
So that's typically part of the metadata. You can put the metadata in this YAML file while doing development and testing, but when you go to production, you should use a secret vault and then use secret references in here, and then under the hood it will be retrieved from the secret vault.
I'll be actually showing that in the demo that we're going to do.
Good question. All right. Let's see the demo. Exactly. Exactly. So what I'm going to do is just go through a readme that is in a GitHub repo, so it's Diagrid Labs, Dapr Cloudflare Queues.
So I've opened it up in VS Code here. So this repo consists of a producer application, which will be our Dapr application, and that will post messages to a Cloudflare queue, and that Cloudflare queue pushes the messages to a worker that's subscribed to that queue.
So this is what we're going to build.
The producer application is already here. This consumer worker is also already here, but we still need to deploy it to Cloudflare.
We still need to create this queue in Cloudflare and we'll be using a Wrangler CLI for that and the application is just running locally on my machine.
So I'm talking about prerequisites.
In this case, we're using Dapr, so Dapr works with a Dapr CLI.
That's already installed here. In this case, it's a Node application, so we need Node.js, we need to have Wrangler installed, and since we're using Cloudflare Queues, it's part of a paid plan.
So you have to make sure that you're on a paid plan, and in case you haven't enabled Queues yet, you need to enable Queues in your Cloudflare dashboard.
So let me also have a look at the Cloudflare dashboard by the way.
So this is the Cloudflare dashboard and here is the queues tab.
So I already enabled Queues, but if you didn't, there will be a button here saying please enable Queues.
All right, I'm scrolling down a bit.
So what we're going to do again, we're going to create a Cloudflare queue.
We are going to publish a consumer Cloudflare worker that reads messages from the queue, and then we will run this producer Dapr application that will publish messages to the queue.
So I'm already logged in with Wrangler, so I don't need to do that anymore.
So the first thing we're going to do is actually create the queue.
So we can use wrangler queues create and then the name of the queue, in this case dapr-messages.
So maybe zoom in just a little bit more.
Yeah, I was going to say to close the site thing, but this is perfect. All right, yeah.
So it's created the queue named dapr-messages. So that's there.
Perfect. So we can actually verify that in the dashboard and there it is. Okay, so that's perfect.
So now let's have a look at the Cloudflare worker.
So the readme also includes steps to actually create one from scratch, but we already have one in this repo.
So I'm not going to create from scratch.
I'm just going to publish it. But before that, I'm going to show you what it looks like.
So I'm in the consumer folder now, source. This is the TypeScript file.
It's a very small function. This is it, this is the method.
You have to name it queue, and then it can receive a batch of messages. We're just going to stringify those messages and then output them to the console.
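A minimal sketch of that handler, with the Cloudflare-provided types replaced by simplified stand-ins so the shape is visible:

```typescript
// Minimal sketch of the consumer worker: a handler named `queue` that
// receives a batch, stringifies the message bodies, and logs them.
// The types are simplified stand-ins for the Cloudflare-provided ones.
interface QueueMessage {
  body: unknown;
}

interface MessageBatch {
  messages: QueueMessage[];
}

// Formatting pulled into a helper so the behavior is easy to see.
export function formatBatch(batch: MessageBatch): string {
  return JSON.stringify(batch.messages.map((m) => m.body));
}

export default {
  // Cloudflare invokes this handler with each batch pulled from the queue.
  async queue(batch: MessageBatch): Promise<void> {
    console.log(formatBatch(batch));
  },
};
```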
So nothing very special going on here. I saw that you also added a few things to the wrangler file.
Yeah, exactly, that's a good question, and it is actually important.
So of course, you have to give this a name. It needs to refer to the TypeScript file, but this is very important.
So here you indicate that this worker is actually a consumer of a queue.
So this is important and you have to specify the name of the queue, otherwise it doesn't know where to subscribe to.
And you have to specify a max batch size, which is the number of messages you want to take in at once.
In this case, I've set it to one because I want to show each individual log message, but you can definitely set this to a higher size, because it's definitely more performant if you do some batching when receiving messages.
But in this sample, we only need to send 10 messages, so that's not really important.
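The wrangler configuration being described might look roughly like this fragment; the worker name and compatibility date are illustrative, while the [[queues.consumers]] section is what binds the worker to the queue:

```toml
name = "queues-demo-consumer"        # illustrative worker name
main = "src/index.ts"
compatibility_date = "2023-03-01"    # illustrative

# Mark this worker as a consumer of the queue.
[[queues.consumers]]
queue = "dapr-messages"
max_batch_size = 1    # one message per invocation, so each log line is visible
```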
So I'm now going to publish this Cloudflare worker.
So there we go, wrangler publish.
It's now uploading it, and again, we can check the dashboard if it's there.
And there we are.
There's the consumer. So that's perfect. And the next thing I'm going to do is I'll start a log tail because we will be sending messages from the producer app to the queue, and then the consumer will consume those messages.
It's nice if you can have a look at the log that these messages are actually arriving there.
So wrangler tail, and it will start a log there. Yeah, that's a good detail.
Perfect. All right. So now it's time to look at the producer side.
So that contains two parts, right? So we have the application itself. So I'm in the producer folder here.
So if you look at the index file, so this is the producer app code.
This is the function. We're using the Dapr client, so we're using the client SDK for Dapr because it makes it a bit easier to interact with the Dapr sidecar.
We're specifying a host and a port, which we pass to the SDK, and this is then the main function.
So what's important is we need to refer to our binding, and the binding has a certain name, Cloudflare queues, and that is specified in this YAML file I spoke of earlier, and we will have a look at that very soon.
But this is the only thing that you need to use in your application to refer to this binding component.
The binding operation that we're using is, in this case, publish.
So this is different for each binding component, but in case this is a queue, it's quite natural to have a publish operation on a queue.
So that is the name of the method we'll be calling.
Then we create an instance of the Dapr client, and then we do a for loop, and then we create a message like, hello world, with an incremental number.
And here we're using the Dapr client, and then we use the binding API, and then we're going to send something to the binding that then contains the binding name, so Cloudflare queues.
It contains the operation, which is publish, and it contains the message that we're sending.
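Under the hood, the SDK call amounts to an HTTP POST to the sidecar's bindings endpoint. A hedged sketch of that, assuming Dapr's default port 3500 and a component named cloudflare-queues; the request-building helper is pulled out so it's easy to inspect:

```typescript
// HTTP-level sketch of the producer: post an operation plus payload to
// the Dapr sidecar's output-binding endpoint. Port 3500 is Dapr's
// default; "cloudflare-queues" stands in for the component name.
const DAPR_PORT = 3500;
const BINDING_NAME = "cloudflare-queues";

interface BindingRequest {
  url: string;
  body: string;
}

// Pure helper: build the request the sidecar expects for this binding.
export function buildPublish(message: string): BindingRequest {
  return {
    url: `http://localhost:${DAPR_PORT}/v1.0/bindings/${BINDING_NAME}`,
    body: JSON.stringify({ operation: "publish", data: message }),
  };
}

// The loop from the demo: ten messages, one per second. This needs a
// running sidecar, so it is defined here but not executed.
export async function producer(): Promise<void> {
  for (let i = 1; i <= 10; i++) {
    const { url, body } = buildPublish(`Hello World ${i}`);
    await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body,
    });
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
```

The client SDK wraps exactly this kind of call, which is why the application code only needs the binding name and the operation.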
And then we sleep a second, and then we go to the next iteration. All right, so the other important part here is under resources: that's the binding YAML.
So I'm going to show you this template file.
So this template file is checked into source control, because it doesn't contain any sensitive information.
The YAML file that my application is using is this one, because that contains all kinds of keys and secrets, which I'm not showing here in this live stream.
So if I scroll all the way up here in this template file, so this is the name that our application is referring to, Cloudflare queues.
So our application is using this, and then the Dapr runtime sees: okay, I see this component name.
What is the type of component that belongs to this cloudflare-queues name?
In this case, we're using the bindings of the Cloudflare queues type of binding.
And then there are some different metadata involved here.
So you can actually use this binding in two ways.
What happens under the hood is that Dapr will actually create a Cloudflare worker for you, and it will send the messages to that Cloudflare worker, and that Cloudflare worker will then put the messages on the queue for you, because the only thing that can interact with Cloudflare queues are Cloudflare Workers.
So we are not directly sending a message from a Dapr app to the Cloudflare queue.
No, messages are sent first to a worker, and that worker is then responsible for delivering that message to the queue.
And then you can choose either to have Dapr creating that worker for you, or you can provision a worker yourself.
In this case, I'm choosing just to have Dapr create that worker for me.
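Conceptually, the worker Dapr provisions is a thin forwarder. This sketch is illustrative only — it is not Dapr's actual generated code, which among other things verifies a signed token from the sidecar — and the QUEUE binding name and simplified types are made up for the example:

```typescript
// Conceptual forwarder: an HTTP worker that takes the sidecar's request
// body and pushes it onto the queue binding. Only Workers can talk to
// Cloudflare Queues, hence this hop. Types are simplified stand-ins.
interface Queue {
  send(message: unknown): Promise<void>;
}

interface Env {
  QUEUE: Queue; // queue producer binding from the worker's configuration
}

interface IncomingRequest {
  text(): Promise<string>;
}

const worker = {
  async fetch(request: IncomingRequest, env: Env): Promise<{ status: number }> {
    const payload = await request.text();
    await env.QUEUE.send(payload); // enqueue on behalf of the Dapr app
    return { status: 200 }; // a real worker returns a Response object
  },
};

export default worker;
```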
So what's important here is that you specify the name of the queue that you'll be publishing to.
So that's dapr-messages.
Then there's the name of the worker that Dapr will create for us. So I just named it dapr-message-worker.
Then there's a part that you need to create a private key to do secure communication, and that's described in the docs.
There are some links there. I won't go into detail into that, but you can just follow the documentation.
Then you need your Cloudflare account ID, where you can get that from the dashboard.
And you also need to have an API token for Cloudflare, and that's probably useful if I show that.
So if I scroll down here, so this is your account ID that's required, but there are also API tokens.
So you need to create an API token that the Cloudflare worker can use.
You have to give it a name, and then you have to give it permissions.
And let me zoom back out a bit. So the only permission you need for this token is the account-level Workers Scripts permission.
So that's the only permission that is required.
And then the last value is optional, in case you already have a worker provisioned.
But in this case, I have no value for this because I want Dapr to create a worker.
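Assembled, the template file being walked through likely resembles the following; the metadata names follow the Dapr Cloudflare Queues binding reference, and every value here is a placeholder:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: cloudflare-queues
spec:
  type: bindings.cloudflare.queues
  version: v1
  metadata:
    - name: queueName
      value: "dapr-messages"
    - name: workerName
      value: "dapr-message-worker"
    - name: key                 # Ed25519 private key for sidecar-to-worker auth
      value: "<pem-encoded-private-key>"
    - name: cfAccountID
      value: "<cloudflare-account-id>"
    - name: cfAPIToken
      value: "<api-token-with-workers-scripts-permission>"
    # workerUrl would only be set if you provision the worker yourself
```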
So this is the template file. This is the actual binding file. I'm scrolling up a bit.
So that contains the same information, but then filled in with the actual keys and API keys and so forth.
So I think it's time now to actually run this application locally. So again, I can use the Dapr CLI to run this application.
So I do Dapr run. I have to give the application an ID.
So that's always required. In this case, I'm just calling it producer.
I'm then referring to the resources path that contains the YAML file.
So Dapr will read this file, and then it will know I need to connect to Cloudflare.
And then the last part of the CLI command is npm run start. So it actually runs my node application.
So let's run this. There we go. So there is some output that is shown.
So now we see that it's deploying an updated worker to my Cloudflare worker URL.
So now we can see that Dapr is actually deploying a new worker.
And this might take a while, this process, especially if you do it for the first time.
It might take up to one or two minutes. So I probably have to wait a bit. We can actually go back here to the dashboard to the worker.
It does publish it to your account.
So you should see a new one. Okay. Yeah, yeah. So we see it's created.
It's good. But then Dapr needs to do some checks as well, so it checks whether it's there; it does some sanity checks.
And that initially might take a bit. But okay, it's actually done.
And actually, we were too slow, because it already also published the messages, which the log tail showed.
But let's run it again while looking at this screen now.
So we run the same command again, because now it will be faster because the worker is already there.
So it doesn't need to do that again. And then at the right-hand side, we will see the output again of the log tail.
Let's do this again.
All right. So now the application is publishing and we see new records appearing on the consumer side that contains all the hello world statements.
So we are now neatly sending messages from a Dapr application to Cloudflare Queues, and a worker is consuming them there on the edge, which I think is pretty neat.
Yeah, I agree. And the process to get it up and running is also pretty seamless.
Yeah, exactly. Yeah. There are some things you need to take care of.
So make sure you have the right API keys and you need to create a secret, but everything is quite well documented.
And I think everyone can follow this readme that I'll share.
And then there is more information on the Dapr side as well. So if you look at the Dapr docs, this is the actual specification of the binding that we're using.
So that also contains lots of information that I'm referring to in my GitHub repo.
I also would like to add, on the Cloudflare side, for Cloudflare Queues: the output data that is coming from your consumer, you can also store that in R2.
So R2 is like Cloudflare's... Like the NoSQL store, right?
Yeah, yeah, yeah, yeah. It's more like... I'm trying to remember the exact words to use.
I'm sorry. That's okay. Yeah. No, I haven't tried this yet, but I think it's nice.
Yeah, yeah. It's more like a bucket of storage.
Yeah, storage bucket. The word I was looking for was unstructured data.
So the data that you get in your consumer, you can then batch that up and store it in R2.
So if that is also what is necessary or required of you, and to do that on the consumer side, you just need to create an R2 binding and create a bucket.
Right. Yeah, exactly. Yeah, yeah. Oh, yeah. That sounds quite easy.
Yeah. So of course, the consumer now doesn't do anything special; it just does console logs, but like you mentioned, I can just add a binding in this function and then it will store it in the bucket.
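The extension being discussed could be sketched like this; BUCKET is a hypothetical R2 binding name that would be declared in the consumer's configuration, and the types are simplified stand-ins:

```typescript
// Sketch of the R2 extension: the queue consumer writes each batch into
// an R2 bucket instead of only logging it. BUCKET is a hypothetical
// binding name; the interfaces are simplified stand-ins.
interface R2Bucket {
  put(key: string, value: string): Promise<void>;
}

interface Env {
  BUCKET: R2Bucket;
}

interface MessageBatch {
  messages: { body: unknown }[];
}

export const consumer = {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    // One object per batch, keyed by timestamp so writes don't collide.
    const key = `batch-${Date.now()}.json`;
    const payload = JSON.stringify(batch.messages.map((m) => m.body));
    await env.BUCKET.put(key, payload);
  },
};
```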
Yeah, that's nice. It can be a nice extension. Yes. One question before we wrap up.
This was great. Thank you so much for sharing how to set up Dapr and also sharing the magic of queues as well.
I remember you shared an example e-commerce application in your slides.
I'd like for us to talk briefly about some use cases of this.
So if someone is looking to actually do something with Dapr and Cloudflare Queues in their application, what specific use cases have you seen people looking at?
Right. Well, I haven't actually seen anything yet that makes this crossover from any generic cloud platform to the edge.
So I'm also very interested in seeing real production use cases for this, but I think it's a great opportunity, because usually people either leave their workload in one cloud, or maybe they do a cross-cloud thing using the major cloud providers.
But I haven't seen actual examples where they go from one cloud to like an edge globally distributed system such as Cloudflare.
So yeah, I think this solution has a lot of potential because for the end users, it'll be much, much faster to publish to queues and publish to workers which are close to end users.
So yeah, performance-wise, I think it can have a lot of benefits if you move workloads closer to the user.
Yeah, I agree as well. That's interesting.
So if anyone is looking to find out more about Dapr or even this demo, you want to see the demo, the GitHub repo is?
Yeah. So this is the GitHub repo: Diagrid Labs, Dapr Cloudflare Queues.
Yeah. So that's a good one. But another good one would be, I think just the docs is also, I think, good.
So docs.dapr.io. That's always a good start.
And yeah, like I mentioned before, we also have a Dapr Discord.
And there's a separate section there for bindings. So in case you want to know something more about the Cloudflare binding, the creator and maintainer of the Cloudflare binding is also there.
So if you want all of the in-depth information about it, you can ask some questions there.
I don't have a slide to show mine, but also if you have any questions around Cloudflare Queues in general, we have a Discord as well.
And there's a Queues channel there where you can ask questions or just see what other people are also building using Cloudflare Queues.
And yeah, with that, we'll call the segment a wrap.
It was really nice to have you on here, Marc. Thank you so much for sharing. I really enjoyed it.
My pleasure. Great. All right. Thank you, everyone, for tuning in.
Until next time, have a wonderful day.

Q2's customers love our ability to innovate quickly and turn what were traditionally very static, old-school banking applications into more modern technologies and integrations in the marketplace.
Our customers are banks, credit unions, and fintech clients.
We really focus on providing end-to-end solutions for the account holders throughout the course of their financial lives.
Our availability is super important to our customers here at Q2. Even one minute of downtime can have an economic impact.
So we specifically chose Cloudflare for their Magic Transit solution because it offered a way for us to displace legacy vendors in the Layer 3 and Layer 4 space, but also extend Layer 7 services to some of our cloud-native products and more traditional infrastructure.
I think one of the things that separates Magic Transit from some of the legacy solutions that we had leveraged in the past is the ability to manage policy from a single place.
What I love about Cloudflare for Q2 is it allows us to get 10 times the coverage as we previously could with legacy technologies.
I think one of the many benefits of Cloudflare is just how quickly the solution allows us to scale and deliver solutions across multiple platforms.
My favorite thing about Cloudflare is that they keep developing solutions and products.
They keep investing in technology. They keep making the Internet safe.
Security has always been looked at as a friction point, but I feel like with Cloudflare, it doesn't need to be.
You can deliver innovation quickly, but also have those innovative solutions be secure.