Cloudflare TV

💻 Developer Week: Product Chat on Geolocation Announcement and Observability Partnerships

Presented by Jen Vaccaro, Steven Pack, David Song
Originally aired on 

Join the Cloudflare Product Team to discuss everything new that we announced today for Developer Week!

Read the blog posts:

Developer Week

Transcript (Beta)

Hi, everyone. Thanks for tuning in today to our series on product discussions and announcements following everything that we've launched during Developer Week.

So I'm here today with David and Steve.

I'll let them introduce themselves in a minute. I'm Jen Vaccaro.

I'm the product marketing manager for Cloudflare Workers, our serverless platform and Cloudflare Pages, which is our JAMstack platform to deploy and host sites.

So today I'm really excited to talk with David and Steve about what we have new for our users, particularly with workers and making it more powerful and easier to use.

So we'll touch on new observability tools with Cloudflare Workers that Steve will talk about.

We'll also touch on new personalization and geolocation that we're enabling with workers.

And then we'll explore a little bit about our announcement with NVIDIA.

So first, I'll get us started by introducing David and then Steve, you can introduce yourselves.

Sounds great.

Yeah. Thank you so much for having me on here. I'm David, a PM intern on the Workers and Pages team.

I guess outside of that, I'm a CS major and currently a junior.

Sounds good. So we'll get started today, David, with your announcement. But before we do that, Steve, do you want to just let them know who you are and what you do here on our team?

Yes. Hi, all loyal Cloudflare TV watchers. My name is Steve Pack.

I'm on the strategic partnerships team at Cloudflare. And specifically, I'm on the solution engineering part of that.

So really looking at how to map partner capabilities to the requirements of our biggest partners, our biggest customers, and all of our developers, and bringing those partnerships to our customers and making sure they're valuable and they work.

So looking forward to the session today, Jen. Sounds great. So we'll get started today, David, with your announcement.

We launched some pretty exciting things regarding geolocation personalization.

So why don't you get us started by just telling a little bit about what's new in this space that we wrote about today in the blog?

Yeah, for sure. So we just released geolocation data for developers on Workers.

It means you can access location data about where each request is coming from when users send requests to your Workers.

So, for example, you can get information like the city, the continent, the postal code, and the latitude and longitude of where the requests are coming from.

Sounds great. So can you give us a little bit more of a deep dive on what are we seeing these enabling for our users today?

So what are some example use cases or things that we're already expecting users to start doing or think that might be might begin in the future?

Yeah, for sure. So this has a lot of different applications for our users today.

So, for example, we can see it being really useful for creating location-based apps, like an automatic shipping calculator for an e-commerce platform, or for building entirely new types of location-based apps.

So everything from social media networks to creating like Yelp-like products where you're interacting with services in your own area.

So we hope this makes workers an even better platform for developers to build off of and hope to see more exciting applications there.

Is this a good time for me to share the demo? Yeah, that sounds good.

Sounds good. Let me start sharing my screen. I have a couple examples of some fun ways to use geolocation data.

So for the first example, I'm just going to share with you what kind of data is available.

So, for example, we can get some Cloudflare-specific data, such as which colo the user is accessing from, and some other important pieces of information such as time zone and latitude and longitude.

Getting this information is actually pretty simple.

I'll show you the source code for this Hello World application. You can also find it on the latest blog post.

The snippet is linked in here along with the source code.

So looking at the source code, you basically can just access these properties through the request object.

Very straightforward and easy to use.
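The properties David is pointing at live on the `request.cf` object that Cloudflare attaches to every incoming request at the edge. A minimal sketch of a Worker that echoes a few of them (the field names like `city`, `postalCode`, and `colo` follow the Workers runtime docs; the `geoSummary` helper is just for illustration, and some fields can be undefined for a given request):

```javascript
// Helper that formats a few geolocation fields from request.cf.
// On Cloudflare's edge, request.cf is populated automatically;
// some fields may be undefined (e.g. city for certain IPs).
function geoSummary(cf = {}) {
  const { city, postalCode, continent, latitude, longitude, timezone, colo } = cf;
  return {
    place: `${city ?? "unknown city"}, ${continent ?? "??"} ${postalCode ?? ""}`.trim(),
    coords: latitude && longitude ? `${latitude},${longitude}` : null,
    timezone,
    colo, // the Cloudflare data center that served this request
  };
}

// Module-syntax Worker using the helper.
const worker = {
  fetch(request) {
    return new Response(JSON.stringify(geoSummary(request.cf), null, 2), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

Deployed as a Worker, each visitor sees a JSON summary of their own location data.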

But it actually helps you create lots of cool, complex features, too.

So another use case that you can use this for is for creating customized designs, UI designs.

One thing could be for automatic dark mode for users or customizing the language of a design to be country-specific.

So you're showing the users the most relevant information for them.

So this here is a clock that changes the background based on your location or your time zone.

So it's showing light blue during the day and then a darker green during the night.
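The clock demo boils down to mapping the hour of day in the visitor's time zone (from `request.cf.timezone`) to a color. A small sketch of that idea, where the hour cutoffs and hex colors are made up for illustration:

```javascript
// Pick a background color from the hour of day in the visitor's time zone.
// On Workers, `timezone` would come from request.cf.timezone; the cutoffs
// and color values here are illustrative, not from the actual demo.
function backgroundFor(timezone, now = new Date()) {
  const hour = Number(
    new Intl.DateTimeFormat("en-US", {
      timeZone: timezone,
      hour: "numeric",
      hourCycle: "h23", // 0-23, so midnight is "0" rather than "24"
    }).format(now)
  );
  // Light blue during the day, a darker green at night, as in the demo.
  return hour >= 7 && hour < 19 ? "#ADD8E6" : "#1B4332";
}
```

A Worker could then inject this color into the page's CSS before returning the response.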

So another way you could use the geolocation data is for building location-based apps.

So a really simple one that's demoed here is a weather app. So you can use the user's location to automatically give them the most relevant weather data for them.

So this is hitting a weather API and showing me the most relevant weather info for my location here.

And I'm currently close to Redwood City in California.

So there's a lot you can accomplish with just the few fields of geolocation data.
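The weather demo's core move is forwarding the visitor's coordinates to a weather service. A sketch of that, where `api.example-weather.com` and its `lat`/`lon` query parameters are hypothetical stand-ins for whichever weather API you actually use:

```javascript
// Build a request URL for a weather API from the visitor's coordinates.
// "api.example-weather.com" and its parameters are hypothetical; substitute
// the real weather service and query format you use.
function weatherUrl(cf) {
  const url = new URL("https://api.example-weather.com/v1/current");
  url.searchParams.set("lat", cf.latitude);
  url.searchParams.set("lon", cf.longitude);
  return url.toString();
}

// Inside a Worker handler, you would then do something like:
//   const weather = await fetch(weatherUrl(request.cf)).then((r) => r.json());
```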

That's great. And I know we've had some users who've been able to parse out geolocation data in the past with workers.

So it's really interesting to see how we're making this available for everyone and very easily accessible.

Can you just tell us a little bit how could our users with workers go about implementing this?

I know you shared some of the source code. Maybe walk them through some ways that they could get started.

Yeah, for sure. So like you mentioned, this is a pretty cool step because this feature used to be enterprise-only and pro-only.

Now it's available to everyone, all developers. So it's really easy to get started by just referring to these code snippets.

I can show you where these code snippets are in our documentation.

So we have a lot of great examples if you're looking for how to use workers on our workers docs.

So if you go to worker site, click on our docs and examples, we have lots of different examples.

And I just added a few for geolocation. So you can refer to these for inspiration to see how you could use it.

And there's also our full technical breakdown of what exact data you get. It's under Request. Actually, let me just go to the direct link for it.

It's somewhere in here. Yeah.

Runtime APIs, then Request. And then you can see the full technical breakdown of the types and what information you can get.

Yeah. That sounds great. I'm just thinking back to some of our enterprise customers who'd been using this feature.

And I remember one of them was for sports broadcasting, and they wanted to target certain areas.

Like one area would get the Mets and another would get Houston; depending on what the area was, they were able to change the head banner image of their landing page live based on their users.

That was one of the most widely used examples I remember, sports broadcasting.

And I'm trying to think of another one.

I think we've also had a lot of users who are able to use this personalization based on legal compliance.

So, we've had some users in the past who will share it.

Europe has certain compliance with cookies and being able to access certain privacy information from the browser and things like that.

So, we've had some users who were able to change, live on the fly, that annoying little cookie message that we all get when we're logging into a site telling us they have cookies; they're able to change that depending on what the local compliance laws were.

So, those are a couple ones that I can think of off the top of my head that we had on the enterprise level.

I know a lot of our users were enabling that.

So, it will be really interesting to see what some of our users enable who are on pay-as-you-go or free.

But maybe you can tell us about that for a second here.

Which plans are available for this? I believe it's every single one now, right?

Like even our free, no limits. Yeah, it's available to every single developer.

So, it's available on the free plan. So, if you sign up today, you can get running right away with your location data on workers.

That sounds great. And as far as some of the feedback that you've heard, I know your blog just went out this morning.

Are there any sort of things you've heard or anything that you would like to be able to add to this in the future, kind of thinking of what we might enable our developers in this sort of personalization space going forward?

Yeah, for sure.

I think it'll be really exciting to see what users are continuing to build with this.

I've gotten some DMs on Discord. We have a very vibrant developer community on there.

I think if you haven't checked it out, you can find an invite link to the Discord on the Cloudflare Workers homepage.

So, I highly recommend checking that out and giving us feedback on there and sharing what you're working on.

I just joined a week ago, David. I agree. Awesome. There's a lot happening.

It's like folks building awesome stuff. It's quite a fun place to hang out and you can't help but learn what's happening too, like seeing what people are doing.

For sure, for sure. So, I think geolocation is a really good feature on Workers right now.

I think it'll be really exciting to see how you can connect the geolocation feature to other Cloudflare products too, like if you could also easily integrate it with Pages, making that bridge simpler.

So, then I think we'll see some more applications being built using multiple Cloudflare products together.

Sounds great. Anything else, David, that you want to tell us about geolocation before we talk about some of the other announcements and then maybe go into a roundtable discussion on all of these and the rest of the week?

I think we covered all of it. The only other thing would just be: feel free to come on the Discord and message us, share if there are any problems you're having, and we'd of course always love to hear about and check out some projects that you're working on as well.

Sounds good. So, and if anyone has questions for David, feel free to write them in the chat.

We'll talk now about Steven and observability, but we'll definitely go back as well to some of the geolocation and conversations with David.

So, feel free to share some messages or questions that you have around that as we go into talking about observability with Steve.

So, Steve, I know one of the big themes that we have for Developer Week in general is that it takes a village, and observability is definitely a place where this is very much true.

And we have some tools available already that we announced during serverless week with observability and with workers, but we have a lot more that you announced today.

So, maybe you can get started telling us a little bit about what you announced this morning.

Cool. Yeah, Jen. I mean, first I'd like to reiterate what you said.

Like, at Cloudflare, we have a habit. I would say it's a good and a fun habit of building our own stuff.

And so, no doubt you'll continue to see the workers team add more and more observability features to workers itself.

And yeah, those announcements you're referring to of like Wrangler dev, Wrangler tail, you know, they really, really do help, you know, someone who's getting started with workers and writing their first worker and, you know, actually seeing what it's doing, what's happening when it's executing on the edge.

It's actually really amazing when you think about it. Often you can like run one of these things, like do your hello world worker console.log, I'm here.

Deploy it to production, like run Wrangler tail and then execute it and you see your log.

And it's like, yeah, cool. Of course I see my log. I had a console.log statement.

I expect to see it. Until you think, like, actually you just deployed that code to 200-plus, getting closer to 300 now I think, data centers, and whichever data center happened to take your request executed it and sent that.

I don't even know what the connectivity is between Wrangler and the edge, but it was executing far away.

The logs appeared there, right there in your terminal.

Like there's a lot happening for that.

And, you know, so that in itself is a really great innovation. And I think the team's done an amazing job to sort of hide, right?

Like hide the complexity of serverless and make it a great developer experience.

And that's awesome, right? But it can also mean that you can get a fair way in developing your project, whether it's a, you know, big enterprise project or just a side project of just focusing on the functionality, right?

You're not worried about scale. You're not worried about this stuff because you just, you know, that it scales and then you deploy it and you're sort of like, cool, it's great.

All my tests are passing. Everything runs well locally.

I'm going to deploy it to production. And then you're like, ah, I don't have the same sort of visibility as I would if I had all my dev tools running, you know, I need more, right?

And, you know, so the difference today is that we're announcing partnerships, and expansions of existing partnerships, with a bunch of companies that are really integrated into the Workers ecosystem, either with custom packages, or by virtue of the fact that they speak HTTP, or that we have some sort of integration.

And so I'm sure we'll probably go through them, but, you know, Sentry, New Relic, Datadog, Sumo Logic, Splunk, Honeycomb, you know, all either leaders or innovators in this space.

And so delighted to announce, you know, the partnerships with them and, you know, talk through what folks can use it for.

Sounds great. And so one thing, just taking a step back, if you could just tell the audience a little bit about what Wrangler is.

I think maybe some will be familiar, some might not.

And maybe you can give them a little rundown on what that is.

Cool. Yeah. Good point. I forget that not everyone lives and breathes workers every day.

So Wrangler is the CLI, the command line interface for workers. And so it allows developers who typically like to work in the command line or at least like have some command line capability in addition to their IDE, enables them to create new projects, to build those projects, to test those projects, to see logs in their local environment for those projects, and then ultimately to publish them either to a temporary like domain that Cloudflare provides, like, I can't remember the exact one, or to their own domain that they've brought on to Cloudflare.

So Wrangler makes all of that super easy to do.

Yes. Sounds good. And anyone who's getting started, like in the workers documentation, there's really easy instructions on how to get running with Wrangler.

And before we get started on the new announcements, I know you briefly already touched on Wrangler Dev and Wrangler Tail, which we announced during Serverless Week.

Can you just give a brief little more summary on exactly what each one of those is and how they enable users?


Yeah. So I think Wrangler Dev, probably the best way to think of Wrangler Dev is like to actually try out a worker or to think about what a worker is, you know, it's something that executes in response to an HTTP request on Cloudflare's edge.

And that's not always straightforward to simulate, right? Like to really do it, you need to have, like if you think it through, you sort of need a whole Cloudflare edge deployment sitting on your computer, which you don't have.

So Wrangler Dev is a way to simulate that, where you can send requests to an address at localhost and they will behave (asterisk) the same; like, there's a couple of things that can't be fully replicated.

But by and large, you can replicate the environment that that worker will execute on the edge on your local environment.

And so you interact with localhost and it executes as if it were on the edge.

So it's a really nice development experience, particularly for folks who have maybe, you know, used other platforms where, you know, the sort of compile, edit, debug cycle, as you might call it, is, you know, writing your code, deploying it to the destination, waiting for a long time for it to replicate, calling it, waiting for a long time for that cold start, which we don't have in workers, eventually seeing the output of your code, which if it's an error, you get to start again.

You know, like really tightening that cycle to make a really rapid sort of development experience.

That's what Wrangler Dev does. And Wrangler Tail, you know, like our first foray into observability where you're actually able to connect to a production worker and, you know, and get logs from it.

So like anything you stream to console.log will stream to your terminal. And it's a great place to get started.

That's like in my blog post, that's where I start, console.log, you know, I'll deploy it, I'll try some things out.
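What Steve is describing is about as small as a Worker gets: a handler that logs and responds, with the log line surfacing in your terminal when you run `wrangler tail` against the deployed Worker. A minimal sketch (the log message wording is mine):

```javascript
// A minimal Worker whose console.log output shows up in `wrangler tail`
// once the Worker is deployed and receiving traffic.
const worker = {
  fetch(request) {
    // This line streams to your terminal via `wrangler tail`.
    console.log(`handling ${request.method} ${new URL(request.url).pathname}`);
    return new Response("Hello World");
  },
};
```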

But, you know, as I sort of allude to in the blog, you know, as soon as you do anything more than hello world, things start to happen.

Imports are not what you expected, integrations don't work like you expected.

And that's when you need to start integrating some of our partner tools to help you, you know, observe, monitor, diagnose, debug, you know, everything to do with running a production application.

Yeah, sounds good.

I know that Rita, our PM for this, had called Wrangler Dev "edge on localhost," which was unique to us.

And we saw some other folks in the market trying to kind of take that branding as well as being a popular tool.

Because it is a unique feature that we have, being able to run that edge on localhost.

But as you said, while these enable some useful observability in your early stages of building out your application during development and testing, there's a lot more to be done.

And that's what we really are announcing today with the partnerships that you wrote about in your blog.

Before we get started on those ones, just a quick foundational question.

I know a lot of customers that I've talked to ask us about logging, and particularly with Wrangler Dev and Tail, those are for customers using Wrangler, and not all of our customers do use Wrangler.

So before we get started on the different partnerships, can you just tell us a little bit, like, are all of those going to be for customers who are using Wrangler?

Or what's the process going to be there for some of the ones we'll talk about?

Yeah, good question. And the good answer is no, Wrangler is not required for any of them.

Wrangler really is a very convenient CLI, like command line tool, for those who want to use it.

But under the covers, it's calling the Cloudflare API.

It's doing the same sort of things that you would do if you're deploying using Terraform, or if you've got a script that calls our API, or if you're using the Cloudflare dashboard UI to do your Workers development.

So no, no hard dependency on Wrangler for any of the partnerships today.

That's great. And that's definitely going to excite a lot of our customers who've been waiting for that, especially our enterprise level customers who don't often, like, get their hands dirty in some of the Wrangler world, but have other ways, like you mentioned.


Great, great. So why don't we get started a little bit? In particular, I know the two that you touched on early in your blog were around Sentry and New Relic.

So maybe we can get started a little bit with Sentry. And why don't you tell us about what they are, what they do, and how we'll be integrating with them?


I'm happy to do that. And I want to be respectful of our partners here, where I think they would all agree they excel at different things and would all probably acknowledge that one partner does one thing better, another partner does another thing better.

But really, they all do aim to solve observability challenges. It's like, what is my code doing in production?

And is it doing the right thing? Is it doing the wrong thing?

And if something went wrong, how can I debug it? There's quite a bit of crossover.

Amongst that. So if I talk about one partner doing something, it's not to say the others don't; it's just we've got to use some names at some point to do some examples.

So I think Sentry was the first one I mentioned in the blog.

And they do have a heritage of errors, of being the platform where if you send all your errors to Sentry, they'll do a good job of helping you understand where they came from and why they're happening and try to surface the context to help you understand what's going on.

And so let's start there. So that's what Sentry is, in my words.

I'm sure if we went to Sentry.io, we'd see something not terribly different.

But I think your question was sort of like, how would you get started, right?

How would you get started with Sentry and workers? Exactly, yep. So I'm going to share my screen.

Let me just bring up the right tab here. Okay.

Share screen. And I'm gonna take a wild risk and share my whole screen here because I need to show a few things.

Can you see that okay, Jen? Yes. Okay. So I am on npm.js.

And here we have an npm package called Toucan.js, written by a Cloudflare employee, Robert, who thinks on our front -end team.

And Cloudflare, we love dog fooding workers.

And we use Sentry. And so not hard to imagine that as our team started to use workers more and more, they also wanted observability of what was happening with those workers.

So we built Toucan.js. And it's a JavaScript module that allows you to integrate with Sentry from your worker's code.

So really, besides installing the module, this, let me zoom in a little here.

This is really all that's required to integrate your worker with Sentry.

When you sign up for Sentry, you get this DSN identifier.

That's what identifies your code. And so from here on in, you can use this Sentry object to do things like add a breadcrumb, which I love that terminology they use.

What were the things that happened leading up to some event?

So that if it goes wrong, you've got some clues. And then equally, you can use the most common calls like Sentry.captureException and captureMessage later on as well.

And so that's what the integration looks like. The nice thing here is that some of sort of integrations with observability platforms, they have some assumptions about like the architecture of your application.

Like whether it's stateful or stateless.

And generally, like workers are stateless until you integrate with some state service.

Something like we have some, we'll be announcing some in the future as well.

So there are some assumptions there. And what Toucan does a good job of is making sure that the assumptions are correct for workers.

And that is that each execution doesn't have any state, there's no reliance on global variables, and things start showing up in Sentry the right way.

So that's the code. Just as David was talking, I got started with just pasting in basically Robert's sample code.

And, you know, deployed it, executed a couple of times where it sort of went okay.

And then basically uncommented this error line and did it again.

So just to sort of make sure we can see what's happening here.

If I can find a part of the window I can grab. Uh-huh. So I've deployed this worker to internal hello.

So when I hit it, something went wrong.

So we can see we got this thrown error, we hit the catch block, we did Sentry.captureException, and we returned a 500 saying something went wrong.

So cool.
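The pattern in the demo, catch the error, capture it, return a 500, can be sketched as a small wrapper. In the real code this would use a Toucan client from the toucan-js package (constructed with your DSN); here `sentry` stands in for any object with a `captureException` method, so the control flow is clear without the package:

```javascript
// Wrap a Worker handler so exceptions are reported before returning a 500.
// `sentry` is a stand-in for a Toucan/Sentry client; only captureException
// is assumed here. Real code: new Toucan({ dsn, ... }) from toucan-js.
function withErrorReporting(handler, sentry) {
  return async (request) => {
    try {
      return await handler(request);
    } catch (err) {
      sentry.captureException(err); // report the exception to Sentry
      return new Response("Something went wrong", { status: 500 });
    }
  };
}
```

In the demo, uncommenting a `throw new Error(...)` inside the handler takes this error path, and the event shows up grouped with its stack trace in Sentry's issue list.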

Like in production, how would I go about debugging that? And I would go to Sentry, find my project, which I think I'm already here.

And so I've executed this three times now, and there were three errors.

And so let's dig in. Like what happened?

So you can see these were nicely grouped. So we can see that these were all the same event because they all had the same error, same stack trace.

Like it's clear that this is a problem.

And this is a thing that Sentry does well, because if you have one issue happening a thousand times and another happening once, you should probably focus on the thousand.

And so we get some nice help here. We get line numbers.

Pretty sure there's source map support, too. And actually this looks pretty right, even straight out of the box.

Well, let me check that. Is that right?

We threw error on line 24. So maybe not. There is source map support that I haven't sort of set up.

But here's the nice thing. We saw this sort of like breadcrumb, right?

We had this breadcrumb saying that we're about to do something. That was that thing that failed.

The exception was thrown. And as you build out bigger applications, you add more breadcrumbs, you have more events.

And so the more you add, the more valuable they become.

But literally I set this up in the few minutes before our demo.

It was NPM install, import, copy and paste the snippet, add in my ID here from Sentry.

Now I'll go and change it so people around the world don't start sending error logs to my Sentry deployment.

And there we go. We're integrated.

So that's how easy it is. I did publish this with Wrangler. But you could have typed this code into the Cloudflare UI and it would have worked the same way.

Sounds great. Thanks for getting that up. And you definitely did so quickly.

So thank you for that. So as far as these log and information you got. So the users will go directly into the Sentry then to see what all of the information is.

Yeah, that's right. Or to our other partners if they integrate with those partners.

So yeah, the integration is we make it easy to emit those events, like to emit a stream of events from workers to the observability platform.

And then the observability platform is where they've built up all this expertise and how to identify, present, sort, filter, all that stuff to really diagnose a real production application.

Gotcha, gotcha. And one of the other things you mentioned I just wanted to quickly touch on is you were talking about how Sentry helps with managing state.

And you alluded to some of what we're doing as well on that side, which I imagine is Durable Objects.

For anyone who's not familiar with that on our team, Durable Objects is sort of our newest distributed data platform for strongly consistent storage and managing state on the edge.

So do you want to talk for a second on, do users, I guess they don't even need to be using Durable Objects or anything like that to be integrated with Sentry?

No, definitely not.

There's no dependency on any Cloudflare feature for these integrations. My point there was more like sometimes if you take a library that wasn't built with serverless in mind, it can have assumptions that break when you bring it to the serverless context.

And the nice thing with the Sentry integration is it has the assumptions of serverless built in.

I think that goes for all of the integrations here, but just if anyone's ever sort of experienced that, that's what I was getting at.

Gotcha, great. Thanks for walking us through that a little bit. And I know we have several other ones I definitely want to make sure we touch on and can deep dive for a few of them.

So the next one that you had mentioned in your blog was about New Relic.

And I was wondering if you can tell us a little bit about New Relic, maybe give us a high level overview like you did with Sentry, and then share a little bit on the story there with our integration with them.

Yeah, so New Relic, also an observability platform, like I mentioned with all of that crossover.

I certainly came to know New Relic as targeted at larger customers.

I think they probably now also sort of, whereas Cloudflare started down market and moves up market, I think New Relic might do the opposite.

But very widely used, very well -known platform.

And the good thing about it is, as I sort of put in my blog as well, that they have HTTP endpoints to send these logs to.

And so while we haven't developed an NPM package in the same way we did for Sentry, I put the code snippet there in the blog.

It's really quite a simple matter to send logs and monitoring information to New Relic using Cloudflare Workers.
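Since it's just HTTP, the shape of that call is a POST with a JSON array of log events. The endpoint (`log-api.newrelic.com/log/v1`) and the `Api-Key` header follow New Relic's Log API documentation as I understand it; treat them as assumptions and check the current docs, and note the service name and key below are placeholders:

```javascript
// Build the JSON body for a New Relic Log API event.
// The "service" attribute name and "my-worker" value are placeholders.
function newRelicLogBody(message, attributes = {}) {
  return JSON.stringify([
    {
      timestamp: Date.now(), // epoch milliseconds
      message,
      attributes: { service: "my-worker", ...attributes },
    },
  ]);
}

// Inside a Worker, fire-and-forget the log without delaying the response:
//   event.waitUntil(fetch("https://log-api.newrelic.com/log/v1", {
//     method: "POST",
//     headers: { "Api-Key": NEW_RELIC_LICENSE_KEY, "Content-Type": "application/json" },
//     body: newRelicLogBody("worker handled request", { path: url.pathname }),
//   }));
```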

And again, I think in terms of the differences, I think New Relic really became, they almost popularized the term of application monitoring, APM.

So that's probably where their DNA is.

But again, another very widely used platform to observe, monitor, diagnose, and understand everything that's happening in your application, be it from the browser to your workers or to your internal systems.

That's great. And one thing that's interesting about New Relic is they are a public reference and a customer of Cloudflare and of Workers; they run certain pieces of their application on Workers.

So it's definitely interesting that we're getting sort of the meta story here a little bit where you can use workers and then you can connect that with New Relic and New Relic is running on workers as well.

So that's an interesting story there.

And one thing that you mentioned, I think both with Sentry and New Relic is that Cloudflare had been using them, we'd been using them internally, right?

You said- When I was, so yeah, before this role, I was a product manager and I know at least my engineering team at the time, yeah, we had both Sentry and New Relic internally.

Yeah, which is really interesting, sort of a thread that I've been talking about with folks on these product discussions throughout developer week, or at least yesterday, is that because Cloudflare, we've built all of this up ourselves, we know kind of where the gaps are, like what tools users are needing, which ones are working with workers, and we dog food with workers all of the time.

So we've been able to see really what partnerships work the most for us and make that an even easier experience for our customers so they can kind of learn from everything that we've done in building up our platform and dog fooding.

Yeah, I agree.

All right. So I think that touches the Sentry and New Relic one. Was there anything else you wanted to add on New Relic?

Nope. Sounds good. So in addition, I know you also mentioned that we had some existing partnerships and some expansion as well with some of the partners such as Datadog, Sumo Logic, and Splunk.

So maybe you can walk us through a little bit on each of those and kind of tell us what an overview might be for them and how we were using them in the past and then what's new with our partnerships there.

Yeah, happy to. So Sumo, New Relic, and Datadog were all, are all, Cloudflare analytics partners.

And so this was born out of the fact that when Cloudflare is the first sort of, what do you call it, like the first sort of, well, let me rephrase that another way.

When you have Cloudflare in front of your web infrastructure and web requests hit us first, we generate logs, right?

Each of those hits is a log line. And then some of them are filled from cache.

Some of them go to the origin. And that's a lot of data, a lot of very valuable data that can help someone understand how their application is performing in production.

So not the sort of internal application logs, but the actual like HTTP logs.

So those partnerships have been in place for a long time.

They're integrated with the LogPush product. So a couple of things have happened recently that are worth calling out.

And let me share a window quickly.

So, these are the types of things that we push via LogPush.

So LogPush, by the way, is the product where you can set up a destination for your Cloudflare logs.

And that could be something like, generally, an S3 bucket or Azure Blob Storage, something like that, or a higher-level service, like one of our analytics partners, where they can directly ingest our logs into their platform.

And so that's what we have with Datadog, Sumo Logic, Splunk, and some others.
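To make that concrete, a LogPush job is essentially a destination plus a dataset, created through the Cloudflare API. Here is a minimal sketch in JavaScript. The endpoint and field names follow the public Logpush API, but the zone ID, token, and destination value are placeholders, not real credentials:

```javascript
// Hedged sketch: create a Logpush job via the Cloudflare v4 API.
// ZONE_ID, API_TOKEN, and the destination URL below are placeholders.

function buildLogpushJob(destinationConf) {
  // Minimal job body: where to send logs and which dataset/fields to include.
  return {
    name: "http-requests-to-storage",
    dataset: "http_requests",
    destination_conf: destinationConf, // e.g. an S3 bucket or a partner's ingest URL
    logpull_options: "fields=ClientIP,EdgeResponseStatus,RayID&timestamps=rfc3339",
    enabled: true,
  };
}

async function createLogpushJob(zoneId, apiToken, destinationConf) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/logpush/jobs`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(buildLogpushJob(destinationConf)),
    }
  );
  return res.json();
}
```

With a partner like Datadog or Sumo Logic, the `destination_conf` would point at their ingest URL instead of a storage bucket.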

So traditionally, they were HTTP logs. It's like, what was the response code?

What was the user agent? What was the path? All important information.

But as folks started using workers and the invocation of those workers becomes a critical part of how their application functions, we've expanded those logs to include things like, what was the result of the worker?

What's the result of the worker if it made a sub-request?

If it went and called out to some other APIs in a sort of an API gateway fashion, what was the result of that?

How many did it make? How long did it run for? And so this allows customers to do things like: if they made a code change and they see that worker CPU time has gone up, they're able to review and say, after this release, we actually started using a whole lot more CPU.

Was that expected? Was that intentional? And so it's really just surfacing more workers execution information, like runtime execution information, into those logs.

And so that was already there today. And that's just bringing workers execution to the same level as HTTP logs.

The thing that we were talking about in the blog is that all of these partners also have (and I'll stop sharing for a second here) direct HTTP endpoints to send workers logs to.

So sort of what I showed with Sentry with those breadcrumbs. If you already use one of our partners for your HTTP logs, or if you use them as a full-fledged SIEM, where you have a security team doing detection and response, then it probably makes sense that you also send your application logs.

So the internal things to those same systems, so you can start correlating things.

It's like, oh, we did this release.

Oh, worker CPU time went up. Oh, what was happening in that worker when this started happening?

And you'll see your application logs as well. And that also allows the detection and response teams to be able to do interesting analysis on not just the invocation, but also what's happening internally.
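As a rough sketch of what that direct-to-endpoint logging looks like from inside a worker: format a structured log event and post it to the partner's intake URL without blocking the response. The intake URL, token binding, and field names here are placeholders, not any particular partner's real API:

```javascript
// Hedged sketch: ship a worker's internal application logs to an
// observability partner's HTTP intake, alongside the LogPush HTTP logs.
// env.LOG_INTAKE_URL and env.LOG_INTAKE_TOKEN are placeholder bindings.

function formatLogEvent(level, message, meta = {}) {
  // One structured log line; partners generally accept arbitrary JSON fields.
  return {
    timestamp: new Date().toISOString(),
    level,
    message,
    service: "my-worker", // placeholder service name
    ...meta,
  };
}

const worker = {
  async fetch(request, env, ctx) {
    const event = formatLogEvent("info", "handling request", {
      path: new URL(request.url).pathname,
    });

    // waitUntil lets the log POST finish after the response is returned,
    // so shipping logs does not add latency to the request itself.
    ctx.waitUntil(
      fetch(env.LOG_INTAKE_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${env.LOG_INTAKE_TOKEN}`,
        },
        body: JSON.stringify(event),
      })
    );

    return new Response("ok");
  },
};
```

Because the same fields (like a release tag or a Ray ID) can go into both streams, this is what makes the "CPU went up after this release, what was the worker doing?" correlation possible.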

And we had a screenshot in there in the blog, in Datadog, setting up an alert just like that.

So yeah, it's really about no longer having application logs over here, HTTP logs there, and worker execution somewhere else.

The fact that you can bring all of it under one roof and do really sort of complicated and sophisticated analysis, it's just one more thing that helps bring workers to the forefront of not just writing code, but like operating and managing large scale application infrastructure in production.

Sounds great. Yeah. I know we will have a lot of users who are excited to be integrating these partners with workers in particular, especially with some of the features that you mentioned.

So just to touch on one of the last partners that you'd mentioned was Honeycomb and thought maybe you can give us an overview of Honeycomb and how it works.

One tidbit, if anyone has been tuning in during the week, just right before this session, we had Charity Majors.

She was interviewed about observability. She's the Honeycomb CEO. And if anyone wants to tune in, you can catch that session.

Again, it'll be airing this evening after 5 p.m.

So you can check the Cloudflare TV schedule if you want to hear as well directly from her as well.

But Steve, why don't you give us a little overview on what Honeycomb is and how we're starting to work with them?

Okay. Jen, only because I have her quote in front of me that I'll correct you that it says Charity Majors, Honeycomb, CTO and co-founder.

Okay. CTO and co-founder. Great. The co-founder is always what confuses me when you're trying to remember someone's role.

No. So Charity, I'm sure, did a much better job than I would at describing Honeycomb.

Because when I first came to it, and it was Erwin on our team who introduced me to it, another product manager on the workers team, he was like, Honeycomb is different.

I was like, okay, how is it different? And he waved his arms a lot and was very excited.

And I was like, oh, that sounds cool. And it does take you a little while to get your head around it.

And you really have to go use it to understand how it's different.

But I think the way I would describe how they approach things differently, and it maybe comes from the fact that Charity was at Facebook, I think, before she started Honeycomb.

And obviously, Facebook knows a thing or two about scale.

And they know a thing or two about segmenting users and understanding how individual users or cohorts experience that platform.

And that's the thing with Honeycomb. So similar to all of our partners, you can just stream a bunch of logs to Honeycomb.

They make it easy. And we do have an NPM module for that, which I'll bring up in a second if we have time.

But what Honeycomb does differently is they make it really easy to understand how individual groups of users are experiencing your application.

Like, yes, if something's broken, it will show you.

And they also have ways to drill down and understand things.

But I think this is the bubble up feature that they talk about, where the platform will actually look at your data for you and look at all of the dimensions of the data that you send.

And by the way, they have unlimited dimensions.

I don't know if it's unlimited, but that's another thing. You can have as many ways to slice and dice your data as you like.

If there's some attribute of a user or a transaction that makes sense, you can send that.

And they make it really easy to slice and dice and to build queries and all of that on it.

But then in the background, they do smart things too, like go and do some of that for you and say, oh, by the way, I don't know if you saw it, but if you filter by this and look at this dimension and this time period, you've got a problem.

And so instead of trying to figure out yourself, and they actually do have good features too to help you gain some of these insights, but they'll also bring things to your attention.

I don't know if this is a true example, but I think of it as users over 50 in Singapore have a hard time after 4 PM, you should probably go look into that.

And so they've taken a different type of approach to surfacing observability information.

I think that's the best I could do in describing it. That sounds good.

And also I'm pretty sure it's Honeycomb. I know we've had a lot of partners that could be mixing one of these up, but doesn't Honeycomb also show you like heat maps, like where there's certain issues that you can kind of focus in on?

That sounds familiar.

I wouldn't say that the others don't have heat maps. I don't know offhand, but yeah, they've certainly got some cool visualizations.

And I'll show you just one thing quickly here.

Let's see if I can zoom in again. So when you integrate the NPM module for Honeycomb, which is in beta, I should point out, but still definitely useful.

It has knowledge of sub-requests. So often when you execute a worker, you will just execute and then return a response, but very often you want to do something like a security check, or maybe transform something.

You then want to call some other API or multiple APIs.

And so those are called sub requests. And yeah, the integration has sort of knowledge of those.

So you can sort of see like, okay, this was the total execution time of a worker, but actually it was broken down into these multiple sub requests.

And so again, just another way to help you get like insights into what your worker's code is actually doing to help you operate it in production.
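A minimal sketch of the idea (the function names here are illustrative, not the actual module's API): wrap each outbound fetch so its timing is recorded as a span, which is roughly how total execution time gets broken down per sub-request:

```javascript
// Hedged sketch of sub-request tracing: time each outbound fetch from a
// worker and record it as a span, roughly how an integration like
// Honeycomb's breaks total execution time into per-sub-request pieces.
// buildSpan and tracedFetch are illustrative names, not a real module API.

function buildSpan(name, startMs, endMs) {
  // One timed unit of work, ready to ship to an observability backend.
  return { name, start_ms: startMs, duration_ms: endMs - startMs };
}

async function tracedFetch(url, options, spans) {
  const start = Date.now();
  const res = await fetch(url, options);
  // Label the span by destination so the breakdown reads naturally.
  spans.push(buildSpan(`subrequest:${new URL(url).hostname}`, start, Date.now()));
  return res;
}
```

Inside a worker, you would call `tracedFetch` for each upstream API and send the collected `spans` array along with your event, so a single worker invocation shows up as a total time plus its component sub-requests.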

That sounds great.

That sounds really, really awesome. So, just kind of recapping: all of these are different tools that we're now enabling for workers.

Some have their specific areas of nuance that kind of make them different from the other, but they're all tools that we're now making very simple and easy to integrate.

And users will be able to go directly. We'll have a clear, easy integration path for users to go directly into those tools, whether it's Honeycomb or New Relic, they can go and see the logs directly on the partner site.


That's a fair summary. Great. This is exciting. I know we'll have a lot of users who are going to be testing this out.

And so you mentioned the Honeycomb module, still in beta.

Is that the case for some of these other ones or these other ones are not considered beta?

So the other one that had an integration on NPM was Sentry, the Toucan module; that's not marked as beta.

So that can be considered ready for use.

The others don't have specifically built integrations, but we call out in the blog the REST endpoints that you need to call to start ingesting logs to those platforms from within workers.

That sounds great. So we've had a couple of questions come in, but for David and Steve as well, folks who are listening in, feel free to send some of these questions.

We'll go through answering them.

I'll give a quick overview after that on what we announced with NVIDIA. But just to respond to some of these questions.

So we had one coming in, David, for you.

They first of all, congratulated us on the awesome announcements for the week.

And then they asked, so particularly regarding geolocation information now being surfaced and GDPR, CCPA, et cetera, do we need to show consent items to agree to, on page or in app, for our visitors in regards to location data?

Is it all considered personally identifiable information, PII?

If so, where is their location data stored in case of GDPR data export requests coming in?

David, did you get the question here?

Is there some info you have or do we want to point them to some docs?

Yeah. I mean, that's a great question. Very clear. I don't think I'm qualified to answer that question there, but it seems like we don't really store that data.

It's kind of like running per request on the server side. But yeah, Jen, do you have any better answer?
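David's point, that the geolocation fields are computed per request rather than stored, shows up in how you use them: they arrive on the `request.cf` object and only live for the duration of that request. A minimal sketch, assuming the documented `country` and `city` fields on `request.cf`:

```javascript
// Hedged sketch: geolocation data rides in on request.cf for each request;
// nothing is stored by the worker unless you explicitly persist it.

function describeVisitor(cf = {}) {
  // country and city are fields Cloudflare surfaces on request.cf;
  // they may be missing, so fall back gracefully.
  const country = cf.country || "unknown";
  const city = cf.city || "somewhere";
  return `Hello, visitor from ${city}, ${country}!`;
}

const worker = {
  async fetch(request) {
    return new Response(describeVisitor(request.cf));
  },
};
```

Whether using those fields triggers consent or PII obligations is exactly the jurisdiction-specific question discussed below; the sketch only shows the mechanics.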

I'm not really sure where to point that question to. I would just add in slightly there.

I'm glad the developers asking about that. It's awesome that developers now have GDPR and privacy front of mind.

But just dabbling in this area a little bit, requirements vary from jurisdiction to jurisdiction and there's no better way than asking your lawyer.

And that sucks if you're an indie developer, you've got to just do your research.

But if you're in a large company, you tend to have someone who's qualified to answer that because I'm with you.

I struggle to keep track of it and answer definitively. Yeah. And we did have one of our customers, I'll have to look at which one, but they had a public story with us (that was the one I was mentioning) where they were able to set up a worker based on geolocation for GDPR and the different requirements.

But they probably had to speak to their lawyer and look at some of the nuances for each place before they were even able to set up their workers.

So I guess that's our short answer is there's probably a little more to look into on that side than maybe we can fully say today on this call, but it is a good question and definitely worth considering.

We do have another question coming in, Steve.

So we have a lot of users who've dabbled in things like AWS Lambda and some of our different competitors who have observability tools.

How do you think these tools that we have today and that we're announcing with our partnerships helps workers in comparison to some of the other serverless platforms that are out there?

Great question. I think what it does is it gives developers the opportunity to choose best of breed, right?

You can go look at, try Honeycomb, try Sentry, try Datadog, try New Relic, try Splunk, try Sumo Logic, try them out, right?

Because I show in the blog how easy it is to integrate with all of them. You could, in less than a day, integrate with all of them, send your logs from a test worker to each one, and then decide which one works best for you, and then invest in building out with that partner.

The problem with the clouds, and one of the reasons they're very successful businesses, is they do a good job of vertical integration, right?

They give you everything, but they don't give you the best of everything.

And they're often quite open about this. It's like they give you a convenient service so that you don't have to leave that platform, but it's rarely the best.

And so pick a cloud, it will have some level of observability for its serverless implementation.

But I will pretty well guarantee you it won't be as good as all of our partners at their specific things.

And so we're not taking a bet, and we're not saying this is the one.

We're saying, here are the market leaders in observability, and here's how easy it is to integrate with them from Cloudflare workers.

Choose what works for you. And yeah, I think that's one of the things of Cloudflare in general, right?

When you have us in front of your workloads, you're able to decide how to distribute those requests.

Maybe it does all need to go to the one cloud.

Maybe you have hybrid cloud. Maybe you have multi-cloud, like whatever it is.

So this is just another example of that. We give developers the chance to choose the best of breed solution that works for them.

Yeah, that sounds great.

I know we've had this question come up a lot. So thanks for answering that.

Yeah, we'll be interested to see folks who are using different partners.

If you want to tweet at us or get into the Discord community, we'd love to hear how that experience is going, which ones you're using, and how that's working for you.

So we definitely would love to stay in the loop as you start using these different partners.

So just in the last few minutes here, I wanted to talk a little bit about the announcement we made today with NVIDIA.

I'll just give a quick summary there, see if any questions pop up, and then we'll go to just a couple closing comments and chats with David and Steve here.

So one of the last announcements we had that went out at 10 AM Pacific time today was our partnership with NVIDIA.

And so that was announcing that we will be working with them to bring AI to the edge with NVIDIA GPUs and the Workers Platform.

So today, people are probably familiar with this, but machine learning models are often deployed on expensive centralized servers or cloud services that are limited to a handful of regions around the world.

And NVIDIA and Cloudflare want to change that. And so that's what we announced today.

And it's also a developer-focused week for NVIDIA. So it worked out really well that we were able to announce this in alignment with their week and ours.

So the idea is the combination of NVIDIA accelerated computing technology and Cloudflare's edge network will enable a few things for our developers.

So one is that developers will be able to use familiar tools such as TensorFlow to build and test machine learning models.

They'll also be able to leverage NVIDIA's massive platform and Cloudflare's to deploy applications that use pre-trained or custom machine learning models on the edge.

So that will give really high performance to these AI and machine learning use cases.

And we're really excited to see where this goes with users and using NVIDIA's GPUs.

So this isn't something where we have something tangible today, but we do encourage folks to check out our blog and the press release on it.

The blog is called Bringing AI to the Edge with NVIDIA GPUs.

And if you go through that blog, there's a spot at the end where you can sign up for more information to get notified of when we have workers AI fully available.

But this is an announcement that we are working with them, and we'll be able to share even more information in the future. In alignment with what David shared and what Steve shared, we're making the workers platform more and more powerful.

And so with NVIDIA, we want to take that a step further and really enable some of those applications that run really well with GPUs, and in addition, you'll obviously have the geolocation, customization, and observability that David and Steve shared.

So we're building out this platform to make it something more and more powerful every day.

So I just wanted to share a little bit about that announcement because that was the third one that we had go out today.

So if folks have any questions on that, you can also feel free to drop any of those questions in there.

Steve or David, I don't know if you had a chance to look at that announcement or if you had any questions off the top of your head on that as well, or any comments on like what this might mean for the workers platform.

Surprised you didn't mention Portuguese tarts, Jen. I know, that's true.

So the reason I mentioned that, I did see the blog. Yeah, super exciting.

I think it is, as you said, it's an early announcement with a really interesting, just sort of, well, a fun demo of using what's it even called, like image identification.

There's probably a better ML term for it than that.

But yes, to identify whether something's a Portuguese tart or not, which probably reminds folks that our CTO is based in Lisbon.

Yep. And I was wondering, and maybe if he's watching or someone on his team watching, was that also a play on the Silicon Valley episode where they did the like, is it pizza or not?

Or is it a hot dog or not? Yes, hot dog. Hot dog. But it didn't classify anything else, only whether it was a hot dog or not, which I remembered.

So I was wondering if it was a play on that.

No, super exciting. It shows where this platform can go.

And I think this idea of being able to run algorithms that are best suited for GPUs on the edge, rather than having to backhaul that request or that traffic to a sort of a centralized cloud region, which could be far away from the user is super exciting.

So yeah, can't wait to see where that goes. What do you think, David?

Yeah, I think it'll just make workers an even more powerful tool to use.

Yeah, I think there'll be a lot more fun examples we can share once we have that.

Yeah, yeah, I'm definitely excited to see. And traditionally, workers have run on CPUs.

And folks can definitely stay tuned for our announcements during the rest of the week, we'll be talking about even more powerful ways that we can run compute on workers.

So even though this announcement with the GPUs and NVIDIA is definitely forward-looking, that doesn't mean that this week we won't announce some real tools and ways that you can build out these sorts of complex use cases using workers. That's a message we wanted to start during Serverless Week, and we're bringing it out here with announcements like NVIDIA and some other ones we'll have this week: serverless doesn't have to be just for small edge use cases or running trivial functions.

It can be a place where you build all of your applications, and there are some serious benefits you can get from that, especially on an edge serverless platform like ours.

So we only have a few minutes here.

I just want to give a second to Steve and David. If you want to give us any closing thoughts on the announcements you had, or anything that you're really excited about with workers or that developers are doing in general, be it with workers or anything else exciting you're seeing in the space, please share some closing thoughts or comments.

Okay, all right.

I gave you the pause, David. Now I'm going to walk into it. No, just thanks to our partners.

I think we're on a similar mission to make it easier and better and make performance, so production workloads to run better and to delight users and really excited to work with them and thanks for their teams getting on board with the announcement and just everything behind the scenes to make that happen and looking forward to growing the partnerships over time and looking forward to the workers team growing the platform over time and it's getting more and more exciting.

Yeah, I think for me it's just similar thing, just like get involved, get into the Discord and share your feedback.

I think we have lots of local users and it really helps us build a better product for all developers, so I think we've been really appreciative of all the partnerships and of course all the developers we're working with.

Great, thanks so much. Yeah, so thanks to our partners as well and we have just about 30 seconds here, so I'll just wrap us up.

Thanks everyone for tuning in.

If you didn't make it, we had a whole series on Monday, yesterday and Tuesday this morning.

Those will be replayed this evening at 5 p.m. Pacific time, so if you missed any of those, definitely feel free to tune in. And stay active on our blog, our Twitter account, and our Discord to be informed of everything we have coming the rest of the week on workers and so much more on our developer platform.

So thanks everyone tuning in and thanks Steve and David for a great session.

See you later. Thanks for listening, it was great.
