Cloudflare TV

📺 CFTV Anniversary: Customer Cloudversations

Presented by Kayla Geigerman, Jacob Loveless
Originally aired on 

In this episode of Customer Cloudversations, Kayla Geigerman from the Cloudflare Customer Advocacy Team will meet with Jacob Loveless, CEO of Edgemesh, to discuss their Workers use case.

English
CFTV Anniversary
Interview

Transcript (Beta)

Hi everyone, thank you for joining our Cloudversations segment today and tuning into Anniversary Week here on Cloudflare TV.

If this is your first time tuning into Customer Cloudversations, this is a regular segment where we like to shine a spotlight on our customers, learn about what they do, how they do it, discuss industry-related best practices, and other topics related to their industry.

Today we are joined by Jake Loveless from Edgemesh and he is going to talk specifically about Edgemesh's use of Cloudflare Workers.

So Jake, thank you so much for joining us today.

Thanks for having me.

Do you want to tell us a little bit about yourself and what you do at Edgemesh?

Yeah, sure. So my name is Jake Loveless. I'm one of the co-founders at Edgemesh and we started the company about five years ago.

Before that, I was working on Wall Street.

I ran high-frequency trading for a company called Cantor Fitzgerald.

So speed is very much kind of my career path over the last 20 years or so.

And me and the other two co-founders, Randy Lebeau and Eugene Rocklin, left Wall Street and started Edgemesh to see if we could, instead of making, you know, trades go faster, make websites go faster.

So it's been a fun five years.

Yeah. Wall Street, that's really interesting actually. And I think you might be the first customer I've spoken to that's made that sort of transition.

What's the biggest difference in your day-to-day life between Wall Street and now?

You know, I mean, it's certainly just as hard. Website optimization, kind of making sites go faster, is a lot of big data.

And obviously speed is all about efficiency and finding moments of optimization.

But I'd say the biggest change really is, you know, we would trade all over the world.

So we had teams in New York, Chicago, Tokyo, London, Sao Paulo, which was great, but also meant that my day was all over the world.

So, you know, now it's a little, I wouldn't say work-life balance, but there is life in the work.

So I get back a few hours a day to just kind of take some down time, which I really didn't have on Wall Street.

It was probably an 80 hour a week job for 15 years.

So. 80 hours a week? 80 hours a week, for sure.

All the time. That's a good chunk of my TV watching budget. So I can't grasp that.

There was no TV. When I left, I took a month off and I literally just sat and watched TV and ordered pizza and it was glorious.

It was everything I wanted it to be.

That sounds like a good time. I could talk about TV the whole time we're here, but that's not why we're here.

So why don't you tell me a little bit about what do you guys do?

Yeah. I mean, you know, in the simplest terms, we do speed as a service.

So we make websites faster and we tackled that problem a little differently, I think.

So if you, you know, you think about the problem of making a website faster, which obviously is near and dear to Cloudflare's heart, the way that's traditionally been tackled is with, you know, server-side optimization.

So the idea that we're going to make the website faster by effectively delivering it more efficiently or optimizing how we deliver it.

And that stuff's great.

And there's been tons of advancement in that, you know, over whatever, last 20 something years, probably since, you know, arguably kind of Akamai and Limelight came out with the CDN concept in the late nineties.

But what we do is client-side optimization.

So our software focuses on making the website faster by making the browser smarter.

So what we want to do is optimize, just like you want to optimize cache hit at the CDN edge, right?

You want to serve as many objects as you possibly can from the edge of the network.

We want to focus on cache hit in the browser.

So we want to actually try to minimize the number of network requests that the browser needs to make in order to render the page.

And you can do that.

We use functionality of the browser called the service worker and the Cache API.

We can create a cache for each website. So you think about the browser cache, but it's kind of shared across all the websites that you visit, right?

It's really hot and it's really small and very competitive.

And here you're kind of getting your own little private cache that's just for your website.

And then once the site loads, which is kind of a neat moment of opportunity, you know, user goes to website, website finishes loading, user now reads website.

Well, this is an opportunity to intelligently prefetch things that they might need later.

And lots of people have tried to tackle that.

And we kind of, we were lucky that we were able to crack that problem by looking at websites, not as collections of pages, not trying to guess where the person would go next, because that's really hard.

But instead as collections of assets, because most, you know, kind of subpages actually share a lot of assets.

They share fonts, they share JavaScript, they share these things.

So Edgemesh on the client side is prefetching assets that they don't yet have in their browser cache and pulling them in.

So we're kind of effectively hiding that latency, which, you know, is really great if you're like, you know, a store where a customer is clicking through products, and it's kind of this really enjoyable experience, where all of a sudden, the longer you're on the site, the faster it gets.
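(The prefetching idea can be sketched roughly like this — fetch a manifest of assets shared across pages and warm the site's cache with whatever the browser doesn't yet have. The manifest URL, cache name, and JSON shape are illustrative assumptions, not Edgemesh's API.)

```javascript
// Hypothetical sketch: after the page finishes loading, warm the site's
// private cache with shared assets the browser doesn't yet have.

// Pure helper: given the full asset list and what's already cached,
// return only the URLs still worth prefetching.
function assetsToPrefetch(allAssets, cachedUrls) {
  const cached = new Set(cachedUrls);
  return allAssets.filter((url) => !cached.has(url));
}

async function prefetchSharedAssets(manifestUrl) {
  const cache = await caches.open('site-cache-v1');
  const manifest = await (await fetch(manifestUrl)).json();
  const have = (await cache.keys()).map((req) => req.url);
  // Pull assets in one at a time so prefetching never competes with
  // requests the page actually needs right now.
  for (const url of assetsToPrefetch(manifest.assets, have)) {
    await cache.add(url);
  }
}
```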

And so, you know, that's what we do today. And that software is all delivered from the edge of Cloudflare.

And, you know, just because we're client side, people are like, oh, you don't have, you know, this huge server infrastructure, and we don't.

But we've got this huge distributed software, because our software effectively runs on every visitor of every customer of ours.

So, you know, as we sign a customer with, you know, 100 million page views a month, well, guess what, we just took on 100 million new page views.

So. That's really cool. Yeah. So when you, as a co-founder of a company, that means that one day you and the other guys you're working with decided that this is something that you needed to make, and that this was a gap in the market.

How did you figure that out? Yeah, I mean, it really, Edgemesh was one of many ideas.

And, you know, to people who are looking to start companies, I would give a handful of pieces of advice.

First one is, definitely pick a problem that you enjoy working on.

Because however long you think it's going to take, it's going to take much longer than that.

And then the other thing is, if it's possible, and it's not always possible, you want to find a niche. Especially in software, it's just, there's software to do everything.

And yeah, I know, it's hard to find the thing that nobody's done.

But if you can find that niche, and kind of work that niche and become an expert in that niche, it's a lot easier than going out and trying to compete, you know, with established players, because never underestimate a customer's reluctance to change systems.

And then, you know, the last one is, you want to pick something that delivers value.

We don't sell software, people don't buy software, people buy value.

Website speed is great, because the value is very visceral, right?

Faster websites convert, they have higher conversion rates, they have lower bounce rates, you know, all these great things that come with speed.

And you want to make sure that you're kind of delivering that value and that value is explicit.

So don't just build things because you can build them; build things because they deliver value.

You don't want to be in the software-as-a-service business, you want to be in the value-as-a-service business.

I like that. I like that saying. So specifically for Edgemesh, what are your top IT priorities?

Yeah, I mean, we're a performance company. So like we are, I mean, I am the Charlie Sheen of web performance: no matter what, performance.

So I mean, we are constantly focused on, you know, how do we deliver things faster?

What moments of kind of optimization opportunity are there? And of course, in the web, it's really tricky, because, you know, you think about it as a client server relationship.

But there's many clients, each browser is kind of like its own thing.

So you have to like, really focus on each browser. And then of course, the web's always changing, right?

It used to be we would just deliver, you know, HTML, that was wonderful.

And that was super simple. And then JavaScript came on the scene.

And, you know, JavaScript's really complicated. And now, you know, we're using JavaScript to make virtual doms and all kinds of stuff.

And all of a sudden, now the bottleneck isn't on delivering the JavaScript, right, sending the text over the wire.

But you know, once you receive that JavaScript, you need to parse it, you know, turn it into effectively bytecode for the browser.

So the browser can run those instructions, right, you need to take the text of the code and turn it into executable code.

And that parsing overhead can be pretty severe.

So, you know, Cloudflare's been doing a lot of work on WebAssembly. And like, we're looking at a lot more stuff that we can do in WebAssembly.

But also, once you kind of, when you're on the client side, you have an interesting opportunity in that the bytecode, the code that's been compiled for that browser, is completely safe to use again.

So we can store it in the browser, and we never have to pay that parsing penalty again, right?

You can't deliver bytecode to each browser, because, you know, my phone is on a different CPU architecture than my laptop.

So the bytecode is going to be different. So we're really always focusing on that stuff.
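(The compile-once point can be illustrated with WebAssembly, where the compiled artifact is an explicit object you can hold onto and reuse — this toy example just compiles the smallest valid module; it is not Edgemesh's caching scheme.)

```javascript
// Toy illustration of "pay the compile cost once": these 8 bytes are the
// smallest valid WebAssembly module (the "\0asm" magic number plus version).
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Compiling turns the portable bytes into engine-specific code...
const mod = new WebAssembly.Module(bytes);

// ...and every instantiation after that reuses the compiled module,
// so the parse/compile penalty is paid exactly once.
const a = new WebAssembly.Instance(mod);
const b = new WebAssembly.Instance(mod);
```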

And of course, I mean, we're, we want to be, you know, we're delivering software to clients, you're dependent on third parties, I think, you know, a recent outage that happened yesterday, I think kind of reinforced that it's important that your third parties have really robust infrastructure.

And, you know, just because adding Edgemesh is just adding one line of code, you're still inviting somebody into your house, right?

This is, you need to make sure that that third party dependency is dependable, that it can be delivered globally.

So we're always doing a lot of work to like, you know, make sure that, you know, if the client's in California, is it in Cloudflare Edge in California?

Or is it, you know, getting routed somewhere else?

Are we serving things up from cache correctly? Cache on the Cloudflare side, and things like that.

So those are, I mean, we're always looking at things from a performance standpoint.

Okay. That makes a lot of sense. And, you know, speaking of, you know, being dependable, what kind of customers depend on you guys?

And where are they located? You mentioned California, but I assume elsewhere as well.

Yeah, our kind of bread and butter, our core customer base is eCommerce.

So people who sell things online, largely because the benefits of speed are so obvious for an online store.

So I mean, I like to say that, you know, speed is an enhancing drug, meaning it takes an existing website and kind of elevates all these things.

So you know, your conversion rate, conversion rate is probably the biggest one for eCommerce.

Because nobody shows up to the website for free, right? It's not like brick and mortar, where people just kind of stumble around and might walk into the cookie shop. The web is a big place, and you've got to pay to get people there.

So you're paying, you know, with advertising dollars or with content, like, you know, 24 hour news channel, for example.

And you want to make sure that you, you know, you maximize that investment that you've made to bring them there.

So if you can, and the impact of speed is just so insane, where you can take a site, take an eCommerce company, and, you know, hundreds of these guys, where it's the same product, it's the same marketing, it's the same time period, nothing has changed.

And yet, conversion rate can go up 20, 30, 40, 50%, purely because it's faster.

Yeah. And the other thing for eCommerce is average order value.

So this one stumped me for a long time. So what we saw time and time again, and continue to see, is that faster websites have higher average order values.

And I can understand why conversion rate increases, because, conversely, if the website were to get slower, people would get frustrated and bail out of the site.

We do it all the time subconsciously. But a higher average order value means people are actually buying more things.

And I had a customer explain it to me, and then we went back to the data, and we're, we're pretty confident that we could prove it.

So when a site is really fast, so people come to a website, especially eCommerce with some purchase intent, right, I'm coming to buy the, whatever, the cat sweater, right.

And if the site's really fast, and it's really easy to browse products, they end up buying what they want, instead of just getting what they need.

And so you see this kind of average order value increase. And the other big one, and I say this all the time, checkout is critical.

So you want to make sure that you get checkout as fast as possible.

The user's tolerance for latency kind of has this logarithmic decay as they interact with the site.

And then once they put in financial information, their latency tolerance drops to the floor.

And it kind of makes sense, right? If you hand somebody your wallet, you're gonna want to get it back pretty quickly, like you want to finish the transaction.

So, so again, kind of just making the website faster, all of a sudden the marketing team's like, look at this, this is all great.

I'm an avid online shopper.

And all of those things that you just said, I'm like, you know what, they figured me out.

Yeah, if it's faster, I'm gonna go like, okay, well, I only came to check out the shirt section, but I have so much extra time now, I might as well go see what they have in pants, and so on and so forth.

So yeah. The other thing is that kind of bailing out of the website. And in eCommerce, you know, we tend to focus on cart abandonment, which is really important, like somebody going in and kind of putting things in their cart.

And I don't know if you do, my wife does this, where she'll build carts on websites for like weeks.

She's not necessarily abandoning the purchase, she's just kind of like saving, she's using the cart to essentially save what she's looking to buy.

But you'll see, as a website slows down, users just get frustrated, and they bail out of it.

I mean, the kind of well-known stat is that 56% of users will abandon a website that takes more than three seconds to load.

I mean, that's a lot, that's a lot of revenue. And again, you've got both sides: you paid to bring these people here.

So you've got the lost revenue when they give up, but you also have the cost you already paid to get them there.

So it's like, hey, no amount of money or content or beautiful cat sweater pictures will overcome a user's impatience.

So, you know, I think this stuff is a bit more topical now, as Google is doing this thing called Core Web Vitals, where they've kind of publicly announced that performance metrics are going to go into search rank.

So now you're getting, I think, a kind of resurgence of interest in web performance from marketers.

And I mean, and that's super healthy. Because again, speed, speed cures all problems.

Yeah. And on that note, let's talk a little bit about your, your workers usage and how Cloudflare is helping Edgemesh achieve this.

So what led you to start using Cloudflare Workers? What, what problems were you facing?

So, I mean, we're, we're a company, I think like a lot of companies, but we're a company that grew up on Cloudflare.

So when we started the company five years ago, you know, we started on Cloudflare because we knew we had to deliver this JavaScript payload to all the clients, and Workers didn't exist at that time.

And we started to see the beta come out. And so what we love about Cloudflare Workers is the abstraction.

So Cloudflare Workers are modeled after the service worker API, which is built into the browser.

So we use the service worker API every day.

I mean, that's effectively what our product is. It's essentially a managed service worker.

And when workers came out, we were like, boy, this is familiar.

And in the beginning it was like, okay, maybe we can do a little bit, maybe we can intercept some routes and do some APIs.

And then the cache API came out and we were like, okay, now we can start to do a little bit more.

And then I'd say really the big moment was when the key value store came out.

And this would have been probably about 18 months ago.

We release a version of Edgemesh just about every year.

So when we went to version four of Edgemesh, we said, hey, there's actually enough here in workers that we can do 95% of the API at the edge.

And that is how it works today. So v4 of Edgemesh, which came out in March of this year, has almost all the API endpoints that we use running on Workers.

So when a browser starts up, it needs to check to see if it's authorized to run, or if we need to turn it off, if God forbid, there was an issue.

That's just a request. The worker intercepts it, checks the KV store, does its thing.

We capture a ton of real user metrics data, like billions and billions of records of real user metrics data, that used to just stream to our backends wherever we were in the cloud.

Now that just streams to Cloudflare. We capture it there, we get to bulk it up in the worker, and then we can kind of lazily send it to the back end.

We used to have to do this really complicated load balancing to direct traffic to the right cloud hosted back end.

Now the load balancer is all in workers.

Data sharding is all in workers. So like our sharding of our kind of data ingestion is all done in workers.

I mean, I would say the vast majority of our software now runs out at Cloudflare.

And the performance benefits of that were pretty extreme.

I mean, you're talking a little more than a 5x decrease in median latency, which is great, and I'm all about the performance, but the cost benefits were huge.

Yeah, tell me about that. Yeah, we used to have to host.

We don't know where our customers' customers are going to be. So a customer comes, they sign up with Edgemesh, they add Edgemesh to their website, add one line of code, hit the button.

And now all of a sudden, we need to be able to service clients wherever they are.

They could be anywhere. So we had to have infrastructure hosted in the US, in Europe, in Southeast Asia.

And we were paying for a lot of infrastructure just to make sure that we could have fast response times.

And with workers, obviously, that all goes away because, you know, thank you very much for building out data centers for us.

And now we can kind of have just kind of two main cloud infrastructures, primary and secondary.

And because we offloaded so much of the API, I mean, we cut our cloud computing bill down 72%.

72%? And how long? 72% from V3 to V4. I mean, when we turned on V4. So within a year?

Yeah. Wow. Yeah. I mean, we literally, not publicly, but I could show you a chart of...

Google probably does not like Cloudflare Workers right now because our bill just went to the floor.

But it's great. I mean, that just means, you know, more money that we can put into all kinds of other things.

So yeah. And it's faster. So it's faster and cheaper.

How often do you get that? Welcome to Cloudflare, man. Because it's your problem, not mine.

So, you know, you mentioned performance, you mentioned cost savings.

What about like time savings or have you been able to scale more since using workers?

Yeah. I mean, I think just because, again, the abstraction is so similar to what we work in every day.

Development has just been really easy.

Wrangler's a really clutch tool. I can tell you another tool that, I mean, we use extensively in production.

We use Argo tunnels, which is just an absolute godsend because we can kind of spin up these kind of Kubernetes backends and not have to worry about allocating IPs or anything like that.

Plus none of our infrastructure has a public IP address, which is very nice.

But the other thing is, developers will spin up an Argo tunnel on their laptop and use that to actually connect backends on their laptop to do development.

So, you know, we can have, we're a remote first company, so we can have one developer in one location, another developer in another location.

Maybe this guy's working on the backend and this guy's working on the middle tier.

And we can just, I can just like, you know, hit a button and be like, okay, I just spun up a version of the new backend on my laptop.

Here's the REST API route running on a tunnel. Let's play around with it for a bit.

They can go do some work in Wrangler, you know, hitting that route.

And then it's like, okay, this all works. Deploy the backend over to GCP and deploy, you know, Wrangler deploy, and voila, there's your update.

So yeah, I mean, it's pretty rapid.

I mean, we're pushing an update to the core of Edgemesh probably once every two weeks now, which is maybe twice as fast as we did in v3.

Wow.

And workers has helped with this? Yes, it's helped with it. Because we moved the load balancer up there, we're able to do canary deployments really easily because you can just tell the worker like, hey, I'm going to intercept this production route.

And then I want you to roll a dice. And if it's, whatever, less than five, then hit this new endpoint instead of the old endpoint.

So it's made canary deployments a lot easier.

And it's all JavaScript. So again, it's just all kind of stuff we use every day anyway.

It's actually amazing the amount of code reuse that we have between the client side and the worker side.

And have you guys looked into Cloudflare pages?

Yeah, we actually use pages mostly for our own tools.

We'll kind of stumble into a problem. A good example is tti.edgemesh.com, which stands for time to interactive.

So we had customers, time to interactive is essentially how long it takes a website before you can scroll or click or generally interact with it.

And there's a lot of performance metrics. But if you kind of stranded me on a desert island and I only had one, it would be TTI.

It's the one that has the most highly correlated impact on conversion rate. So we had customers always like, hey, if I lower my time to interactive, what should I expect to happen?

And we were like, well, actually, we have a model for that. And like, let's just stand up a worker site.

And like, there it is. And there's the model.

And you can put in your current time to interactive and what your target time to interactive is.

And it'll give you an estimate of what your conversion rate lift and your bounce rate decrease should be.

And again, like, I don't know, a day of work.

A day? Yeah. Yeah, we have one that we're doing today, actually, kind of, again, a byproduct of an outage that happened, not in Cloudflare, but on another network.

And, you know, customer question was, hey, you know, do we have dependencies on that network?

And this is actually a really hard problem. With Edgemesh, when we capture the RUM stats, we actually will resolve each of those endpoints using, actually using Cloudflare's DNS over HTTPS and a handful of other tricks.

But you don't really know. Like, my website is hosted on, you know, whatever, it's on Azure.

I don't have a dependency on AWS. It's like, well, but you bring in this third party.

And this third party is actually running on AWS. So, you know, this morning, we kind of were having a chat and we were like, wouldn't it be great if we just built like a little tool where like, people could go and put in their website and see all the networks and infrastructure they're dependent on.

And, like, you know, one of the devs was like, I can knock that out. I'll just do a quick worker site.

And so, you know, maybe we'll have it by the end of the day.

Um, well, we only have a couple minutes left. So, I want to make sure that we, you know, get all of these questions answered.

One that stuck out to me is, you know, you had mentioned Workers KV, you had mentioned Wrangler and Durable Objects.

What advice can you give to those who are not as familiar with those aspects of workers?

What are some best practices around those? Yeah. I mean, I think the best practices for Workers are best practices in general, but keep it small.

You want to keep the code base small. And I think an anti-pattern is to kind of build this massive worker that will listen on all routes.

And I mean, we made that mistake ourselves.

You want to kind of build little workers that listen on specific routes.

So, you know, try to keep it modular, try to break it into little subsections.

It's easier to develop, it's easier to manage, it's easier to push it.

It'll also end up running faster. Because you're going to have some routes that are going to be hotter than others.

On the KV side of the house, there's kind of, your instinct would be, say you have a KV store where, you know, the keys are all the websites that are customers and the values are the configurations.

You might say, oh, okay, well, I'm just going to make each one of those a key and each one of those a value.

And it took us some learning, but it's actually a lot better just to make one big key and one big value.

You've got room to work with. And since it's kind of lazily updated as, you know, you push an update and then it has to get around to all the guys around the globe.

It's just easier to have one big key, pull out that big value, and then search through that value.

It'll also keep that key a little bit hotter. Yeah, those are probably the big two lessons learned as we deployed this stuff at scale.

Awesome.

We have less than a minute left, so I just want to leave with this one question.

If you were to be on the elevator with somebody in the industry, a peer of yours, and they had a question about Cloudflare, what would you quickly tell them in the time you've got?

I mean, we always worry about security, but I don't have to worry about all of security.

Cloudflare gives us the security and the scale that you need to deliver software to customers around the globe.

So if you're in the software business, you need to deliver stuff around the globe.

You don't know where your customers are going to come from.

And honestly, it's at a price point that makes the cloud look like a ripoff.

So I just want to say thank you so much for doing this with me. This was great.
