Maximize your Application Reachability, Availability, and Performance
Presented by: Brian Batraski, David Tuber, Erika Bagby
Originally aired on August 3 @ 4:30 PM - 5:00 PM EDT
There are three key pillars to becoming an application performance powerhouse: reachability, availability, and performance. This session explains why these pillars matter and how they align with industry trends.
English
Panel
Product
Transcript (Beta)
Good morning, afternoon or evening depending on what part of the world you're watching from.
Thank you for joining us today as we discuss how to maximize your application reachability, availability and of course performance.
Joining me today are Cloudflare's very own product managers Brian Batraski and David Tuber who will go by Tubes for the remainder of this session.
So the name of the game is speed. Brian, David, I'd love to pick your brains on what the factors are that impact and influence the health and performance of public facing applications, you know, the ones that are on your phone.
That's the world we live in today. But before we get there, can you each give our viewers a little insight into what you cover here at Cloudflare?
Brian, I will start with you. Thanks so much, Erika. My name is Brian Batraski. I've been a product manager here at Cloudflare for almost two years, and I cover the Load Balancing, Waiting Room, and Health Checks products. I'm also located here in Southern California.
Wonderful. It's a beautiful day to be in California, I'm sure.
I am not there so I will soak up the Texas heat. Tubes, tell us about your background and what you cover here at Cloudflare.
Thanks, Erika. So yeah, David Tuber, call me Tubes.
I'm a product manager at Cloudflare specializing in network availability and performance.
So I own products like Cloudflare Interconnect or Smart Routing and, you know, it's basically figuring out how we can make you fast end to end from last mile all the way back to your origin networks.
And I'm located in sunny Seattle. So we get all of the sun, none of the heat.
So it's very nice. I don't think that rings a bell. Those words don't sound right together, "sunny Seattle."
And never mind, we'll drop it. Okay. So it sounds like between the two of you, we have a really well-rounded perspective on the network factors and the compute factors that impact the performance of our applications: not only how we're able to access them and the reachability of those apps, but how quickly we can get to them as well.
So let's just kind of kick it off and talk about first off application landscape for a bit.
We have consumerization of business applications that are in turn shifting our expectations and organizational workflows dramatically.
So my first question goes to you, Tubes. What are some fundamental shifts that are setting the expectations for modern applications today?
Great question, Erika. If you looked back, you know, 10 or even five years ago, and you asked a random person on the street, hey, what does fast mean to you on the Internet?
The first thing they would say is, who are you? Why are you asking me this question?
And the second thing they would say is, well, I really just want to be able to do basic things without my Internet clogging up.
So a really great example, from conversations I've had with people when setting expectations for products: 10 years ago, people just wanted to be able to send an email without losing their Internet connection.
And, you know, that really is so different from where we are today.
Like it's like five, 10 years, you know, but it's a completely different landscape.
And that's really where we're at. And a lot of that is because, you know, the way that we consume software has just drastically shifted.
You know, it used to be that, you know, a great example is you take Microsoft Office, a product that's very close to my heart.
You know, five, 10 years ago, you would download Office onto your machine and you would run a local copy of Office and all of your data would be stored locally on your machine.
And if you needed to sync it to the cloud, it would take a really long time.
You probably wouldn't do it all that much. But nowadays, Office is a shell that basically just constantly communicates with the Office servers, and all of the data is stored in the cloud.
It's not stored locally on your machine.
And that's really where we're moving: to this cloud landscape where everything is stored in the cloud, not locally.
So that means that every time you have to talk to one of these locations, it has to be really, really fast.
We're not talking about, you know, like, oh, like it's going to take me 10 minutes to send an email.
Being able to send an email needs to be instant.
Everything needs to feel real time or else people just don't want to use these products.
You know, it gets to the point at which if you have to wait even one to two seconds for something to be done, you're probably going to be impatient and you're just going to abandon the request.
And that's really where we're at today.
Wow. A very accurate description of how we built applications in the past versus how we consume and support them at global scale today.
Our expectations have grown alongside shifts in technology, to the point where we have what we call a blink-of-an-eye standard.
Brian, I'd love to talk to you about this related to performance.
What is a blink of an eye to a modern application today and why do we care?
Absolutely. So, you know, as Tubes mentioned, the standard for performance, the standard for what end users like you and I expect when we type a URL into a browser today, is skyrocketing.
We're well past the point where even taking a second is acceptable. To us regular folks, a second sounds pretty darn fast, but when you talk about speed on the Internet, it's actually very, very slow.
And so, you know, us being Cloudflare, we want to hold ourselves to the highest of standards.
And so we use the blink of an eye as a measurement, because a blink of an eye for the average person takes about 100 milliseconds.
And so not only do we want to make sure things are as fast as possible, we also want to shape the user experience so people perceive that speed.
And so having changes display, having items load, having applications be performant within a blink of an eye, under 100 milliseconds, is a good standard to build from, and then we continue to iterate and make things even faster.
And so we want to make sure that we are providing the Internet in a safe, secure, and private way to as much of the world's population as we can.
And therefore, when someone in Europe or South America needs to go to an application whose servers are located in the United States, we want to make sure that request is as fast and performant as possible, so they have a good user experience and continue to use that product, service, or application.
And again, we want to keep being as fast and performant as possible so everyone is happy on the Internet.
Right. So performance, in other words, is really in the hands of the masses, right?
Our consumers, the users, the people leveraging these applications that organizations provide: they're the ones with the power. And the competitive landscape has grown, because the barrier to entry into a lot of these industries has been lowered by the capabilities of cloud technology and SaaS applications that help us move the needle forward without having to be in the data center business, right?
Without having to build out all that monolithic real estate of servers in order to reach a scale of users that has grown beyond what centralized architectures can support, right?
So in terms of reachability, actually, before we get to reachability, I want to riff with you guys just for a second here.
We didn't plan this, so you can slap me on the hand later, but you talked about some of the performance expectations, Brian, in great detail.
Tubes, would you mind giving me just a couple of factors from an architectural perspective that would impact performance?
Well, yeah, sure.
I think that, like, when you think about performance, like, let's just whiteboard this, right?
Like, you're creating an app that's going to do a thing.
I don't know. I'm not feeling entirely, like, creative for the moment right now.
But you create an app to do a thing, and you want this, and you've got users everywhere.
Anyone is a potential user, because everyone needs to do things, so, you know, obviously you want your app to be successful.
So what are you going to do? First thing you're going to do is you're going to say, okay, well, I've built my app, and I have it running locally on my machine, and now I need to put it in a place where everyone can access it.
So what I'm going to do is what everybody does nowadays: go to a cloud provider, buy a virtual machine, put my code on that virtual machine, and stand up an HTTP web server with an API, so that people on their phones can talk to my web server.
That's basically hello world. I have code, it runs in the cloud, and users all around the world can talk to my web server.
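The hello-world setup Tubes describes, code on one cloud VM exposing an HTTP API that any client can call, can be sketched in a few lines of Python. This is purely illustrative: the `/api/greet` endpoint and the port are made up, and a real deployment would run behind a production-grade web server.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    """One origin server that every user in the world talks to."""

    def do_GET(self):
        if self.path == "/api/greet":
            body = json.dumps({"message": "hello, world"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for this sketch

def serve(port=8080):
    # Blocks forever: all requests, whether from Boston or Tokyo, land here.
    HTTPServer(("", port), ApiHandler).serve_forever()

# serve()  # uncomment to run the origin on port 8080
```

Everything that follows in the discussion, distance, caching, routing, is about what happens between a user's phone and this one box.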
So what are the factors that are at play if you want to be fast?
Well, the first thing is, number one, how fast your app is.
Let's say your app is super fast. Every time a request hits your web server, you respond instantly, within, like, 10 milliseconds, which is really, really, really fast.
Like, that's super fast. But let's say that you've mastered that on-box application performance, because you're a wizard and you're amazing (and if you are one of those people, come work at Cloudflare, because we can definitely use you).
But let's say that you've mastered it.
So the next step is: okay, now how fast your application is becomes a factor of a couple of things.
The first thing is, it's a factor of how far away is your user from your web server.
So let's say that, like 80% of the world, you have your cloud instance running in US East, Ashburn, Virginia, which is the largest center of compute in the entire world, because, obviously.
So you have it there.
And so when you do that, basically, everyone in the world will connect to Ashburn, Virginia.
And if you're in Boston, that's great, great performance. If you're in London, that's not so great, not so great performance.
If you're in Tokyo, well, good luck.
Like, you may as well never start your app in Tokyo. So what you want to be able to do is you want to be able to do two things.
The first thing you want to be able to do is you want to be able to minimize the amount of stuff that you have to query from Tokyo to Ashburn.
So you buy something like a CDN service.
So you go to Cloudflare, you buy Cloudflare CDN, and you say, hey, Cloudflare, I've got a lot of cacheable, static assets that people need to query a lot.
So I'll just use you to cache my assets on Cloudflare, and then I'll be faster.
And that improves performance.
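The caching win Tubes describes can be pictured with a toy model: an edge cache sitting in front of a faraway origin, where only misses pay the trip back to Ashburn. This is an illustration of the idea only, not how Cloudflare's CDN is actually implemented.

```python
class EdgeCache:
    """A cache near the user, in front of a slow, faraway origin."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # the expensive trip back to e.g. Ashburn
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, path):
        if path in self.store:
            self.hits += 1          # served from the edge: no origin round trip
            return self.store[path]
        self.misses += 1            # cache miss: pay the full trip to the origin
        body = self.origin_fetch(path)
        self.store[path] = body     # keep it around for the next nearby user
        return body

def origin(path):
    return "contents of " + path    # stand-in for the origin server

edge = EdgeCache(origin)
edge.get("/logo.png")  # miss: goes all the way back to the origin
edge.get("/logo.png")  # hit: answered at the edge
```

After the first request, every nearby user gets `/logo.png` without crossing the ocean; only uncached requests still travel back to the origin.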
But you still have to, you know, figure out, but what if I have to go back to Ashburn?
I don't have to go back as much now, but I still have to go back eventually.
So then it turns into the other factor, which is what's the optimal path from your user in Tokyo back to Ashburn?
And that's really a factor of a couple things.
The first thing is, you know, physical distance, right? Like it's probably 8,000 miles from, you know, Tokyo back to Ashburn, and you can't fix distance.
You can't drill through the center of the earth and get to Ashburn, Virginia.
So distance is a constraint you have to work within. So what you do is you say, well, I'm going to find the fastest path in terms of physical distance, and I'll do that.
And, you know, there are lots of different products that can offer you that; Cloudflare's Argo is definitely a really good product that does that.
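Tubes' point that you can't fix distance has a hard physical floor behind it. Light in fiber travels at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum), so his rough 8,000-mile Tokyo-to-Ashburn figure implies a best-case round trip of well over 100 ms before any routing detours, queuing, or processing. A quick back-of-the-envelope sketch:

```python
FIBER_KM_PER_MS = 200.0  # light in fiber covers roughly 200 km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical best-case round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS

tokyo_to_ashburn_km = 8000 * 1.609  # Tubes' rough 8,000-mile figure
print(round(min_rtt_ms(tokyo_to_ashburn_km)))  # ~129 ms round trip, at the physical limit
```

Real paths are longer than the straight line, so actual latency is worse; that is exactly the gap that smart routing products try to close.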
The next thing you have to do is you got to say, well, you know, my user in Japan, what does his network look like?
How does he connect to Cloudflare or even your origin server back in AWS or anywhere really?
It may not be the greatest.
And that's just something that, you know, we at Cloudflare are super passionate about.
You know, you talk about Impact Week, which Brian mentioned, and Project Pangaea is a great example of something we're doing to basically make last-mile networks better in communities that desperately need it.
And we're super happy to be able to provide that. But that's one of the questions that you have to answer that a lot of people don't actually think about.
They just assume that you'll get to your CDN or your first mile, and you get there how you get there.
You know, if Comcast is bad, if NTT is bad, if, you know, DTAG is bad, you know, that's what it is, right?
Yeah.
But that's something you have to think about. And yeah, that's a lot. But those are the big things: when you talk about performance, those are the things you have to consider, and you break it down piece by piece and find products to attack each one.
Very, very helpful to hear that whole storyboarding that you just did.
By the way, you said you weren't feeling creative.
I beg to differ; that was very creative. Loved it. But I knew it was a risk asking you to dive into that, knowing that, hey guys, we've got like 15 more minutes left in this segment and content to cover.
So it was worth it. I'm glad I took the risk.
Very helpful information, Tubes. It's actually a great segue into the other factors that impact the overall health and performance of an application: reachability and availability. I think there are both network factors and compute factors that lend themselves to consistent reachability and availability.
But just to start off, let's talk about the fact that everything you've both explained so far points toward an environment, a world, where decentralization is the theme.
That is the norm.
We've already gotten used to kind of working from home, and now we're optimizing that.
And the global economy that we've already lived in for decades has, you know, continued to propel technology forward for their use cases of scalability and reaching peaks of demand that are, you know, sometimes unpredictable.
So Brian, when it comes to the decentralization of our workforces and our customers, how do we approach the design of modern applications to support reachability for a globally connected world?
Yeah, absolutely. I think it's a great question. I think this is a question that, you know, is one of the canonical problems that businesses have to deal with today.
How do we plan ahead? As we focus on the now, getting our first customers, making that initial revenue, getting to the next round of funding, growing the customer base, investing in existing products, it can feel very cumbersome to think about what you're going to do in one, two, five, or ten years when you have all these massive problems and challenges to solve today.
And so, taking a page out of what Tubes just mentioned: first you need to think about what you're going to do today and where you're at, but also where you want to grow to.
And then kind of work back and say, okay, how am I going to achieve those goals?
And what is the architecture? What is the design that is going to promote me and support me to be able to reach those goals?
And so, for example, utilize a CDN, and keep your cache hit ratio as high as possible so that you're not overburdening your origin servers; use advanced capabilities as much as possible to reduce stress on your infrastructure and the compute costs you're paying month over month.
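Brian's point about the cache hit ratio can be put in numbers: the hit ratio is the share of requests served from cache, and every hit is a request your origin never sees. The traffic figures below are invented for illustration.

```python
def cache_hit_ratio(hits: int, total: int) -> float:
    """Share of requests served from cache rather than the origin."""
    return hits / total

total_requests = 1_000_000   # made-up monthly traffic figure
hits = 950_000               # requests answered at the edge
ratio = cache_hit_ratio(hits, total_requests)
origin_requests = total_requests - hits

print(f"hit ratio: {ratio:.0%}")  # hit ratio: 95%
print(f"origin handles only {origin_requests:,} of {total_requests:,} requests")
```

At a 95% hit ratio, the origin serves one request in twenty, which is the "reduce stress on your infrastructure" effect in concrete terms.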
And then once you have those designs, right, thinking forward ahead is very difficult.
But being able to do that, setting up your origin servers, setting up your infrastructure with an eye to where you're going to be, that's still not enough nowadays, right?
You can't just spin up a fixed number of servers around the world, one in Ashburn, one in LAX, one in London, and call it done.
Sure, that gives more performant access to your application from different segments of the world.
But then, say you have more requests coming from Europe than you do coming from the United States.
That means one of your servers, one of the origins supporting your application, is going to be much more heavily burdened than the rest of your infrastructure, thus creating points of failure.
And then the next question is: instead of just spinning up more origin servers and adding more compute cost, which is very expensive and carries a lot of maintenance cost (nothing comes for free), how do you get smarter?
And so you use intelligent steering: map the different geolocations users are situated in to the closest pools, and find the data centers with the lowest latency.
And again, using intelligent steering capabilities, such as load balancing or our Argo product, find the fastest path possible from the user to the origin and back.
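The intelligent steering Brian describes can be sketched as: given measured latencies from a user's region to each origin pool, route the request to the healthy pool with the lowest latency. The pool names and latency numbers below are invented for the example; real load balancers weigh many more signals.

```python
def pick_pool(latency_ms: dict, healthy: set) -> str:
    """Choose the lowest-latency origin pool among those passing health checks."""
    candidates = {pool: ms for pool, ms in latency_ms.items() if pool in healthy}
    if not candidates:
        raise RuntimeError("no healthy origin pools available")
    return min(candidates, key=candidates.get)

# Latencies a load balancer might observe from a user in Frankfurt:
latencies = {"ashburn": 95, "london": 18, "tokyo": 240}

print(pick_pool(latencies, healthy={"ashburn", "london", "tokyo"}))  # london
print(pick_pool(latencies, healthy={"ashburn", "tokyo"}))  # ashburn (failover)
```

The second call shows the steering and resiliency themes meeting: when London drops out of the healthy set, traffic falls through to the next-best pool automatically.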
You know, we want to provide tools and capabilities that support our customers in reaching those goals, rather than having them spend countless hours in design meetings and Zoom rooms saying, well, what are we going to do?
How are we going to reach this? We know we have these problems now.
We know we have to serve our customers today and reduce the tickets coming in, but that's just not enough.
We need to also think about what we want to do further and how we're going to achieve that.
And that's what we here at Cloudflare feel so passionate about every day: solving those problems and making sure people feel empowered to make those decisions and focus on their customers and on the table stakes.
Yeah. Like I said, these services you're discussing, network services, functions as a service, and the compute services we have, load balancing, DNS, CDN, Argo Smart Routing, would typically take a humongous amount of expertise and resources for organizations to build out themselves.
And today's consumer expectations don't really allow for the time to market, the ramp-up time, that organizations would need to build this out themselves at the scale and demand of business today.
It's not feasible. So when you talk about reachability, you're really speaking in terms of server capacity.
We have to think of servers as precious real estate and how easily can we scale that real estate to support the demand and the requests coming in.
But Tubes, I'd love to follow up with you in terms of reachability.
The other factor is the Internet is pretty much everywhere, right?
Almost everywhere in the world, but is that good enough for delivery of modern applications?
That's a good question, Erika.
You set me up there. Of course, the answer is no. That's not an indictment of the Internet.
It's just a fact. The Internet's not homogenous, right? The Internet that I have is not the same as the Internet that you have, and it's not the same Internet that Brian has.
We are all using different providers, hopefully, and we're all in different locations.
And the way that we connect is going to be different because each network has different connectivity models.
Some people have fiber.
Some people have DSL still. Some people have satellite Internet. And soon, some people will have whatever Starlink is building, or whatever they're calling it.
Some people have a lot of different things. And because it's not homogenous, that means that the path that you're going to take and the path that your users are going to take back to wherever they need to go is going to vary greatly.
And because the Internet's not a consistent state system, stuff happens, right?
Fiber breaks.
Hardware is not infallible. And that happens a lot. And it happens a lot more than people realize.
They just kind of assume the Internet works, right? And that's because we've built a lot of resilience on the Internet to try and make sure that it can stay up despite the fact that half of it goes down all the time.
And it's not half, but it's a bunch.
It's a non-zero amount. And so reachability, when you think about it in terms of the Internet, is basically: as a customer, how can you minimize the time your users spend on the public Internet, or on the Internet in general, right?
We love the Internet, and we rely on it for a lot.
But because the Internet is a bunch of different strings tied together around the globe, it's very, very easy to break.
And so when you introduce a lot of time spent traveling from place to place, you incur a lot of not only latency, but you also incur a lot of availability issues.
Because the more links you traverse, the more single points of failure you have, and the more opportunity for packet loss and congestion, because not all links are the same size.
So a whole bunch of different things combine, and basically reachability is a function of how long you're spending on the Internet.
And as Brian said, the more that you can expand your footprint to be in more places, the more you can reduce the amount of time that your users have to spend on the Internet, which really just means that they get their stuff faster, and it's more likely that they're going to be highly reliable.
Yeah. No, I love all of those words.
Great description of reachability in terms of the Internet. I mean, you bring up some very good points about availability; with links breaking all the time,
it's a wonder that we even have the Internet at all today. It is very complicated.
And that's why the simplification and delivery and consumption of these services is so remarkable to me and why I love being here at Cloudflare, right?
Because we get to deliver services that are very, very complex, protocols built upon protocols upon protocols, and code, all of these different factors and variables that bring connectivity together across the globe.
But when you think about the availability of these circuits, or even the resources in the data center, the computing resources, that precious real estate, right?
Brian, I'll pass this one back to you. How are we thinking about server capacity, scalability, high availability, and resiliency to ensure the application stays available? Proper planning lets us predict the highs, peaks, and lows of application demand and support them dynamically, so that we consume what we need at the scale we need it, and aren't overpaying during the lows.
Absolutely. No, I think this is an incredibly important topic.
Think back to the early AOL dial-up days in the early 2000s, when we were conditioned very heavily to accept that waiting was part of the game.
Waiting for that crackly dial-up sound, waiting for the little AOL running man to show up on your screen and then move over to say, hey, you have an Internet connection.
We were heavily conditioned to say, okay, this is the standard and this is what everyone needs to live up to to make sure that we are available.
But if there was some downtime, or an error showed up in your browser, it wasn't the end of the world.
And more likely than not, users were going to wait, refresh, and still go get to your application to consume that product or service.
We do not live in that world anymore.
We talked earlier in the segment about how the bar for performance, the bar for what people consider high availability and resiliency on the Internet, is rising massively, I would say month over month.
This is an extremely fast-paced area.
And if you are not at par, you will see a massive increase in abandonment, a massive increase in churn, and your customers are going to go to your competitors.
And there are very few industries, if any, nowadays where there are only a couple of players in the market; there are any number of folks offering the same version of the same product or service in a slightly different way.
And I'll tell you what: when I'm trying to get an Uber or a Lyft and one of those apps doesn't load almost immediately, I'm closing it and going directly to the next one, because my time is important to me.
It's one of the most important things I can measure and value in my life, and I'm not going to waste it waiting for something to show up on my screen.
And so having high resiliency, having backups, having high availability, where you have multiple servers across different areas of the world with immediate, instant fallbacks if one goes down, so that not a single packet, not a single request, is lost or gives a bad experience to a single end user: that is paramount today.
And so making sure that we have those capabilities and those systems in place so that we are achieving all the expectations of our customers is table stakes today.
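The immediate fallback Brian describes boils down to a simple rule: probe each origin in priority order with a health check and route traffic to the first healthy one, so a failed primary never costs a user a request. The origin names and the check function below are illustrative stand-ins.

```python
def first_healthy(origins, is_healthy):
    """Return the highest-priority origin that passes its health check."""
    for origin in origins:
        if is_healthy(origin):
            return origin
    raise RuntimeError("all origins down: page the on-call")

# Priority-ordered origin pools (hypothetical names):
origins = ["primary.eu", "backup.us", "backup.apac"]

# Simulate the primary failing its health check:
down = {"primary.eu"}
print(first_healthy(origins, lambda o: o not in down))  # backup.us
```

In a real system the health check is an active probe (an HTTP request or TCP connect with a timeout), re-run continuously, so the fallback decision is already made before the next user request arrives.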
And if we're being honest, as someone who works with the Internet every day, it's only going to get harder; the bar is only going to be raised higher and higher.
And so we want to make sure that, again, businesses can focus on the right things, which is focusing on making the customers happy, providing and creating amazing products and services to those end users so that they're meeting the demands that the market is setting upon them.
And so that's where their focus should be and let Cloudflare abstract and handle the rest of it for them.
And to put a point on that, Brian, this isn't just anecdotal; this is real data.
Amazon did a really famous study about 10 years ago that found that every 100 milliseconds they shaved off their application latency resulted in a 1% growth in sales.
Google found that if users have to wait between one and three seconds per page load, you'll see a 96% increase in churn.
Basically, if you want people to use your apps, if you want people to use your web services, you've got to be fast, you've got to be available, and that is not negotiable.
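The Amazon finding Tubes cites turns directly into arithmetic: if every 100 ms shaved yields roughly 1% more sales, then a 300 ms improvement on a hypothetical $10M/year business is worth about $300K annually. The revenue figure is invented; the 1%-per-100ms rate is the one quoted in the talk.

```python
def revenue_gain(annual_revenue: float, ms_saved: float) -> float:
    """Extra annual revenue implied by the 1%-per-100ms rule of thumb."""
    return annual_revenue * 0.01 * (ms_saved / 100)

print(revenue_gain(10_000_000, 300))  # roughly 300,000 dollars a year
```

Rules of thumb like this don't extrapolate forever, but they make the business case for latency work concrete.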
Yeah. So it sounds like, in order to meet the expectations that we talked about right at the beginning of this discussion, consumerization of business applications, and even how consumers engage with different brands, you're looking at factors that vary across reachability and availability that ultimately impact performance.
It's not just about the speed and capacity of the server to render a response to a request, right?
It's also going to depend on how quickly that loads, which may not be a factor of the Internet or the speed at which a request can travel from point A to point B; maybe it's a factor of compute, of that real estate in the data center and how intelligently it can manage resources for requests coming into applications, and even of scaling to global reach.
So we have about three minutes left.
I think we did fantastic. I personally didn't think we were going to get through this much content already, so I find this a win.
But just before we leave, I would love to get a few suggestions or key takeaways from each of you and your respective purview of factors from compute and networking vectors.
So what are some suggestions you would leave with organizations and how they should be planning and preparing for modern architectural designs that support the demands on today's consumerized business engagements?
Brian, how about you go first?
Sure. So going back to one of our long-running themes: as cumbersome as it can feel, think ahead, right? Do the capacity planning now.
Have the foresight to see where you are today and where you want to go, and don't lose sight of that, because it's easy to say but a very, very hard practice.
The second is ever since we've gone through a worldwide pandemic, we've seen a massive, massive shift in traffic that would normally happen in brick and mortar in person taking place on the Internet today.
And so we live in a world now where the rules have changed and you really don't know when you're going to see that unexpected spike of traffic.
You don't know when you're going to see that massive viral effect coming from an area of the world that normally you haven't seen a blip of traffic from.
And so making sure that you are ready to tackle those challenges head on and not have to reinvent the wheel goes a very, very long way.
There are some things that are important to build internally, specific to your use case, your challenges, and your systems, but more times than not it's going to be much more effective to go somewhere such as Cloudflare and leverage capabilities and services that have been battle-tested and battle-hardened.
And so, again, you can focus on your table stakes and the right priorities.
So I'd say those are the big takeaways. Love it. Tubes, how about you?
I'm just going to piggyback on what Brian said.
Capacity planning. One addendum I would add onto that, with my product manager hat on: be where your users are.
You know, this is a message not just about capacity planning; it's also something to take into account with the network, right?
If you really want to cut down on your latency, if you really want to be fast, if you really want to be available, then you need to be present where your users are.
If your users are in London, be in London. If your users are in the United States, be in the United States.
It all comes down to that: be where your users are, and you'll find great success.
And on that, we are done.
Thank you everyone for watching.