ℹ️ CIO Week: Argo for Packets is Generally Available
In this CIO Week segment, David, Ameet, and Erika will take a deep dive into the products and features we launched today.
Read the blog post: Argo for Packets is Generally Available
Visit the CIO Week Hub for every announcement and CFTV episode — check back all week for more!
Welcome to CIO Week. We are here with David Tuber, product manager for Argo and a number of other products that we will be talking about today, certainly relevant to making the Internet better and faster.
Our other guest today is Ameet Naik, a product marketing manager here representing the performance portfolio and the network services, working with David Tuber.
So I think that the two of you today are the perfect guests for me to talk to about our singular obsession with making the Internet better and faster.
So the first question I have is for you, Tubes, actually: why do our distinct efforts and product strategy matter so much for continuing to make the Internet something organizations can rely on for scalability and reachability, and for putting their mission-critical applications on it?
Can you talk to me a little bit about what we're seeing and why it's so important?
Thanks, Erika. Short answer:
Being fast makes money like there are lots of studies on this. Amazon's done the study.
Microsoft's done a study.
Google's done a study.
Amazon found that every 100 milliseconds of added latency cost them about 1% in revenue.
Google said that, you know, decreasing page load times decreases the number of people who leave the site.
So basically, like, if you want people to use your site, if you want people to buy stuff, if you want to drive commerce, you've got to be fast.
It's got to be real time.
And it's really interesting how that's kind of changed over the last couple of years.
Like it used to be like when you would send an email, you'd be lucky if it sent in 10 minutes, right?
But now, like, everything has to be very, very fast.
It has to be in real time.
And real time means the blink of an eye.
It has to be like 200 to 400 milliseconds or less.
And that's for the end to end call.
When you press a button, that stuff has to happen instantly and you have to get that real, real time feedback.
And if you don't, you're just not good enough.
And that's just that's what our customers expect.
It's what our users expect. And as a company, as a service that provides foundational Internet services,
Cloudflare has to be fast, right?
If we're going to ask people to turn their foundational Internet fabric over to us, we have to be as fast or faster than what they had before.
So just being fast means we want you, our customers, to be able to make money to have a really great Internet experience.
And in order to do that, we have to be fast.
And I feel like we've had this running theme for as long as the Internet's been around: the more we leverage it across various verticals,
and even in our personal consumption, our expectations continue to increase.
When you look at organizations or industries like health care that are putting their mission-critical applications over the Web, or conducting telepresence or gaming,
those milliseconds do matter.
Or financial institutions where you're running transactions in real time over the Internet.
And a lot of these things can't be controlled using like MPLS or things like that.
You're talking about consumers that are leveraging their home Internet to access your applications.
So tell me a little bit more about your role at Cloudflare, the product suites you've been directly involved in, and the products that are letting organizations
scale on the Internet without compromising on security or performance.
So what are the products that you've been directly involved in that have really helped guide our mission towards making the Internet better?
So, you know, my job really is to get the user experience down to the speed of light.
And what that means is that, you know,
We live in a physical world, and that sounds really weird, right?
But if you think about it, at the end of the day, you're sending a packet from your house in Seattle and it's going to a data center in Ashburn, Virginia.
It has to traverse 3,000 miles or so of land.
And that takes time.
But that doesn't take that long. The time it takes is only about 60 milliseconds.
So if your API call takes 200, where's the rest of the time going?
How can we shorten that time to be as fast as humanly possible?
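As a rough sketch of where that 60 millisecond figure comes from, here is the back-of-the-envelope math. The distance conversion and the fiber slowdown factor are approximations for illustration, not figures from the episode.

```python
# Back-of-the-envelope: ideal propagation delay for a cross-country round trip.
# Light in fiber travels at roughly 2/3 of its speed in a vacuum.
SPEED_OF_LIGHT_KM_S = 299_792        # km per second, in a vacuum
FIBER_FACTOR = 2 / 3                 # glass slows light down by about a third

def round_trip_ms(distance_km: float) -> float:
    """Ideal round-trip propagation time over fiber, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# 3,000 miles is roughly 4,800 km.
print(f"Ideal RTT: {round_trip_ms(4_800):.0f} ms")
```

This prints an ideal round trip of about 48 ms; real fiber routes meander rather than following the great circle, which is how you end up near 60 ms, and why a 200 ms API call has well over 100 ms of non-propagation time to hunt down.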
And end to end.
There are a lot of different things involved in that. Like, we build a better network to basically route you over the best possible paths.
And we can do that with our interconnection products like CNI.
And then on top of that, we leverage our own Internet intelligence to better route you around problems on the Internet.
And that's what our Argo smart routing products are for.
So you mentioned there's a lot of variables that make that speed of light happen from the eyeball or the end user to the origin where that application exists and then returning it back.
Of those variables, I want to double click on one that you mentioned briefly.
CNI, you called it: Cloudflare Network Interconnect. Can you tell me a little bit more about what this is and how it directly impacts one of those variables of speed-of-light application delivery?
I mean, when you think about the lifetime of a request, especially when Cloudflare is involved, and as someone who works at Cloudflare it's my job to think about that, there are kind of two legs of the request, right?
There is the leg from the user to Cloudflare, which is generally last-mile ISPs and transit networks and Internet exchanges that Cloudflare is connected into.
And then there's the leg from Cloudflare back to the resource that you're accessing, and that traverses
almost a completely different set of networks from the first leg of the request.
And a lot of latency and a lot of performance hits happen on that side.
And the thing about that is it's very much in the control of the application and the proxy,
and in this case Cloudflare is the proxy, to be able to reduce that latency.
You could basically take a dependency on the transit networks, and every time the Internet is slow, call up Lumen and say, hey, Lumen, you're really slow.
You should fix that. Or you can realize that Lumen is probably not going to listen to you after the fifth time you've done it.
So maybe you find another path and another path might be directly interconnecting with Cloudflare.
And the reason that you do that is that you can control both ends of the pipe.
You work with Cloudflare directly, you get a private link, which means that there's no congestion.
Your routing is more deterministic, which means we know how to route to you.
You have a dedicated interconnection point with the data center.
It makes your latency a lot more predictable.
It makes it a lot faster.
And it also gives you kind of peace of mind that you're taking the private network.
And at the end of the day, it can even end up being cheaper depending on how much traffic you're sending over it.
So in essence, what you're telling me is that with CNI,
we're eliminating a lot of those variables that are uncontrollable whenever you're using multiple providers.
Yeah, that's a really great summary of what I said.
Let's write that down so we can use it again. Right.
Okay. Well, now I actually want to pivot to a meet for a second.
So you're up.
Are you ready for the hot seat?
All right. So we launched a number of layer three network service products over the past few years.
Magic Transit, Magic WAN, Magic Firewall.
Pretty much all things Magic.
Can you tell us a little bit more about the differences between these products and what they do?
That's a great question.
So we've launched a number of products in the network services arena.
We're known historically as an application services and application security company, right, with all of our Layer 7 services, and then we went to Layer 4 services.
And a few years ago we launched a series of Layer 3 network services where, again, the goal is to help our customers deliver applications faster, speed up their IP applications, and ensure a better experience for their employees, their customers, their end users, and everyone involved.
So to help do that, we're targeting a lot of the issues that one would normally face going over the Internet.
So the first issue is that for any application, any service that's open on the Internet, it's not a question of if, but how much denial-of-service traffic you'll see.
You're constantly attracting attack traffic and bot traffic for any service that's open on the Internet.
So our Magic Transit service, the simplest way I can describe it, is that it puts the entire Cloudflare network as a bouncer between your apps, your networks, your sites and the rest of the Internet.
So any time anyone's trying to target you, they need to get past the 100 terabit plus network edge that Cloudflare offers, and they need to be able to overwhelm that, which is orders of magnitude larger than the largest DDoS attacks we've seen.
And they can't do that.
And as a result, they stop targeting you if you're behind Magic Transit.
So that's our Magic Transit service, which has been out for a few years. Magic WAN we launched last year,
and it's helping our customers interconnect their sites, their data centers, their branch locations in a way that doesn't involve backhauling to a central hub site.
Right. And it solves a lot of the application delivery problems we see. Even in 2021,
we continue to see enterprise networks that have one or two Internet exits, and all of their sites and all their users all over the world
have to get backhauled to that one data center location just because the security stack is there, just because the Internet exit is there.
It's clearly not an efficient way to run applications. If you talk to folks like Microsoft about Office 365, they'll tell you not to do that.
They'll tell you not to backhaul, to split-tunnel the traffic straight out of the local location to the Microsoft data center, right?
And those are some of the architectures and changes that Magic WAN enables.
And then Magic Firewall allows you to layer security on top of that, right?
With a cloud-native firewall-as-a-service function, the policies are implemented in the cloud, so you don't have to backhaul traffic, and you don't have to worry about appliances, scaling appliances, or load balancing across them.
It's just a service, it's a dashboard. You create a policy and you apply it, and once you configure it, it applies everywhere.
You don't have to think about whether that firewall sitting over there got the rule update, right?
Or am I running an older version of code on that right?
You don't have to think about all these problems.
That's pretty interesting, because having network firewall functions or network services running as a network function in the cloud really helps you scale the security perimeter of your edge, right?
The edge, or the security, should be everywhere.
It should be at every touchpoint.
And to really accomplish that using point solutions is nearly impossible.
It forces you to backhaul things.
It forces bottlenecks, it forces congestion.
I was waiting to see when Tubes's cat was going to come and make a guest appearance.
Can we at least see her face while you pet her? But okay.
But these functions are much more useful and scalable whenever we're employing them in the cloud.
So we have bouncer-like functions with Magic Transit.
You mentioned we have the ability to remove that legacy hub-and-spoke architecture, and we have the ability, using Magic Firewall, to scale the perimeter and access controls across the WAN and across the edge.
When it comes to CNI, which David mentioned just a little while ago, how do these two things come together to really amplify the value?
So to connect to any of these Magic services, customers have a couple of options, right?
One is an entirely logical connection using a GRE tunnel from their site that goes over any IP network, usually the Internet, and then comes into our network, and that provides the logical connectivity.
The problem with that is everything David explained earlier: you have an ISP in the middle, you have the Internet in the middle, and you can't control that, right?
On a good day, everything might work great.
And then Tuesday morning, when you're about to launch something really mission critical, it all goes to pieces and you can't control that.
And a lot of our customers don't like that.
They don't like that unpredictability that's in the middle.
So we're making it easier for them to connect directly to the Cloudflare network. Of course, we continue to expand our data center presence;
we're in 250-plus cities globally right now.
But we also have partners.
We have CNI interconnect partners, folks like Megaport, Equinix Fabric and PacketFabric, who have about 1,600 locations across the globe where customers can connect. If they're already connected to one of these partners,
it's just a virtual connection back into Cloudflare.
And they don't have to deal with the unpredictable Internet anymore.
And all of this is helping optimize and improve end to end performance when they use a cloud service.
You should just do the presentation for me, and I don't need to be here anymore.
Oh, no, we have not dismissed either one of you, in fact, because you said that.
Not because you said that.
Because it's on the schedule and the next one up is about Argo.
So you're back up on the hot seat, too, and I want to kind of bridge these things together, since so far we've talked about a lot of amazing network functions we've deployed over the Internet that are enabling our customers to leverage the Internet without sacrificing performance or security.
But Argo is one that we really haven't talked about yet.
You mentioned it in the beginning, Tubes, when you talked about the products that you support directly.
But I want to go over a brief history about Argo and kind of bring this full circle, because so far we've talked about the access and the variables with using multiple providers and how we're eliminating that.
But how are we routing traffic more quickly across the Internet?
Can you give me a brief history about Argo?
That's a really good question, Erika.
So at a high level, we see a lot of traffic that comes through us.
We have 28 million Internet properties.
We see trillions of requests a day.
So there's no reason we shouldn't be able to learn something from all of these requests and what we really learn.
And one thing that we learn is how long each request takes.
And that has a lot of value, right?
Because if I have two requests that are going to the same endpoint and one goes faster than the other.
Well, I'd sure like to know why.
Or if I have two requests coming into the same data center, going to different locations, and one is faster than the other, I'd like to understand why.
And really what this comes down to is that between two points on the Internet, there are so many different ways that you can get to where you need to be.
And a good way of thinking about this is the streets metaphor: the Internet is like a town, right?
And there are neighborhoods.
And, you know, the Internet really is not just a series of tubes.
It's a set of incredibly diverse pathways.
Between each source and destination, there are multiple different pathways.
And the question that we really want to answer is how can we ensure that our traffic always takes the fastest path?
Well, the answer is that we can see all of the traffic taking the different paths and, for the customers who buy our products, choose to send them down the fastest one.
And that's really what Argo is and how Argo started: all of our customers send requests through our network.
And by doing that, we can construct a map: between any point A and point B there are wide, diverse paths, and for each source we find the fastest path to each destination.
And once we compute it, for customers who buy Argo, we send them down that path if we can determine that their traffic is destined for that destination.
And so by doing that, we basically become, as Rustam likes to say, the Waze of the Internet: we can detect problems on the Internet, we can see where things are fast, we can see where things are slow, and we'll dynamically route traffic around the slow stuff and down the fastest paths to get our users the best performance.
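As a sketch of the underlying idea, picking the lowest-latency route through a mesh of measured links is a classic shortest-path computation. The mesh, the city names, and the latencies below are invented, and this is a minimal illustration, not Cloudflare's implementation.

```python
import heapq

def fastest_path(latency_ms, src, dst):
    """Dijkstra's algorithm over a dict of {node: {neighbor: latency_ms}}."""
    best = {src: 0.0}   # lowest known cost to reach each node
    prev = {}           # back-pointers for path reconstruction
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            break
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, ms in latency_ms.get(node, {}).items():
            new_cost = cost + ms
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    # Walk the back-pointers to recover the chosen path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), best[dst]

mesh = {
    "Phoenix": {"Ashburn": 60.0, "Dallas": 18.0},
    "Dallas": {"Ashburn": 28.0},
    "Ashburn": {},
}
path, total = fastest_path(mesh, "Phoenix", "Ashburn")
print(path, total)  # the indirect hop via Dallas wins: 46 ms vs 60 ms direct
```

The point of the toy numbers: the "direct" edge is not the fastest route, which is exactly the kind of decision a latency map lets you make.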
And we started at Layer 7, at HTTP, and then we built Argo for Spectrum,
which operates at the protocol level, Layer 4.
And today we are announcing the release of Argo for Packets, which is Layer 3.
You stole my punch line, but that's okay.
No, I'm kidding.
Waze. Waze for the Internet.
Waze for the Internet at Layer 7,
that was where it began.
And Waze for the Internet is the best way for me to remember it. As many times as I've thought,
Let me find another analogy for this, because I'm tired of this one. It just sticks.
It just works.
It makes sense because it's exactly the function Argo was made to solve for: that real-time intelligence of rerouting.
And now you mentioned the expansion to Argo for Packets.
That's really where I want to go into more detail, and then find out how this works in conjunction with the other products you were telling me about earlier on this call, the Magic suite, for example.
So let's start with Argo for Packets.
Can we double-click on that one and talk about how it extends that same Waze-type functionality to the network layer?
So let's back up for a second.
There's one thing that I forgot to mention, which is the performance benefit.
So we find all of these paths, and what does that actually do? At Layer 7, it can offer you up to a 35 to 37% performance benefit.
So basically your latencies go down by up to 37%, which is pretty big, right?
Like being able to find the fast path definitely matters.
So how can we extend that to layer three?
We'll start out by saying that by doing this, we can achieve a 10% latency benefit using Argo for Packets, and that's on top of all of the other ones, because they're additive; they apply at different layers of the stack.
So we get different benefits at each one.
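To make that stacking concrete, here is the arithmetic on a hypothetical call. The 35% and 10% figures are from the conversation, but the 200 ms baseline and the assumption that each saving applies to the remaining latency are illustrative.

```python
# Illustrative arithmetic only: how layered savings could stack on a 200 ms call,
# assuming each product trims its percentage off the remaining latency.
baseline_ms = 200.0
after_l7 = baseline_ms * (1 - 0.35)  # Argo Smart Routing at Layer 7: up to -35%
after_l3 = after_l7 * (1 - 0.10)     # Argo for Packets at Layer 3: -10% more
print(f"{baseline_ms:.0f} ms -> {after_l7:.0f} ms -> {after_l3:.0f} ms")
# 200 ms -> 130 ms -> 117 ms
```

Actual savings depend on where in the stack a given request spends its time, so treat this as a model of how the benefits compose, not a guarantee.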
Let's talk about how that works, because anybody who's familiar with the application stack will know that there's no such thing as latency data at the IP layer. You can't understand how long something is going to take, because you just don't have that data.
All you have is the source and destination, and you have the BGP path, and that's cool, right?
So how do we build that?
We actually use one-way latency constructed at Layer 4.
So we basically build our own map between all of our data centers and find the fastest paths around them.
And then what we do is we take a customer and say, hey, if you want to use Argo for Packets, you need to get a CNI.
And it's really good that we talked about CNI.
You plug in to that data center and now we have a deterministic path for your traffic through our network.
So we'll always know that, hey, if we're sending to customer Acme, Acme is plugged into Ashburn, so whenever we get traffic from Phoenix, we'll send it to Ashburn, and then Ashburn will send it to Acme.
And I really should have stopped using A's in that analogy.
But the point here really is that by plugging into Cloudflare, you become part of our deterministic latency map, which allows us to find the fastest paths through our network, which we're constantly computing.
And you can see some real latency gains depending on your network structure.
And 10% is an average, but there are definitely scenarios where you can see huge wins simply by moving to Argo for packets.
And at the end of the day, we're optimizing your network and you don't have to do anything.
And that's a real benefit for you because as we said, every single second matters.
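To make the "deterministic last hop" idea concrete, here is a toy sketch. The customer name and the attachment table are hypothetical, and this is a cartoon of the concept, not Cloudflare code.

```python
# With a CNI, the customer's network hangs off a known data center, so the
# egress point for their traffic becomes a simple, deterministic table lookup.
# The customer name and attachment table below are made up for illustration.
CNI_ATTACHMENTS = {"acme.example": "Ashburn"}  # customer -> interconnect site

def egress_site(customer: str) -> str:
    """Where traffic destined for this customer must exit our network."""
    return CNI_ATTACHMENTS[customer]

# Traffic entering in Phoenix for Acme is always carried to Ashburn first,
# then handed off over the private interconnect.
print(egress_site("acme.example"))  # Ashburn
```

Because that final hop never varies, the interesting optimization problem reduces to finding the fastest internal path to the attachment site, which is what the latency map provides.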
So Tubes, this is one of the interesting things I'm thinking about as I hear you say this, right?
Historically, if you look at the application stack and the way app developers work, they treat the network as plumbing.
It's a place to get from point A to point B, it's two IP address endpoints, and there hasn't been really good communication, correlation, or coordination between the application stack and the network.
And historically, network folks have seen apps as just packets, just payload in the packets, and there hasn't been much awareness of what's going on.
Right? And IP routing, BGP routing, is shortest path first. It's not really taking into account how the links are performing or where there's network congestion.
And we've seen attempts in the past to kind of solve these problems like RSVP, traffic engineering, MPLS.
And they haven't really taken hold. Why do you think that is?
And what's different about Argo that it's actually been working for a lot of our customers for several years?
So, I want to talk a little bit more about that.
So one of the reasons that Argo is really, really good is because Argo is kind of network agnostic, right?
Because Cloudflare is a network built on top of networks,
there's no concept of us having an MPLS network.
We don't have one, right?
We have a network. And that network is layered on top of other networks.
And it includes our own private backbone.
But we have so many paths to choose from.
And one thing you said is that BGP, which is how the Internet normally routes, is computed by shortest path.
But it's important to note that shortest path is generally not correlated with latency at all.
So you could go through a one-hop path that's really long and takes you halfway around the world, because that's just how the network is built.
Networks aren't deterministic. You can't necessarily assume that the path you think the network is going to take is actually how the traffic is going to go.
And it's super dynamic, right?
Like the Internet changes all the time.
And generally people try not to touch stuff, but, you know, things break, reroutes happen.
We've seen hardware fail, and then things change. And one of the reasons that Argo is so good is that Argo reacts to that change and makes sure your performance doesn't suffer.
And whether that's shifting from one transit to another at Layer 3 or Layer 4, or even just going through a different Cloudflare data center at Layer 7, we are looking at all of the different possible paths, whereas MPLS will just say, hey,
I'm just packet-switched at the end of the day.
So that's something Argo has on everyone else: because we just see more of the Internet, we get to see more of the board, so we know what moves to make.
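A tiny sketch of the BGP point above: AS-path length and latency are different metrics, and optimizing one does not optimize the other. The candidate routes and numbers below are invented for illustration.

```python
# BGP prefers the shortest AS path; a latency-aware system prefers the fastest
# route. These two criteria can disagree, as the made-up routes below show.
routes = [
    {"as_path": ["AS100"],          "latency_ms": 180.0},  # 1 hop, long detour
    {"as_path": ["AS200", "AS300"], "latency_ms": 45.0},   # 2 hops, direct fiber
]

bgp_choice = min(routes, key=lambda r: len(r["as_path"]))  # shortest AS path
argo_choice = min(routes, key=lambda r: r["latency_ms"])   # lowest latency

print(bgp_choice["latency_ms"])   # BGP happily picks the 180 ms route
print(argo_choice["latency_ms"])  # the latency-aware choice takes 45 ms
```

Real BGP best-path selection has more tie-breakers (local preference, MED, and so on), but none of them measure how the path actually performs, which is the gap a latency-driven overlay fills.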
It almost sounds like we're bringing a lot of Layer 4 intelligence into routing decisions through the network.
So can you tell us a little bit more about the specific metrics we're using for Argo for Packets?
So when you talk about performance, the big one is latency.
How long is this taking?
We calculate one way latency between our data centers and basically construct a bunch of different paths back to every single endpoint that we could potentially hit.
And we also calculate loss, because loss impacts latency: a short, lossy path is more often than not going to take longer than a long, stable path.
So calculating packet loss also allows us to make smarter decisions and be more informed about how your data goes through our network.
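One way to see why a short, lossy path can lose to a longer stable one: if a dropped packet costs a retransmission, expected delivery time grows with the loss rate. The scoring model below is a simplification for illustration, not Argo's actual formula, and the paths and rates are made up.

```python
# Simplified scoring: if each loss costs one extra send attempt, the expected
# number of attempts is the mean of a geometric distribution, 1 / (1 - loss).
def effective_latency_ms(base_ms: float, loss_rate: float) -> float:
    """Expected delivery time if every lost packet costs one extra round trip."""
    expected_tries = 1 / (1 - loss_rate)
    return base_ms * expected_tries

short_lossy = effective_latency_ms(40.0, 0.20)  # 40 ms path with 20% loss
long_stable = effective_latency_ms(48.0, 0.00)  # 48 ms path with no loss
print(short_lossy, long_stable)  # 50.0 48.0: the longer, stable path wins
```

Scoring paths by effective rather than raw latency is what lets a route selector prefer the "slower" but steadier option.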
Makes sense. So this was a lot of great detail about the overall, all-encompassing performance-related products.
And then on our network services: how do we minimize and reduce the vulnerabilities and unpredictable nature of the Internet and the multiple service providers we're leveraging, removing some of those variables, making the Internet more deterministic with more precise routing, leveraging metrics at Layer 3, Layer 4 and Layer 7, using Argo to route intelligently by sharing that intelligence across all of the requests we get on our global Anycast network, and then leveraging that intelligence to feed our services and the value they bring to our customers across the board.
I want to kind of understand a little bit more as we're wrapping this up, how all of these things come together.
We just talked about Argo for Packets, and Argo in general.
But how does Argo for Packets specifically work with CNI and with the Magic suite?
So we talked before about how Argo for Packets requires, like, you need to have a network interconnect for us to determine a path from our network to yours.
So Argo for Packets and CNI go great together. In fact, you need both.
It's more than just that they go great together.
It's like peanut butter and jelly.
You can't really have one without the other, I suppose.
You know, you can substitute bananas or chocolate or whatever.
But let's be real here.
But for the magic suite, Argo really, really works and makes everything better.
And a really great example is with our Magic WAN product.
So Ameet was talking a lot about how Magic WAN is a massive upgrade over SD-WAN technology today, because you're basically deploying SD-WAN functionality into the cloud and letting Cloudflare manage your SD-WAN setup, which reduces hairpins, improves performance, all that stuff.
Well, Argo for Packets and CNI take that a step further. A really great example, and this is actually described in the blog: let's say you have a network in three places, and those places are connected to Magic WAN.
When you connect to Magic WAN right now, you connect over the public Internet, so you still need your SD-WAN boxes, or some sort of boundary between you and the public Internet, because that's how Magic WAN connects to you.
With CNI, you can just plug into us.
You don't need those boxes anymore, because Cloudflare is your network boundary.
You can send all your private traffic through the CNI.
It gets optimized with Argo for Packets, and you get a 10% latency reduction on top of the Magic WAN latencies you were already seeing.
So not only are we making you faster, we're making you more secure.
And with Cloudflare for Offices, our new office on-ramp,
you can connect in more places, so you can take even more steps toward making your private network really private and truly software-defined.
Because otherwise, at the end of the day, you will always have these little boxes, these SD-WAN boxes.
And as SD-WAN vendors will tell you, hey, they're pretty much just general routers, but
they're not. Being able to move toward a truly software-defined private network on top of Cloudflare will make your life a lot easier, because you don't have to worry about that stuff.