Cloudflare Strategic Partners Panel Episode 3: Console Connect
Steven, Michael and Tom delve into Console Connect, the recently announced partnership, and where the industry is headed.
Transcript (Beta)
Hello, hello, Cloudflare TV viewers. Welcome, welcome, welcome. This is episode three of the Cloudflare Strategic Partners Panel, where I'm delighted to be joined by Michael Glynn.
Say hello Michael. Hello, how are we? Thanks. And Tom Paseka, say hello.
Hey, good day everyone. This is going to be a great episode. We're going to be talking all about PCCW Console Connect and our recent partnership between you guys and us at Cloudflare.
Maybe start off with introductions. I'll start. My name is Steve Pack.
In addition to being your host, I'm also on the strategic partnerships team here at Cloudflare, and I run the Cloudflare Network Interconnect partner program, which we're delighted to have PCCW Console Connect as part of.
Tom, would you like to introduce yourself?
Sure, I'm Tom Paseka. I look after interconnection at Cloudflare, now as part of the infrastructure and strategic interconnection team.
Cool, Michael. Thanks, Steve. Yes, Michael Glynn. I'm actually based in Australia, but PCCW Global is a global network and a global company.
I'm VP of digital platforms including our software defined interconnection platform Console Connect.
Okay, very good. Actually, for anyone suffering some cognitive dissonance when you have three Aussies representing US- and Hong Kong-based global companies, there's nothing untoward there.
It all just worked out that way which is fun.
Cool. So, I reckon a good place to start would be with you, Michael, to tell us a little bit about like Console Connect at the highest level.
I do want to frame it in terms of the customer challenges we're facing, but I think really understanding first what Console Connect is would be a good place to start.
Okay, so Console Connect is an interconnection platform.
So, realistically, we at PCCW Global have a global network.
We have a network that spans about 200 countries. We run an IP network and an MPLS network.
So, as you can see there, today we have roughly 800,000 kilometres of fibre capacity across roughly 200 countries.
So, the map there that you can see, we have a large capacity on multiple subsea cables that create our core network.
So, Console Connect is our automation software.
It's a platform that we put on top of our layer 2 network.
Today, we've turned it on in 43 countries. It enables our customers, carriers and also enterprise customers, to instantly provision international circuits on demand, or direct to cloud, or direct to our key partners such as Cloudflare.
Today, that's just over 350 data centres. What we've also got, because we are in roughly 200 countries, is 59 offices around the world.
We actually work with local loop providers, so we can backhaul that Console Connect connection back to the enterprise building.
So, customers can access our software-defined interconnection platform in a data centre, or they can talk to the team and we can backhaul that to the enterprise building.
So, we've turned it on in 43 countries.
We've interconnected key cloud and key partners on the network.
So, our customers can purchase services on demand. They can light up services instantly.
It takes just under a minute across the network. Yeah. Okay.
So, good stuff there. We're going to have a variety of attendees today, from folks who are just interested at a high level, like "what is this? I've never heard of this", through to the people who are making the purchasing decisions, to network engineers.
So, you mentioned there even like a basic thing, right?
You said we run IP networks and MPLS networks. So, for the uninitiated out there, like what's the difference between those two things?
Yeah. Look, we run a layer two network which means it's our network.
We put in infrastructure in key data centres and have large capacity between those.
So, it's a secure private network which is separated from the public Internet.
And all these lines here that you can see, they're all the different subsea cables all around the world.
We have large capacity on those. Is this accurate? Like, is each line on this map literally a subsea cable or direct route?
It's very much a marketing map, but you can actually see there in Australia the cables going out.
You can see the path they're going to the US which is the Southern Cross Cable Landing Station.
You can also see the path there going up; it should actually go up to Guam from there.
That's one of them. So, we've got multiple cable systems around the world where we have capacity, or we're a consortium member in some of these cables as well.
In the Middle East at the moment, we're building a cable called PEACE.
We're a key consortium member there. So, from there, we link that up into our core, where customers can purchase, you know, one meg to hundreds of gigs across the world on our secure network.
So, it's 100% off the public Internet.
Okay. And so, when you were talking about some of the benefits, you mentioned the virtual turn-up.
And, you know, for someone who has never ordered a cross connect in a data center, or who's never ordered a circuit to bring two networks together, for the folks new to it, what's the difference in experience there?
Yeah, look, especially when you're ordering it on a software defined interconnection or SDN platform, you need to make sure that it is on a private circuit.
So, if you're in a data center, you can order a service literally by ordering a cross connect to our rack in that data center.
And from there, we allocate different ports into our network.
It can be a one gig, 10 gig, 40 gig, 100 gig port to our network.
From there, a customer then can light up what we call a virtual circuit.
That virtual circuit is pretty much a layer two dedicated circuit between two points on the network.
Those two points can be that port or the partner port.
It can be a cloud. It could be Cloudflare. It could be another port that they have in another country.
So, if they want to light up an international circuit, you know, traditionally you'd have to go to your carrier.
In the US it could be Verizon or AT&T, in Australia Telstra, or BT or Colt in Europe.
There, you would call up the salesperson and get a quote.
It can take, you know, a couple of days. Salespeople do the negotiation.
From there, it can take 30, 40 days to provision the circuit.
Yeah. With Console Connect and software-defined interconnection platforms, the connection is there, ready.
So, you run a port to our platform, which means that you're live on our platform.
We do charge a fee for you to be live.
Think of it like a gym membership. You pay a fee to be part of that ecosystem.
But once you're part of that ecosystem, you can spin up services to anyone else on that network.
Plus, you can buy services on that network. So, we've got a dedicated portal or a centralized web portal.
You can find that at consoleconnect.com.
Customers can log in there. They can register. They can see everyone who's part of the ecosystem.
But once you're part of it and you have a port, you can light up a service to that partner on that interconnection platform or that fabric.
The provisioning time tends to take 30 seconds to a minute. But you can light up a circuit there just for one day.
So, you can have 10 gig, for an example, from a port to a partner port anywhere across the world in 43 countries and light it up just for one day.
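To picture what that self-service flow looks like in practice, here is a minimal sketch of provisioning a virtual circuit between a customer port and a partner port through a REST-style SDN API. The base URL, endpoint paths, field names and token handling are purely illustrative assumptions for this example, not the documented Console Connect API.

```python
# Illustrative only: the base URL, endpoints, payload fields and auth scheme
# are hypothetical, not the real Console Connect API.
import os
import requests

API = "https://api.example-sdn-fabric.com/v1"   # placeholder base URL
HEADERS = {"Authorization": f"Bearer {os.environ['SDN_API_TOKEN']}"}

# Request a layer 2 virtual circuit between an existing customer access port
# and a partner port on the fabric (e.g. a Cloudflare interconnection port).
payload = {
    "name": "cf-magic-transit-vc",
    "a_end_port_id": "port-syd-0123",       # customer's 1G/10G access port
    "z_end_port_id": "partner-cloudflare",  # partner port on the fabric
    "bandwidth_mbps": 100,                  # only buy the bandwidth you need
    "duration_days": 1,                     # circuits can run for as little as a day
}

resp = requests.post(f"{API}/virtual-circuits", json=payload,
                     headers=HEADERS, timeout=30)
resp.raise_for_status()
vc = resp.json()
print(vc["id"], vc["status"])  # typically moves to 'active' in well under a minute
```

The point of the sketch is the contrast with the quote-and-provision cycle described above: the port is the long-lived commitment, and circuits on top of it are created, resized and torn down programmatically.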
Nice. It's funny when you said like, you know, you got to call up your salesperson and get a quote.
It's a common sort of saying in the startup industry, which is look for businesses that still involve a lot of paper or still involve a lot of phone calls.
And yeah, I get the impression ordering network circuits is still a little bit of that.
To be honest, the salesperson is the easiest part of the process.
When you actually get into provisioning the traditional circuits, it's a nightmare that can take days.
You still have to go through ordering cross-connects in the data center.
You find that parts of it are broken, parts of it aren't patched. It's an absolute nightmare.
Yeah. And customers now, enterprise customers, are looking for that private connection, that low-latency connection to anywhere across the world, to their partner's SaaS platforms.
You know, it could be cloud, it could be SaaS, it could be UCaaS.
Especially if you're a large enterprise customer, you need that security, you need that private interconnection, which is something that we do.
And it's something very different between us and many others is that we are a carrier.
We are a global carrier. We have a core network underneath. We are powered by the PCCW global network.
Console Connect is just our software-defined interconnection product, which sits on top of our core network.
Yeah. I think we'll come back to the carrier aspect in a little bit because I think that's helpful to talk about.
But there's one thing. I think we've got sort of an idea now about Console Connect, but we haven't sort of addressed why we partnered.
And I think it's maybe helpful to sort of take a look at one slide here, which is basically the overall architecture of Cloudflare Magic Transit.
And so, you need a little bit of a history lesson to come up to speed with the sequence of events that led us to partner with Console Connect.
But I think it's worth doing that a little bit.
So, Cloudflare famously really started 10 years ago and became really well known for protecting layer 7 websites from DDoS attacks.
We're a large CDN network and a sort of class-leading DNS provider. For a long time, that was what we did, and still do: offer a free plan, your $20 a month plan, your $200 a month plan.
And we ended up over time with 26 million websites on us and became, can I say, the global powerhouse that we are today.
And as that happened, our customers became more and more upmarket. For anyone interested who watches the Cloudflare earnings calls, that was one of the things that came up: increasingly, we're servicing larger and larger customers.
Those customers had larger and larger needs. And one of the things they kept saying to us is: okay, you protect our websites, you might provide load balancing at layer 4, you might protect a TCP or UDP application.
But I've got all this IP space that I want to protect.
I don't want you to protect just those things.
And the product and engineering team kept hearing this and was like, okay, well, we protect our own data centers.
We're the target of these attacks that we protect our customers from.
So why don't we offer that to our customers? And fast forward to now, that product is called Magic Transit, super successful.
And this is broadly how it looks.
And just to quickly explain, so customers with their own IP space, they'll delegate that for Cloudflare to advertise to the Internet.
So Internet, any traffic coming from the Internet to our customers, it hits the Cloudflare network first.
It doesn't go directly to them. It hits us in any of our 200 pops.
So that means wherever the eyeball is, be it South Africa, Brazil, Australia, US, it hits the closest pop.
We scrub all of the bad traffic and then deliver clean traffic to our customers.
And it's been really successful because we do that in every single one of our data centers.
Whereas with a lot of the other DDoS providers, you have to do this backhauling.
Is that the tromboning or backhauling? Is that what you call it when you've got to go to a-
I mean, they would backhaul it and trombone it through one of three or four or maybe six locations around the world.
So if you're in South Africa, you're going to have to have all of your traffic go up to London and then come back down again.
Yeah. Not good. Adds a lot of latency.
And so with Magic Transit, what we found is that the latency penalty will be, in some cases, very small; in some cases, zero.
In some cases, actually, we see customers get a performance improvement because traffic is getting on the Cloudflare network earlier and we're able to make intelligent routing decisions.
And so instead of saying, we can offer DDoS protection, but double your latency, all of a sudden, it's we can offer you DDoS protection and you don't suffer a penalty, or maybe even your performance is improved.
That was a popular message.
And that's why the product was successful. But we started to hit some issues on implementation, which is that we would deliver the clean traffic from the nearest Cloudflare hub to the customer over the Internet.
So this sort of link here, initially, in the first few implementations was over a GRE tunnel.
And for some customers, that was fine.
They were comfortable with that sort of setup. But certain customers with certain security postures or policies or regulations or whatever it is said, no, we can't have any public, Internet-facing infrastructure.
You need to give us ways to connect to you privately.
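For context on the GRE tunnel delivery mentioned a moment ago: on the customer side it is typically just a tunnel interface configured on an edge router or Linux host, with the tunnel endpoints and inside addressing supplied during onboarding. A rough sketch on Linux, using RFC 5737 documentation addresses rather than real Cloudflare tunnel endpoints, might look like this:

```python
# Minimal sketch of bringing up a GRE tunnel on a Linux edge host.
# The addresses are RFC 5737 documentation prefixes, not real Cloudflare
# tunnel endpoints; an actual Magic Transit tunnel uses parameters supplied
# by Cloudflare during onboarding. Requires root privileges.
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

CUSTOMER_WAN_IP = "198.51.100.10"  # customer's public-facing interface (placeholder)
PROVIDER_GRE_IP = "203.0.113.1"    # scrubbing provider's GRE endpoint (placeholder)

run(f"ip tunnel add cf-gre0 mode gre local {CUSTOMER_WAN_IP} remote {PROVIDER_GRE_IP} ttl 255")
run("ip addr add 10.0.0.2/31 dev cf-gre0")  # inside /31 used for the tunnel
run("ip link set cf-gre0 up")
# Clean (scrubbed) traffic for the protected prefix now arrives over cf-gre0
# rather than hitting the customer's edge directly from the Internet.
```

The tunnel still rides over the public Internet, which is exactly the property the customers quoted above wanted to avoid, and which a private interconnect removes.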
And so we started to talk to customers and we're like, how would you like to do that?
And one of the answers was physical, right? It's like, okay, we're in the same data center, let's connect.
But inevitably, sometimes we weren't in the same data center.
Sometimes we weren't in the same metro, or maybe the minimum connection speed at a data center was a gig, and they only had 100 megabits of traffic.
And so that's when really we started the conversation with you, Michael, and PCCW Console Connect.
Does that sort of sound like a familiar sort of story?
It does. On both sides, we've got customers that are looking for what you've got.
And also, you've got customers that are looking for a direct interconnect off the public Internet, that are in the same metro but different data centers.
I think I calculated the other day in Europe, we're in 53 metro zones.
So anyone in those metro zones that have a connection to our network can actually directly interconnect into Cloudflare.
Okay. And so I mentioned the one gig sort of limit, or minimum. Is that accurate?
Is it one gig, or would your average cross-connect in a data center be like 10 gig?
For us, the port size minimum- No, no, no. Not for you. So, like, if someone was doing an old school physical connection and weren't on the platform.
I mean, the old school ones can be any speed.
But because of the equipment we run to run efficiently, we can't do less than 10 gig.
And so that blocks a lot of possibilities for one gig and below.
Cool. So this is what I was sort of getting to, Michael. So it's like, you know, the cost to run a 10 gig physical connection when you might only have a, say, 100 megabits sustained data transfer rate makes no sense, right?
So tell us where do you start?
Yeah, look, we start at the one gig; our customers' access into the network starts at one gig.
But you can light up a service that's, you know, one meg if you want.
So once you've got access into there, you can run multiple circuits out of that one port, you can have circuits going to your DDoS partner, you can have circuits going to your cloud partner, everything, everything.
Think of it like a freeway. And all you're doing is slicing it up into different zones based on where the traffic is going.
So, for customers on the network, once you've lit it up, you can actually look at different statistics on that virtual connection: you can see the jitter and the utilization, everything going across the network, and everything happens in real time.
So it gives more visibility also for the technical team within an enterprise environment to actually see what the traffic is doing.
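As a rough illustration of that real-time visibility, a small monitoring script against such a platform might periodically pull utilisation and jitter for a virtual circuit. Again, the endpoint and field names below are hypothetical placeholders, not a documented Console Connect API:

```python
# Hypothetical polling loop for virtual-circuit metrics; the endpoint and
# field names are illustrative, not a documented Console Connect API.
import os
import time
import requests

API = "https://api.example-sdn-fabric.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['SDN_API_TOKEN']}"}

def poll_metrics(vc_id: str, interval_s: int = 60) -> None:
    while True:
        m = requests.get(f"{API}/virtual-circuits/{vc_id}/metrics",
                         headers=HEADERS, timeout=10).json()
        print(f"utilization={m['utilization_pct']}%  jitter={m['jitter_ms']}ms")
        time.sleep(interval_s)

poll_metrics("vc-12345")  # hypothetical circuit ID
```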
Yeah. Okay, tell me this.
So I gave the Magic Transit example. The other example I hear from customers is origins, right?
That is, having their origin server, you know, completely off the Internet.
So the only public face of that origin is Cloudflare.
So say I'm that customer, and like the example we gave, I typically see a 100 megabit sustained data transfer rate in a particular metro.
So I order my 100 megabit connection with PCCW Console Connect.
And then I decide I'm going to run an ad during the Super Bowl. And I think my, you know, my transfer rates going to be a little higher.
So what do I do?
Does the traffic just stop at 100 megabits and I'm dead? Not at all. You can log in and turn the bandwidth up as you need, right?
So that's software-defined interconnection.
Everything happens in real time. You can log in, you can look at your network between the two points. Let's say you've got the 100 megabit, and suddenly something happens and you need to increase that 100 meg to five gig, but you just want it for one day.
So you can actually log in and structure it so that you can turn it up, and it takes probably 30 seconds.
And then after that one day, you can actually just bring it down.
So which means that all you're doing is paying for the bandwidth that you're using at that time.
So, you know, traditionally, back in the old days... I shouldn't say old, because I used to sell it.
You know, again, you'd have to go back to the salesperson and go, you know what, I need this bandwidth.
And then the carrier would turn around and go, oh, you know, we need you to commit to that for 12 months if you're going to increase, and it's going to take 30 days.
So, you know, that is the beauty of software defined that you can just log in, do what you need, bring it back down and just pay for what you need, or pay for what you use.
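To make that burst-and-revert workflow concrete, here is a hedged sketch against the same hypothetical API as the earlier example: turn the circuit up to 5 Gbps before the event, then drop it back afterwards so you only pay for what you actually provisioned. The endpoint and field names are assumptions for illustration only.

```python
# Illustrative burst-and-revert: endpoint and field names are hypothetical.
import os
import requests

API = "https://api.example-sdn-fabric.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['SDN_API_TOKEN']}"}
VC_ID = "vc-12345"  # the existing 100 Mbps virtual circuit

def set_bandwidth(mbps: int) -> None:
    r = requests.patch(f"{API}/virtual-circuits/{VC_ID}",
                       json={"bandwidth_mbps": mbps},
                       headers=HEADERS, timeout=30)
    r.raise_for_status()

set_bandwidth(5000)  # before the ad airs: 100 Mbps up to 5 Gbps, live in roughly 30 seconds
# ...roughly a day later...
set_bandwidth(100)   # revert, so you only pay for the burst while you used it
```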
That's, that would be an awkward conversation.
Yeah, I really just want it for a day. Yeah. Okay, that's cool. So one thing that struck me, when I look at our launch partners for this program, is that it feels like there's your data-center-centric interconnection platform and your telco-centric platform.
So I was going to say, why is that? But in some ways it's obvious, if that's the heritage of your company. I guess it'd be helpful to understand the benefits, or what's unique about offering an interconnection platform like yours when you're a telco.
Yeah. One, customers are looking for multiple routes, low-latency routes, but contention is a big issue as well.
Because we are a carrier, and we've got a lot of capacity underneath us, that's a key benefit.
So we don't contend our customers. So if you're ordering 10 gig between London and Paris, for example, then you expect to get 10 gig, you don't want to share that 10 gig with others, or even if it's just 100 meg.
So being a carrier, we make sure that we run a non contended network.
And in some cases, through Africa and through the Middle East, it is hard to turn up 10 gig; there's only certain capacity.
But we don't allow contention on our network. We don't contend our customers.
At the same time, yes, we're in, you know, 350 data centers. Those data centers include, I think, 45 Equinix facilities, there's Global Switch, there's a whole range of data centers.
So customers can get to us in multiple data centers, not just an Equinix.
And also, being a carrier ourselves, we can drag that port, that one gig port, or if the customer just wants 100 meg dedicated, into an enterprise building.
So we've got roughly over 200 local partners around the world.
You know, they range from Joburg to Chile to the US, partners that we have interconnects with, so we can drag that port back to an enterprise building.
So the customer there can have a direct layer 2 interconnect from an enterprise building straight through to wherever they need to go on the platform.
Yeah, interesting. Because one thing that struck me there is you're saying you can provide both the transit, or rather the network, and the platform itself, right?
Yeah. Because what I'm getting at is, we hear that sometimes just having multiple vendors is the worst thing, and that sometimes it's worth buying a product from one vendor just so that you've got this part and this part and this part with the same vendor, so that there's no passing the buck in between.
Because, you know, you're not the only interconnection platform when you're going head to head. Is that something that you find helpful?
Yeah, look, we find that a lot of people, especially enterprises, have multiple vendors for different things, specialists in different things.
I mean, we are also a, you know, Tier 1 IP provider in 200 countries, right?
We have customers that buy their transit from others and we look after their MPLS network.
Or we have customers that have a global network with another carrier, but they have a Console Connect port as part of their solution, which enables that customer, one, to connect to cloud, to connect to anything they need to that sits outside their MPLS network.
So, I think for customers who have multiple networks, it could be another fabric or anything like that, being a carrier, we can do both.
We can sell, or we offer, carrier services as well as software-defined services, all in one.
Yep. Makes sense. Okay.
Cool.
One thing I wanted to, let me find my slide here. There's a good, I was going to show two slides.
I'm going to show one slide, but this will, I think, make this point a little easier to make.
You know, when we talked about the partnership originally, and we were talking about Magic Transit and Origin Pools and potential future offerings, one thing that I've heard talked about since then is that, unless you're a truly cloud-native company born in the last few years, you've probably got some level of infrastructure on-prem and in at least one cloud, possibly two, and increasingly two, because you might get mandated by your audit committee to have multiple vendors, for whatever reason.
And so, one thing that customers are almost telling us is that the nice thing about Magic Transit is that, because it's protecting at the IP space level, you can have the same policies and the same protection across your multiple sets of infrastructure.
So, across your clouds and on-prem, which is cool.
But what struck me is that that makes it even more of a sell to be on, say, Console Connect, where, since we're already connected to you in all of these locations, and you're already connected to all of the clouds, then as soon as the customer is connected to you, connections to all of these things can basically be turned up virtually.
Like, you get it all in one hit, which I think is just a nice fit.
Yeah, and what we've done, especially over the last year, is really make sure that we've got that direct connection to our partners.
We don't use third-party interconnects.
We have that direct connect with AWS, with Azure, Google Cloud, IBM, Oracle.
But we also do that globally, right? So, from AWS in Joburg to AWS in Chicago, we need to connect to all their local regions.
So, customers can get to their data locally, but also internationally.
You could be an American company with your data sitting in IBM in Chicago, but you've got an office in Joburg, so you need to grab that data, right?
So, with a port into our network, you can actually go through our Layer 2 network and grab that data and pull it back.
Okay, cool. All right, we've got about, I think, three and a half minutes left.
So, I wanted to just sort of sum up and sort of make sure we're clear on a couple of things.
So, you know, who's this for? Who benefits from this partnership? And I think, you know, from a Cloudflare point of view, like we talked about Magic Transit, that's a really obvious one.
The edge to infrastructure delivery being, you know, private.
CDN customers who want their origins completely off the Internet and who want that sort of, you know, reliable, secure connection, they can use this.
Teams I don't want to talk about too much; that's more like thinking about what comes next.
Michael, when you look at your typical customer for PCCW, Console Connect and the partnership, anything to add to this?
Yeah, very simple. I mean, talking about interconnections, we are growing.
As mentioned before, we're in 43 countries. Customers can get to us quite simply.
We have a community of enterprises but also carriers on our ecosystem. Customers can log in very simply, register on our portal, and they can check it out.
So, yeah, let's check that.
So, if you're new to Console Connect, go to consoleconnect.com. And for existing customers, say, who are already on Console Connect, or maybe who aren't on Console Connect?
And especially, we very much run a community in our ecosystem.
You can actually, if you're on Console Connect, you can actually contact the Cloudflare team on Console by private messaging on our ecosystem.
And of course, yourself and Tom are there, and also some of your sales teams, for questions and interconnects, and you can certainly light up services.
It takes about 30 seconds to interconnect the two platforms.
Yeah, it's good to know. So, for existing Cloudflare customers, talk to your account team.
But the beauty about software-defined interconnection is everything's online.
You know, you do it yourself, but there's always a team. As mentioned before, we've got 59 offices around the world.
There's always a dedicated salesperson that can talk to the enterprises.
Sure. And then existing Cloudflare customers, you know, they'll have an assigned CSM or SE.
So, that's a good opportunity for them to get in contact.
All right. We're out of time.
So, thank you, Tom. Thank you, Michael, for joining. Thank you for hosting.
It's my pleasure. I think it's good to sort of, you know, cover things at a high level and technical level.
So, I was appreciative we could do a bit of both.
And thanks everyone watching. Looking forward to seeing you again on the next episode and looking forward to the partnership extending, Michael.
Thank you. Thank you for having me.
Thanks, Tom. Thanks, David.