Hardware at Cloudflare (Ep3)
Presented by: Rob Dinh , Chris Palmer, Steven Pirnik
Originally aired on August 31, 2020 @ 6:00 AM - 7:00 AM EDT
Learn how Cloudflare powers hardware across 95+ countries.
Original Airdate: July 21, 2020
Transcript (Beta)
Good morning everybody. This is Hardware at Cloudflare episode 3. This is your friendly host Rob Dinh, your Systems Reboot Engineer at Cloudflare.
I am joined by Steve Pirnik and Chris Palmer, from Hardware Sourcing and Infrastructure respectively.
And today we're going to be talking about power. How do we power Cloudflare?
So, you know, there's no Cloudflare without hardware, and there's also no Cloudflare without the power that powers that hardware.
And it's something that I feel like we're in a pretty unique position in the industry.
You know, for the sake that we have 200 POPs, or more than 200 POPs, in more than 95 different countries, which means there are all these standards of power that we have to respect.
Each colo has different power profiles. Each colo also has a different number of racks, a different number of servers.
And we've talked about in our previous episodes that we have different generations of servers that require different types of power.
So we are here with Chris and Steve, and they can explain a little bit more about how we do that and how we manage all of this.
Do you want to go ahead and share the slide deck, Rob?
Oh, yeah. Yeah, just turn it off.
Great. While we're doing that, I'll just give a brief intro of myself. My name is Steve Pirnik.
I work on the hardware sourcing team. I work very closely with the SRE infrastructure engineering as well as hardware engineering.
So some of the topics. So if we go to the next slide, Rob. As Rob alluded to, we're in more than 200 cities today, and the breadth of that is over 95 countries.
And you can see that a lot of different countries have different power profiles, different rack densities, different safety electrical requirements for each of those locations, which adds a very compounding complexity to kind of the environments that we deploy in today.
And as you can see, I pulled some data sets from AFCOM as well as 451 Research.
After surveying a lot of data centers, you can see the rack density, which also equates to the different profiles of power delivery you can have, be that AC, DC, single phase, three phase, and some variants in between.
You can see that a lot of these co-locations move up to the four to 14 kilowatt range and then taper off on either side from there.
I colored the chart on the left side here a little bit.
So if you kind of take a look at the four to six kilowatts, I highlighted this one purple.
So I would label this one as kind of the Cloudflare traditional rack density.
So when you think of co-locations, you might think of three, four, five, six different servers in a rack, and, you know, maybe four kilowatts or so.
I would say that, as the trend has grown to increase rack density with co-location providers, due to evolving technologies and higher TDPs, Cloudflare has also started to shift its focus to the green section here, which is the seven to 10 and 11 to 14 kilowatt range.
And as that trend of rack density continues to go up, Cloudflare will also shift its focus to that and then make design decisions around that.
We could go to... Just wanted to stop, to try to clarify.
So that 33% between four to six kilowatts. Sorry, Rob, I didn't hear that.
Typically, is that 33% of the colos in our fleet?
Oh no, apologies. This is a survey of data centers.
So they were surveying what the rack density is by provider, and 33% of the feedback was that this is their primary rack density.
So it's just a comparison saying that this is where we traditionally focused.
And then the green is where the industry is starting to focus, as well as where Cloudflare is starting to focus, for rack density.
Right. So we're trending towards more powerful, more power-hungry machines as an industry.
Exactly. If you take a look at things like CPU and accelerator TDPs, which continue to go up, that equates to essentially higher power per box.
Okay. Oh yeah.
So just before we continue on, if you have any comments or questions regarding anything that we see on the slides or whatever we say, feel free to email us at livestudio at Cloudflare.tv.
Those questions will be forwarded to us, and we can do our best to answer them for you.
All right, before we move on to the next slide.
So what do all these rack densities, connectors, certifications, and everything kind of equate into?
It equates into this next slide. So Robbie, if you switch to the next.
This essentially means we become hoarders of power connectors. So you can see that there's a multitude of different connectors, stemming from IEC standards to US NEMA and UL standards.
So there's just a whole host of kind of configurations and offerings that we have to choose from, as well as make design decisions around to support the kind of the global footprint that we have today.
If we go to the next slide. My first reaction is when I look at this, it looks like my drawer full of chargers.
Yeah, pretty, pretty much. Different adapters and everything.
We can sort of imagine this is how we have it in our data center.
And just something to organize, I guess. Right. Yes. And if you go back to the last slide, actually.
For those familiar with the issue of SKU proliferation: as you increase optionality, you increase the number of SKUs that you have to house.
Cloudflare traditionally had a SKU-prolific environment with PDUs, where we had up to 62 models of PDUs, I believe.
So each one had different feature sets, different connectors, different outlets, different power monitoring capabilities.
So each of those just increases the complexity of, you know, what you can ship to where, in a given environment.
Yeah, I can only imagine. It's actually really hard to manage all of this because it's not like I can just keep track of how many outlets are in every PDU at every colo or something.
Yeah, it becomes extremely difficult, but I think some of the great work that Chris has been doing, which he'll touch on later, has helped resolve some of this complexity.
So if we go to the next slide, you can kind of see how our power footprint globally is broken out.
So no percentages, sorry, but we can just make references off the size of the pie charts here.
The top right graph essentially splits that out.
When we look at our network, I want to say we're about 98% AC power.
So when we look at that split, we look at single phase versus three phase.
What is the split between those? And if we reference back to the first chart, you saw that Cloudflare traditionally was in that four to six kilowatt rack range, which is your single phase power profile. And as colo density has continued to increase, Cloudflare has shifted its focus into more dense environments for less footprint,
which is starting to increase our three phase power footprint.
And then if you take a look at the left side graph here, you can see how that's broken out by connectors. I think there are currently nine different connectors in our fleet today, which used to be a lot more, but we thankfully have been able to reduce by setting standards. A lot of these different connectors come from global certification safety requirements as well as different colo offerings.
So if we go to the next slide, you'll see why we have a lot of different connectors. If you take a look at the slide on the right, you can see that a lot of these colos have different voltage offerings.
So primarily in the US, you'll see things like 208 volts for your three phase power delivery, or 120 volts. And then when you look at the left side of the graph here, you'll see your 400 volt three phase and your 230 volt single phase, which are primarily for international, EU locations, as well as the rest of the world, I should say, actually.
And then when we look at the chart on the left, you can see that, because Cloudflare operates in such a vast environment,
we have a lot of different rack designs and types. Sometimes we have to adhere to the colocation standards,
so a vertical PDU is not always something we can fit into a space.
So we have a segment of our network which is actually dedicated to horizontal PDUs.
You can kind of see how the two are split out there. Yeah, I'm trying to think about it, because back in the day I had that decision too, and we've talked about it, Steve, a few years ago, about horizontal versus vertical. For some reason, I think, if I remember correctly, it made more sense for us to go horizontal for the number of servers that we have.
And also, you know, just the way we were designing our racks and how we stacked our servers,
it looked like it made more sense to have horizontal.
So it looks like we still have majority vertical here. Yeah, and I think Chris will have some strong opinions on this, but to hark back to some of our older blogs we released, for those familiar with our Gen 9 and 9.5 systems,
we used a box which was very large and heavy, and it had a long depth of, I think, about 38 to 40 inches.
So having vertical PDUs at the back of a rack, when you're trying to plug a power cable in, means that when you try to service the box from the rear, you now have a bunch of power cables in the way that you cannot fiddle with. Oh, do you have a photo of it?
No. Oh, I see.
So what happened is we ran into this problem, so we switched to horizontal PDUs to alleviate
the connector problem,
so you could easily work on a blade chassis from the rear without bumping into power cables. But that brought forth new issues, which I'll let Chris touch on. Yeah, we now deploy both, and in many co-locations we do vertical and horizontal on the same rack to make the rack more dense.
And I guess as a four way resilience then we share the servers and network gear across four PDUs, like two A's and two B's.
Just so we can go denser and save some money on rack space as well.
Some co-locations don't like that, because it interferes with their cooling.
Actually, we have a limit on cooling as well.
It's not just power in the data centers. The majority will go vertical, and then we'll add a horizontal as a second primary PDU, to go denser.
But the problem with the horizontal PDUs is the size of them. You know, we try to stick with a 2U model.
So when we deploy two, it's 4U, taking up rack space.
The problem is that it restricts us by the number of ports on that PDU. I believe it's 24 ports on the current standard ones we deploy.
And on the vertical ones we have 30 as a minimum.
So you can see, that's six devices that we couldn't power off the horizontal that we could off the vertical.
Hence why we use vertical now, so we can go denser in co-locations.
How about kind of the, when you're looking at horizontal, Chris, and you're looking at a three phase horizontal.
What are some of the limitations that we've encountered over the last year, you would say?
Oh, the breaker is the biggest one. They're still 20 amp breakers.
So if you've got a 32 amp PDU, three phase, that's 12 amps per phase that you can't use.
Just because of the model, how they do it, and the engineering behind that PDU.
So on certain PDUs, the plug is rated to 32 amps three phase. So it's 96 amps, but we can't actually use all of that.
So we have to be very careful when we pick PDUs, the models and the specification and where we're sending them and the amount of load that we're going to put on them, to make sure that we don't trip a breaker.
And that's the last thing we want to do is have an outage because of a simple item on a PDU.
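The stranded-capacity problem Chris describes can be sketched as a simple per-branch check (the 20 A breaker size and the 0.8 continuous-load factor are assumptions for illustration; check the actual PDU model's spec sheet):

```python
def check_branch_loads(branch_loads_amps, breaker_amps=20, derate=0.8):
    """Return the indices of any branches whose planned draw exceeds the
    breaker's continuous limit.

    With 20 A branch breakers and a 0.8 continuous-load factor, only 16 A
    per branch is safe, so 12 A of a 32 A three-phase input per phase is
    effectively stranded, as described above.
    """
    limit = breaker_amps * derate
    return [i for i, load in enumerate(branch_loads_amps) if load > limit]

# Three branches of a hypothetical 32 A three-phase PDU with 20 A breakers:
print(check_branch_loads([14.5, 16.5, 10.0]))  # branch 1 is over the 16 A limit
```

A planning tool built on this kind of check is how you avoid finding out about a tripped breaker from an outage.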
Yeah, I was always wondering about it, because to me it made more sense, like how Steve was talking about: there was a lot of physical obstruction.
We weren't using vertical PDUs just because of how deep our four-node chassis were for Gen 9s.
And you can talk about from like Gen 6 to all the way to Gen 9, which is I think it's still like a big majority of our fleet.
And when we try to shove these servers into colos, the colos that we have are mixed, from having our own cages, depending on the load.
And there are going to be some colos with lighter loads that only require one rack, and that one rack can actually be located in one of those shared space locations.
You can have those in rows, and your typical rack is, what, 42 inches deep?
And I don't know how to do the math to make this into the correct unit.
What is it? 800? I forgot. 1200 mil by 800 mil. Yeah. I mean, essentially, our chassis was almost as deep as the rack itself.
And for us to try to shove more servers in, with vertical PDUs that would occupy the entire elevation, or the height, of our rack, this didn't seem to make any sense.
And when we try to rack and stack, there was a couple of times where we had to actually instruct the technicians to actually bend a few things.
Some of them, I told them, you can just bend those rails a little bit, just trying to get fit or whatever.
I just wanted to try to do away with it, but I guess it does make sense. If anything, it does surprise me that we needed that many outlets.
It still requires vertical, though.
Yeah, I would say some of the outlet requirements, though, kind of stem from the different generations and types of hardware we deployed.
Because if you look at outlet profiles, you have your C13, 14, 19, 20, and then you sometimes have your L5-15 connectors, your general practice use ones.
So that was a limiting factor because, as you know, when we moved to Gen 9, 9.5, we changed from the C13, 14 to the 19, 20 configuration, which means we had a large fleet of PDUs, which focused on the 13, 14s with only about six of the 19, 20 configuration.
So that was also a limiting factor. Yeah, and just to try and clarify for the audience, C13 is sort of your more standard, more typical outlet that we use in hardware.
I don't think it's just for Cloudflare either.
I think for many companies, they use C13, C14. That looks like your rice cooker power cable, is how I think about it.
Or your PC. Yeah, your PC has a C13 power supply.
And our servers used to typically take, oh shoot, if my memory serves right, 1200 watts across four nodes.
And to have redundant C13 power supplies was enough.
And when we came out with Gen 9, the jump from Gen 8 to Gen 9 was to double the cores in our CPU, and that multiplied our CPU power by, I forget, but it was about a 1.5x multiplier for the CPU.
And if you bring it up to a system level, that 1200 watts for your four nodes in Gen 8 jumped up to almost 1600, or almost 1800 if we wanted to do some power stress, which I have a bit of a story about.
But that did require us to upgrade our PSUs from something that was rated at 1600 watts.
We were not going to design a server that was going to do above 1600 watts and then let ourselves use 1600 watt PSUs.
So the upgrade was, we wanted to go to power supplies at 2200 watts, and that required moving from C13 to C19, which is much less common, more expensive power supplies, I think, right?
They were harder to source, if you can correct me on this one too.
Even the cables themselves were actually harder to source, and the PDUs also didn't have as many of those outlets, or at least there was a smaller market availability of those.
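The connector change follows from the IEC 60320 coupler ratings: C13/C14 couplers are rated for 10 A and C19/C20 for 16 A. A quick sketch of the resulting PSU input ceiling (nominal coupler ratings only; real limits also depend on the cordset and certification):

```python
# IEC 60320 coupler current ratings (per the standard; actual PSU limits
# also depend on the cordset and local certification requirements):
COUPLER_MAX_AMPS = {"C13": 10, "C19": 16}

def max_psu_watts(coupler: str, volts: float) -> float:
    """Rough ceiling on PSU input power for a given inlet coupler."""
    return COUPLER_MAX_AMPS[coupler] * volts

print(max_psu_watts("C13", 208))  # 2080 W: too tight for a 2200 W supply
print(max_psu_watts("C19", 208))  # 3328 W: headroom for the 2200 W PSU
```

So at 208 V, a C13 inlet simply cannot feed a 2200 W supply, which is why the jump to C19 was forced.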
Yeah, it's also, it's got six ports on each PDU.
So it means we had to send six servers to most locations, until we had to add extra whips to get extra PDUs in there.
Or, you know, some of the times you just had to get another rack to send more hardware there to give us more capacity.
Was there more to these slides here?
From my end, I think, you know, kind of also give kudos to the industry when realizing that there's these kinds of issues for, you know, cloud providers like ourselves.
You know, kudos to ServerTech and companies like Vertiv slash Geist, you know, for doing the combined connectors, where you can put your C13 or C19 connector into the same slot.
It really is a, it's really a big game changer and kind of saves a lot of headaches for us.
Because we got into these issues, right, where we go generational iterations between power profiles.
What are you saying? That we have cables with a C13 on one end and a C19 on the other?
No. So PDU manufacturers like ServerTech and Geist went from having just the traditional C13 connector to essentially allowing you to plug either a C13 or a C19 cable into the same outlet.
So you never had to worry about, do I need to retrofit a new PDU?
Because all you would have to do is remove an adapter piece and plug in the new cable.
That equates to fewer outlets overall, but given the power footprints in the data center, your plug density is rarely an issue.
It's more along the lines of what do you have the exact outlet that you need.
Right, right. I guess I can only imagine, with the trend that you mentioned earlier, that PDUs moving forward are going to have more C19 outlets then.
Yeah, and I think ServerTech and Geist realized that trend, and they moved to this kind of universal connector for that purpose.
So you can connect higher amperage devices and still, you know, operate safely.
Oh, okay. I'm just trying to move on to the slides that we have here.
Let's go ahead and back to share those things here. I think the next one is where we hand off to Chris and I stop talking now.
Yeah. Powering one, two, nothing, yeah.
Hello, colo. So we made our own dashboard. I know a lot of companies make their own power dashboard, and some companies will bring in a third party, but we made our own to monitor our own power at every single PDU.
As you can see, this is a rack.
This rack has four PDUs. You can see the phases of each load. You can see down the bottom right hand side, the amount of power pulling out of each breaker, each branch, the average to max and the current levels.
And then at the top, you can see the peaks in amps, kilowatt hours, and the large usage in kilowatts and the peak at kilowatts.
And that helps us overall plan for capacity and allows us to see usage on, you know, when co-location have power maintenances and, you know, they're going to lose primary feed or redundant feed and we can see how the other PDU will pick up the load and make sure it's all balanced.
As you can see, we spend a lot of time on power and trying to get things balanced pretty well.
You know, 0.5 amps out at most, I think; this whole rack is very well balanced.
So we take a lot of time and a lot of research that has been through the years of learning the power draw of all the devices and servers that we deploy in these locations.
This dashboard is still in its working stages.
I created it, and we handed it off to SRE to fix all the bugs and get it working 110%.
But yeah, this is pretty good in helping us learn and develop across the globe.
So, yeah. Thanks, Chris.
So, you know, my understanding is that a lot of companies actually rely on things like DCIM, which kind of limits the data extraction, as well as the data visualization, you can do.
So I think, you know, if anyone watching is actually interested in kind of deep diving on how we collect these metrics and, you know, what our kind of what our data collection methods look like, you know, please comment and let us know and then we can do a follow up because I think that's a great idea.
Because I think this is extremely important to a lot of maybe like colocation providers, small shops, you know, that don't want to spend a bunch of money on some kind of DCIM solution, but they want to, you know, get actual quality data so they can, you know, make growth plans themselves.
And it's also good to understand when you're paying for a smart hands to remotely install your equipment that they're not just overloading one phase or just one breaker or two breakers.
We can actually see the branches here. So a PDU can have many different branches, and each branch is generally on a 20 amp breaker.
So you get to see the load. It all depends on how your power topology is. You can have everything fed off your primary feed,
so when the primary feed fails, the redundant feed will pick up the whole load.
We tend to break it 50/50 over primary and redundant.
So we don't want our breakers getting more than 10 amps, because otherwise, when we fail over onto the other PDU, it will trip the breaker and we'll lose that many devices on each branch.
And that's normally around about six devices that you could lose, sometimes 10.
So that's a lot of capacity and a lot of devices lost, just because we didn't power monitor and understand the draw on that PDU.
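That 50/50 failover rule can be sketched as a check (the 20 A breaker size and 0.8 derating factor here are illustrative assumptions; the real limits depend on the PDU and local code):

```python
def survives_failover(a_feed_amps: float, b_feed_amps: float,
                      breaker_amps: float = 20, derate: float = 0.8) -> bool:
    """If either feed fails, the surviving branch must carry the combined
    load and still stay under the breaker's continuous limit."""
    total = a_feed_amps + b_feed_amps
    return total <= breaker_amps * derate

print(survives_failover(7.5, 7.8))   # True: ~15.3 A combined fits under 16 A
print(survives_failover(10.2, 9.9))  # False: 20.1 A combined would trip the breaker
```

This is exactly the kind of check the dashboard lets you do before a colo power maintenance, rather than during it.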
So we always double check, every time we do a deployment remotely, that it's installed on the correct phase and in the correct socket, so it matches our database.
Yeah, I think that's actually really cool. When I think of something like this, I do think of DCIM solutions, things that we can look outside and just have this implemented into a whole entire fleet.
My first thought would be, well, how could this work with the 200-plus colos that we have?
Is this something scalable?
So what we're looking here is one rack, is that correct? Yeah, this is just one rack, yeah.
Okay. And I think it does deserve maybe another episode to talk more and deep dive into how we built this tool.
But can you just give us a little bit of a summary of how we built this thing?
How did we make this possible that it's something that's homegrown to us?
So we use, there's a guy in SRE called Dan, I've forgotten his last name.
He's based in Singapore. So he pulled out the metrics and really went into what the server tech PDUs can do.
And we managed to find out that we can pull out all this information from the breakers.
So we use Prometheus to put the data in. And it all has an IPv6 address in the PDU.
And then it comes into this dashboard. And then we do the maths behind the dashboard to sometimes, you know, convert it because obviously 208 volt and 230 volt are different in maths because of different power topology.
And that's how we get the result.
We can dive much, much deeper into that, but I'd probably bore a lot of people by going into all the depth of how that's done, the metrics behind it, and the coding and everything we've done to make this work.
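As a hypothetical sketch of the kind of math the dashboard does once Prometheus has scraped a PDU over its IPv6 address (the sample names and shape here are invented for illustration; the real exporter and metric names may differ):

```python
# Hypothetical per-phase samples pulled from a PDU:
samples = {
    "phase_1": {"volts": 230.1, "amps": 4.9},
    "phase_2": {"volts": 229.8, "amps": 5.1},
    "phase_3": {"volts": 230.4, "amps": 4.8},
}

def total_kw(phase_samples: dict) -> float:
    """Sum per-phase apparent power in kW. A production dashboard would
    also fold in the power factor reported by the PDU, and handle 208 V
    three-phase topologies differently from 230 V single phase."""
    watts = sum(p["volts"] * p["amps"] for p in phase_samples.values())
    return round(watts / 1000, 2)

print(total_kw(samples))  # ~3.41 kW across the three phases
```

The per-branch and peak figures on the dashboard come from the same sort of arithmetic, just over more metrics.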
Yeah, I mean, I see this more as monitoring.
As in, everything has to look the way we expect it to.
And if it doesn't, then at least we have this tool.
Maybe even the luxury that if a breaker went down, or a power supply went off on one of our servers, we can see it happening at a certain time.
And, you know, how do you use this?
So when we try to look at a rack design, right, how can we figure out, hey, we have too many servers in that rack, maybe we should provision another rack?
Yeah. So we basically work it out by the whip, and then we have a design that we work off.
So depending on the power and the delivery from the colo, say it's three phase 32 amps, then, you know, we'll spread that load over the phases.
If it's a single phase, it's very easy to overload a rack because it has more ports.
It still has 30 ports, like a three phase PDU has. So people can then mistake it, you know, the PDUs and the racks and accidentally rack it in the wrong rack and plug it in because it's still going to get power because it's 50%, right?
But we don't operate 100% of each, each PDU. And then when we go to deploy some more servers, we always check what's in that rack and look at the, go on the dashboard and look at the power draw and then figure out if there is a problem or if there's not a problem.
So this will tell us, because we really know the model of the rack via our database.
And so we really know the limit. So this one that you're looking at is a 16 amp three phase PDU.
So we know we've got plenty more power there.
And then we'll go there and work out the maths, see how much power is left, how many devices and what generation we're going to send there, and go from there.
I would also say that, from an engineering perspective, this is an iterative improvement, in the sense that, in order to have a solid understanding of your rack designs and how they change from location to location, because traffic profiles are different, this tool really feeds into a constant-improvement, Kaizen-esque mentality: what is the expected power draw, and how do we optimize efficiencies between branches and phases?
And things like that. And as well as also this, you know, tools like this are extremely important to like our operations teams when they look at like colo expansion planning as, as well as even just auditing colocations.
And a lot of cloud companies, I think, might take at face value the report that they get from a colo provider and say, this is correct,
just make the payment, without having the ability to monitor or audit it themselves.
And, you know, through having tools like this, we've actually been able to audit and say, like, you know, actually, this was not correct.
Why is this built like this? So it's, you know, engineering is very important, but also kind of operational and how you manage the growth of your company is extremely important as well.
Yeah, I could see that there's going to be some sort of decision that we have to make, let's say we're going to expand our colo, the demands in that colo is going to be much higher, and we need to fulfill that demand.
So like looking back at our example, then, like, we can look at, you know, how much there's overhead left, right?
Or we can decide if we need to have more racks, or maybe upgrade the PDU? Is this a thing that we can do?
Sometimes it's 18 kilowatts, and sometimes it's 25.
So it all depends. And then we have all different cooling.
So we have colo containment that we deploy in 90% of the colos that we're in.
You have full containment, or you have a bath. Now, a bath is exactly the same as full colo containment, but it has no ceiling or roof.
It just acts like a bath in your home, where it keeps the cool air in all around the servers and lets the hot air drift up to the top and go out.
And that's all down to the colo design, the cooling topology, and how that works within that location.
But sometimes we hit some circumstances where the data center cannot upgrade, and they can't give us three phase for the next rack.
It has to be single phase.
So sometimes we just have to work with them, and sometimes we have to put four PDUs in one rack to make it more dense and get where we need to be.
Right. So it's sort of your preconditions; you can't ask the colo to change them.
We can't just say, can you make it cooler or something. Yeah. So we obviously have limits, and they have a limit to a room.
There's one location, you know, where their cooling is 120 kilowatts of capacity per CRAC unit, which is 34 RTs, refrigeration tons.
So we know our cooling limits, and we can get three of them in a hall.
So that's 360 kilowatts and, you know, 102 refrigeration tons of cooling that we can have.
So it all depends on us and how we want that power delivered and the rack density and how we see our future grow.
And that's how we make our decisions.
We try to work with the colo to understand our future goal, as well as their limitations, so we don't block ourselves in the future.
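The hall-level budget Chris quotes can be checked with simple arithmetic:

```python
def hall_capacity_kw(crac_units: int, kw_per_crac: float) -> float:
    """Total hall power budget implied by cooling capacity.

    The 120 kW per CRAC unit figure is the one quoted above for one
    specific site; other halls will differ.
    """
    return crac_units * kw_per_crac

print(hall_capacity_kw(3, 120))  # 360 kW for the hall, matching the 102 RT of cooling
```

Power delivery and cooling capacity are planned together, since either one can become the binding limit.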
I have a question for you, Chris.
Yeah. How does Cloudflare handle DC power? Because I know that exists out there.
Yeah. So we have a few sites with DC power now. I think maybe 15, correct me if I'm wrong, maybe 15 sites.
So we use a DC to AC inverter. And the one we currently use has three 63 amp supplies to it.
It gives us a 4.5 kilowatt or roughly a 20 amp limit that we can use.
That also has a smart function where we can add that onto the dashboard and see a live inverter and see the power draw off it.
So we're still learning with DC as a company, and working with the co-locations, because it's not the norm.
It takes a lot of effort. Some locations will request us to work out the length of the cable and the size of the cable for them.
So we get the actual draw. With DC, the longer the run, the bigger the cable you need to push the same current and still get the actual power delivered at the cage.
If it's a foot away, you need a smaller cable. If it's 20 feet away, you'll need a bigger cable.
So we work closely with them to understand the limitation.
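The cable-sizing relationship Chris describes is basically DC voltage drop over the round-trip run. A rough sketch (copper resistivity at room temperature; real sizing must follow local electrical code and derating tables):

```python
# Resistivity of copper at roughly 20 degrees C, in ohm-metres.
COPPER_RESISTIVITY = 1.68e-8

def min_cable_area_mm2(amps: float, run_length_m: float,
                       max_drop_volts: float) -> float:
    """Smallest copper cross-section (mm^2) that keeps the round-trip
    voltage drop under max_drop_volts for a DC feed.

    V_drop = I * R, and R = resistivity * (2 * length) / area for the
    out-and-back run, so area = 2 * L * rho * I / V_drop.
    """
    area_m2 = (2 * run_length_m * COPPER_RESISTIVITY * amps) / max_drop_volts
    return round(area_m2 * 1e6, 2)  # convert m^2 to mm^2

# A 63 A feed with a 1 V drop budget: a short run needs far less copper.
print(min_cable_area_mm2(63, 0.3, 1.0))  # ~0.64 mm^2 for a foot or so
print(min_cable_area_mm2(63, 6.0, 1.0))  # ~12.7 mm^2 for ~20 feet
```

This is why colos ask for the run length and cable size up front: the same 63 A feed needs roughly twenty times the copper at twenty times the distance.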
Some of our network gear has DC power supplies, so we can hardwire straight into the breakers of the co-location PDU.
But yeah, I'm interested to talk to companies if they have other inverters out there and technologies they want to share with us that are rack mounted.
What region do you primarily see DC power in when you're doing site planning?
Brazil is the biggest one at the moment.
Brazil is quite a difficult one. It all depends on the company as well and whereabouts and what we're doing with that data center or that co-location depending on the power.
I'm trying to understand a little bit.
Does it look like we're going to run into a situation where we have a mixture in a rack of DC and AC then?
Yeah. And we do have that sometimes because we could be the last customer in that co-location.
So we have to take what the co-location can give us.
We're there for different reasons for connectivity, trying to get closer to our customers.
Obviously price is always another one.
So for those reasons, that's how we rate them. So sometimes we have a mixture: sometimes a rack of DC power, and next to it a rack of AC power.
We try not to mix the two powers in the same rack, to manage what server is plugged into what, and make sure that everything is done the way we requested it to be done.
As many companies out there, you can request something and sometimes it doesn't get delivered the same way.
But again, we're still learning. We're still developing.
I would say from a sourcing perspective and then working with the hardware engineering, network engineering groups, we've made some strategic selections around having equipment that supports DC power.
So like having DC capable power supplies.
So if you think of our current generation systems, Gen X, it's a smaller footprint box that consumes less power overall, which allows us to actually swap between AC and DC power supplies.
Whereas if you look at the previous Gen, Gen 9, the power consumption was too high for the current market availability of a DC capable power supply.
So we've made some decisions to, you know, kind of design out the need of inverters, but we still have a lot of this kind of legacy equipment that's out there that still requires them.
So older legacy fixed placement networking equipment, which doesn't have removable power supply.
Kind of, you know, very similar legacy devices out there that we need to, you know, slowly go through and pull out and retrofit, so we can get rid of things like inverters, hopefully one day.
Oh dang. It's actually new to me. I had no idea that we actually use DC.
I always thought we were just all AC and to think that we were probably mixing it in the same rack, I don't know.
You guys already explained this one, but I guess it is interesting if we have, you know, one rack that's AC, and in that same colo, I assume sitting right next to the AC rack, we have a DC rack.
Does a colo provide the inverter or is this on us to find a way to convert?
We have to source and buy the inverter. And we've had cases where it gets shipped damaged just because of how heavy it is and the size of it.
And they can be quite hard to source again.
And there's a lot of lead times, correct me if I'm wrong, Stephen, to get these.
And they cost a lot of money. So we don't want to keep massive amount in stock because of how much they cost.
But we now have a new system and a new way we make sure that we have enough and we can then grow.
The model we use, you can add a module, and basically each module is around about one kilowatt, and it holds four modules.
So we can send it out there with two modules or one module to get us going.
And then as we deploy and grow, we can then send more servers and send another module to keep us going.
So then we don't have the problem and we don't have to send everything at once.
We can then just grow with the same inverter.
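The grow-as-you-go math behind that modular inverter can be sketched roughly like this. The roughly-one-kilowatt-per-module and four-module-chassis figures come from the discussion above; the function names and the idea of raising an error on an overfull chassis are illustrative, not Cloudflare's actual tooling:

```python
import math

MODULE_KW = 1.0   # assumed rating per inverter module (about 1 kW each, per the discussion)
MAX_MODULES = 4   # the chassis described holds four modules

def modules_needed(rack_load_kw: float) -> int:
    """Return how many inverter modules are needed to carry a given rack load."""
    needed = math.ceil(rack_load_kw / MODULE_KW)
    if needed > MAX_MODULES:
        raise ValueError("load exceeds a fully populated inverter chassis")
    return max(needed, 1)

# Start small, then ship more modules as servers are deployed.
print(modules_needed(0.8))  # initial deployment: 1 module
print(modules_needed(2.7))  # after growth: 3 modules
```

The point of the design choice is visible in the numbers: you only stock and ship the modules a site actually needs, instead of keeping whole spare inverters in inventory.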
And I guess, how do we get ourselves into that kind of situation, right?
I mean, it would be hard for me to just remember that this rack is a DC rack.
Or like, why did it have to be DC if the other racks are AC in the same colo?
Is this like a flexibility thing?
Or is this something that the colo just said, no, we're good,
you're just going to have to go with DC? Sometimes it's just what they've got left.
Yeah, so we can be the last customer in that colo, right? Yeah, you know, two or three racks left.
They've only got one, you know, AC rack left, and the rest has to be DC.
And we don't want to have to move because sometimes it's really important for us to be close to our customers, given the privacy and the connectivity that we strive for as a company.
So, you know, we've learned to develop and to work with the co-locations, or to make that happen for us and then for our customers.
We don't want to have to not be somewhere just because of DC power, because we can figure that out.
I would, I would say, when you look at Cloudflare, like we're one of the most interconnected Internet exchange populated companies on the planet right now.
So, you know, putting equipment that's, you know, 60, 80, 100 miles away is not, you know, an adequate service for our customers.
So whatever, whatever kind of accommodations we can make to the colo to further expand for whatever is available at the time is essentially the mentality that we take, right?
Because we want to make sure that the customers are getting a constant kind of like a SLA driven kind of performance out of what they expect when they sign up for Cloudflare.
Yeah, yeah. I mean, that goes well in line with our philosophy with hardware, right?
We're not tied to a certain technology.
So it does make me smile that we can actually do that. That's, that's super cool.
And I assume that that also extends to network gear too, if anything, maybe more, especially network gear that has DC.
So is there something like in, in your world, Steve, that you have to ensure that the SKUs are going to be correct or, you know, that you have the correct cables and like, is, is DC or AC when you do rack design any different?
Like, like if we go back to the screenshot, Chris's screenshot about, you know, the phase one, two, threes and all.
Is this going to look different?
Or like, how do we, how are we mindful that this is going to be a DC rack and we have to handle it differently?
It's going to look different in regards to what your power delivery unit looks like.
It's not a traditional PDU in the sense that your DC is technically, or typically I should say, you know, two connectors, right?
Your positive and negative, and they're going into a shelf or a box.
And then that box essentially just looks like a really large horizontal PDU.
And then you put your connectors into it. So it's not the traditional, traditional kind of, you know, plug a cable into something because that's just not how the terminations work on DC power.
So physically, it looks different from a monitoring perspective.
You know, as Chris mentioned, their, their network connected, we can monitor them on the network, just like any other PDU, and then we can make modifications as needed.
I would say, though, that the baseline requirements for working with AC versus DC power are significantly different, in the sense that you run a higher risk of, you know, fires and accidents, possibly even leading up to death, if you improperly handle DC power, whereas with AC power, you plug in a cable and just leave it for the most part.
That's something that Cloudflare's had to do a lot of planning around to ensure that when we're deploying this stuff out in the wild that whatever system we're using, it's completely safe and, you know, only trained personnel are kind of installing it at the site.
So we take a little extra time to say, does your site understand how this works?
And do you have the proper equipment and tools in place to, you know, install this equipment for us?
Yeah, yeah, I'm still trying to wrap my head around it, as I'm just learning this stuff.
Because we have voltage is going to be different, right?
Yeah, so it's normally 48 volts. And that's where you have the big amperage.
So it's 60 amps, 63 amps. That's where you have the supplies to convert the power back into AC, which will be 230 volts.
So it's a, it's a big difference.
It's a big difference. And like Stephen was saying, the connectors, they're hardwired in and into the inverter.
And as the inverter is a 2U inverter, we always make sure there's a 1U gap on top of it,
so the cables have enough bend radius, and a server doesn't get caught in them, or a technician doesn't scrape his arm on them if he's got to, you know, power cycle a server. We try to think of all these things when we're deploying and doing a rack elevation for a DC rack.
It's just for the safety aspect, and for the network outside; we don't want to see an outage because someone leaned in and for some reason the inverter's power cables came loose and it brought down the whole rack.
We don't want that. So we take precautions and leave extra gaps for this.
Okay, I mean, I guess it could be a nice little segue to talk about redundancy.
So I can't assume that every colo is built to a certain standard of power.
When you have 200 colos, some of these are way out in the corners of the world, where the standard is probably not going to match some of the bigger ones we have in, you know, Europe, or developed countries in Asia, or the Americas. So do we have policies when we select colos on how we're going to power them?
So let's say this colo just doesn't work for us.
What are those checkmarks? Yeah, that's our TLDR on this one.
Always audit your colocation provider's single-line diagram; never trust what they tell you at face value. Too many times they'll tell you something's redundant when it has a single point of failure in the chain.
So we do spot a few.
I do that as part of my job. So when we're looking at a new location, or a contract is up for renewal, we ask for single-line diagrams of the power topology and the cooling topology, and we'll go through the maintenance logs.
And some data centers will do N+1 to the main switch room, but then off the main switch room to the PDUs, and at the rack, cage, and hall levels, there will be a single point.
Or you'll have two feeds off of the same UPS, or the same board they're feeding from, to your rack.
So your rack technically has two feeds, but if their back-end UPS goes down, you then lose both your power supplies.
Yeah. So now, you know, one of my jobs is to do the power topology review and look into a data center's topology before we sign a contract, and make sure that we're happy with how it's rated. It also depends on what tier it's certified to.
That's a huge thing. If they've spent their money and got a tier rating, that normally secures that what they're giving us is right, but we like to double and triple check before we sign a contract that that is the case, that we're paying for what we're getting, and that we won't unexpectedly find out during a power outage that we're losing the power feed to our cage.
Sorry. Yeah, I don't think it's something that we can just simulate either, right? We can't just put this on a graph and have it tell us, that is going to fail.
I mean, do we have the power, like, can we push back on a colo, like, hey, we can't set up shop here
unless you do these things? Or is it just sort of... It all depends.
With legacy sites, it depends on what contract we've got.
Sometimes we work with them, and the colos are very interested; they want to, you know, change and develop and understand what was wrong. Sometimes they didn't have the right staff at the time who understood the power topology, because sometimes other companies will buy a legacy data center.
So, you know, they're left with what they got.
So then we help them, or we give them advice, we give them our opinion.
And nine times out of ten they do change. It's not a five-minute change, it takes a long time, but working with colos that understand, that are willing to change and are spending the money to get to, say, N+1 or N+2, more redundancy, or a better BMS system, a better portal for us,
is a great thing for us to do, and especially to keep working with those locations. Hmm.
Okay. Okay. Um, so we got 10 minutes left. We have a couple of questions that we can go back and answer, but I do want to make sure that we actually also go through our slides here.
So I'll switch it back to sharing the screen. And, you know, there's something that you can help me walk through here.
So this is a screenshot of our Life graph.
And What are we looking at here. This is one of our PDUs. This is a 50 amp PDU, two eight volt, three phase.
But it's actually restricted by the breaker.
So that becomes a 40 amp three-phase PDU. So each phase, or each line-to-line, has two breakers at 20 amps.
And we use around about, I think it's like 32 ports in this PDU.
I think it has 46 ports. And yeah, this is one of our highest-density racks in the US; we draw around about 14.3 kilowatts, and we can draw up to 14.4 with this PDU.
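Those numbers line up with the standard three-phase power formula. The 208 V and 40 A figures are from the discussion above (50 A nameplate derated to 40 A continuous by the breaker); the function name is ours:

```python
import math

V_LINE_TO_LINE = 208   # volts, US three-phase
BREAKER_AMPS = 40      # 50 A nameplate restricted by the breaker to 40 A continuous

def three_phase_kw(volts_ll: float, amps: float) -> float:
    """Total power of a balanced three-phase load: sqrt(3) * V_LL * I."""
    return math.sqrt(3) * volts_ll * amps / 1000

print(round(three_phase_kw(V_LINE_TO_LINE, BREAKER_AMPS), 1))  # ~14.4 kW
```

So the 14.4 kW ceiling quoted for this PDU is exactly sqrt(3) x 208 V x 40 A, which is why a 14.3 kW draw counts as one of the densest racks.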
Chris, question for you. Why is it important to monitor the CB, the circuit breaker, on each of these for three-phase line-to-line configurations?
What do you mean, the CB? The breakers. Yeah. So, again, back to the point where both PDUs are primary.
So in an event that we lose a PDU or the co-location has a power maintenance where we drop to one feed, we make sure it is balanced.
So we want to make sure that that breaker can handle the load once it fails over.
So again, we don't go more than 10 amps per breaker per branch on these PDUs.
So you may have phase one drawing, you know, five kilowatts and phase two drawing one kilowatt because it's not balanced, and you'll sometimes get billed on the highest phase.
Yeah, it's very efficient and good for us; it saves some money just to make sure that we balance the PDUs.
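A monitoring check for that balancing rule could look something like this sketch. The 10 A per-branch ceiling is the figure quoted above; the phase labels, readings, and tolerance are hypothetical, not Cloudflare's actual monitoring code:

```python
PER_BREAKER_LIMIT = 10.0   # self-imposed 10 A per branch, so a single feed can absorb failover

def check_balance(phase_amps: dict, tolerance: float = 2.0):
    """Flag branches over the limit, and report whether the phases are balanced."""
    over_limit = [p for p, a in phase_amps.items() if a > PER_BREAKER_LIMIT]
    spread = max(phase_amps.values()) - min(phase_amps.values())
    return over_limit, spread <= tolerance

# Hypothetical per-phase current readings pulled from a PDU's network interface.
readings = {"L1": 9.5, "L2": 8.8, "L3": 9.1}
print(check_balance(readings))  # ([], True): nothing over limit, phases balanced
```

Keeping each branch under the limit is what makes the dual-primary design work: when one PDU drops during a maintenance, the surviving breakers take the combined load without tripping.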
Yeah, it's kind of like having a bottleneck is what I see is that you have a whole system and then you're only as slow or as inefficient as the most inefficient part of your system.
And so you have three phases that you're handling.
And while they all could, you know, satisfy powering all the hardware, it's wasteful when one phase draws more power than it needs to.
That looks like a phase.
Yeah, that looks like a three phase sine wave. Is there much to talk about on this one?
No, it is what it is. It is what it is. We can't bend nature.
No. Yeah, I included this one from the Uptime Institute from their 2018 survey because it kind of, you know, when you look at kind of redundancy as a whole, you realize that power and network are the biggest proponents to downtime for the most part, right?
So 33% power related and 30% network failures, which can also be power or single-point-of-failure related.
And so I included that in there for reference to underscore the importance of the topic.
Yeah, yeah, for sure.
And when we talk about outages, at least for us in hardware, an outage is a power outage.
Yeah, there could be many kinds of outages, like the ones we had last week.
But for us, it's one of those things that's really important.
So like I mentioned at the very beginning, you know, there's no Cloudflare without hardware, and there is no Cloudflare without power.
It's, it's one of those things that all of us at Cloudflare and software engineers, anybody else, they have to trust that our machines are on.
Essentially, you know, it cannot be part of our root cause when we ask what the outage is here, because there are different layers where the outage could be.
And for us, you know, we're basically at a very, very low level, right? Not even just the physical level; those physical things just don't do anything without any kind of power.
So, like, we try to not be, you know, part of that cause. You know, I've had some war stories where I've accidentally shut off a whole rack before. And, you know, it's just not something that we want to make a thing of, really.
It's supposed to be an assumption that we have our power taken care of.
Our rack designs are, you know, done in such a way that we don't go overboard either.
Exactly. So yeah, we can almost wrap it up here.
So there's a couple of questions that we can go through here.
I think we've hit them up, but we can clarify a little bit more. So for any of you that would like to answer, you know, how deep are our racks?
I think I said 42 inches.
I think they range typically between 700 and some odd millimeters up to 1200 millimeters.
So when we looked at the previous generation systems, we used a lot of 1080 to 1200 millimeter racks.
And those are just standard racks the colos will provide us, right? We don't source racks.
Yeah, typically we don't source racks.
Typically we abide by what the colo has in place, just because they may have a cooling configuration or whatnot.
You know, it needs to be specific to their application.
And yeah, I see this as more of a standard size.
Is there a variety of them? When you look at a colo, maybe the colo is not able to provide you, you know, that dimension.
I can only assume if we had a Gen 9 that's really deep, we just couldn't rack and stack it at all.
We can't do that. It's mainly the width. Normally, the smallest one we have is like 600 millimeters wide, which is about 23 inches, which is not very wide, you know, for cabling and that, but it fits the server.
So it all varies on the location and the specifications, but there's many different sizes and specs of racks around the world.
There's not a universal standard, and that's because every company and every colo has its own specific needs.
Okay, and I think that goes for all the questions that I have.
So if there's anything.
Any other questions that you want to ask. Please go ahead. So for me, I'll start.
Is there anything that you kind of wish to see implemented in your projects at Cloudflare?
I'm thinking of something like an ATS, maybe. Is that a good idea?
Should we push to do more DC, or have a bigger mix of DC? What do you guys think? We're trying to leave ATSs alone because it's a single point of failure.
Every single hardware we buy, we try to buy dual fed. But ATSs are great until they fail.
They work fine; they switch over from A to B and they do what they need to, but as soon as that device has failed, you've lost both your A and B feeds.
Right, right.
It becomes a bigger problem than it solves; putting one in becomes more of a hindrance.
And you don't know when it's failed, right? You can do a lifecycle of X amount of years and replenish it when there's nothing wrong with it, just for warranty, and you can still get that one bad one out of the factory that doesn't last that long.
So to us, I don't think we would move forward with them.
Right. I would say at previous employers, you know, ATSs were kind of a big thing.
But when we look at kind of the Cloudflare architecture, it adds just a level of complexity that can be resolved with dual fed and then doing due diligence with your, your power grid reviews of single lines.
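The reliability argument against an ATS can be made concrete with a toy availability model. The failure probabilities below are made up for illustration, and the model assumes independent failures, which is a simplification; the point is only the structure of the comparison:

```python
def dual_fed_unavailability(p_feed: float) -> float:
    # Dual-fed equipment loses power only if both independent feeds fail at once.
    return p_feed ** 2

def ats_unavailability(p_feed: float, p_ats: float) -> float:
    # With an ATS in front, power is lost if both feeds fail OR the ATS itself
    # fails: the switch is a single point of failure for both A and B.
    both_feeds = p_feed ** 2
    return both_feeds + p_ats - both_feeds * p_ats

p_feed, p_ats = 0.01, 0.001  # hypothetical per-device failure probabilities
print(dual_fed_unavailability(p_feed) < ats_unavailability(p_feed, p_ats))  # True
```

Even a very reliable ATS dominates the math, because its failure probability adds directly instead of multiplying, which is the "single point of failure" argument in numeric form.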
Okay. All right. I think that's, that's all I have. If you guys have anything else.
We've got two minutes. Yeah, we got two minutes. I think we're, we're pretty much ready.
We're ready to close this off. So for everybody, the audience that are here.
Thanks for tuning in to Hardware at Cloudflare episode three. It's me, Rob Dinh, with Steve and Chris.
It was a pleasure. And maybe we can do this another time; we'd definitely love to tell you more about our homegrown DCIM tool.
All right. Mic off. Thank you.