Cloudflare TV

Hardware at Cloudflare (Ep3)

Presented by: Rob Dinh, Chris Palmer, Steven Pirnik
Originally aired on August 31, 2020 @ 6:00 AM - 7:00 AM EDT

Learn how Cloudflare powers hardware across 95+ countries.

Original Airdate: July 21, 2020

English
Hardware

Transcript (Beta)

Good morning everybody. This is Hardware at Cloudflare episode 3. This is your friendly host Rob Dinh, your Systems Reboot Engineer at Cloudflare.

I am joined by Steve Pirnik and Chris Palmer, from Hardware Sourcing and Infrastructure respectively.

And today we're going to be talking about power. How do we power Cloudflare?

So, you know, if there's no Cloudflare without hardware, there's also no Cloudflare without the power that powers that hardware.

And it's an area where I feel like we're in a pretty unique position in the industry.

You know, given that we have 200 POPs, or more than 200 POPs, in more than 95 different countries, there are all these standards of power that we have to respect.

Each colo has different power profiles. Each colo also has a different number of racks and a different number of servers.

And we've talked about in our previous episodes that we have different generations of servers that require different types of power.

So we're here with Chris and Steve to further explain a little bit more about how we do that and how we manage all of this.

Do you want to go ahead and share the slide deck, Rob?

Oh, yeah. Yeah, just turn it off.

Great. While we're doing that, I'll just give a brief intro of myself. My name is Steve Pirnik.

I work on the hardware sourcing team. I work very closely with SRE and infrastructure engineering, as well as hardware engineering.

So some of the topics. So if we go to the next slide, Rob. As Rob alluded to, we're in more than 200 cities today and the breadth of that is over 95 countries.

And you can see that a lot of different countries have different power profiles, different rack densities, different safety electrical requirements for each of those locations, which adds a very compounding complexity to kind of the environments that we deploy in today.

And if you can see, I pulled some data sets from the AFCOM as well as 451 Research.

You can kind of see, after polling a lot of data centers, that the rack density, which also equates to kind of the different profile of what kind of power delivery you can have, be that AC, DC, single phase, three phase, and some variants in between.

You can see that a lot of these co-locations kind of move up to the 4 to 14 kilowatt range and then taper off on the sides from there.

I colored the chart on the left side here a little bit.

So if you kind of take a look at the 4 to 6 kilowatts, I highlighted this one purple.

So I would label this one as kind of the Cloudflare traditional rack density.

So when you think of co-locations, you might think of, like, three, four, five, six servers in a rack drawing, you know, four or so kilowatts.

I would say that as the trend has grown to increase rack density with co -location providers due to evolving technologies and higher TDPs, Cloudflare has also as well started to shift its focus to the green section here, which is the 7 to 10 to 11 to 14 kilowatt.

And as that trend of rack density continues to go up, Cloudflare will also shift its focus to that and then make design decisions around that.

We could go to... Just wanted to stop, just to try to clarify. So that 33% between 4 to 6 kilowatts.

Sorry, Rob, I didn't hear that.

Typically, is that 33% of the colos in our fleet?

Oh no, apologies. This is, this is a survey from data centers.

So they were surveying what is the rack density by provider. So they, 33% of the feedback was that this is their primary rack density.

So it's just a comparison saying that this is where we traditionally focused.

And then the green is where the industry is starting to focus at as well as where Cloudflare is starting to focus at for rack density.

Right, right, right. And we're, so we're trending towards more powerful, more power hungry machines as an industry.

Exactly. If you take a look at like things like CPU, as well as accelerator TDP that continues to go up, which will equate to essentially higher, higher power per box.

Okay. Oh yeah.

So just before we continue on, if you have any comments or questions regarding anything that we see on the slides or whatever we say, feel free to email us at livestudio at Cloudflare.tv.

Those questions will be forwarded to us and we'll do our best to answer them for you.

All right, before we move on to the next slide.

So what do all these rack densities, connectors, certifications, and everything kind of equate into?

It equates into this next slide. So Robbie, if you switch to the next.

This essentially means we become hoarders of power connectors. So you can see that there's a multitude of different connectors, stemming from IEC standards to NEMA and UL standards in the US.

So there's just a whole host of kind of configurations and offerings that we have to choose from, as well as make design decisions around to support the kind of the global footprint that we have today.

If we go to the next slide. My first reaction is when I look at this, it looks like my drawer full of chargers.

Yeah, pretty, pretty much. Different adapters and everything.

We can sort of imagine this is how we have it in our data center.

And just something to organize, I guess. Right. Yes. And then if you go back to the last slide, actually.

If you think about it like this, for those familiar with the issue of SKU proliferation: as you increase optionality, you kind of increase the amount of SKUs that you have to house.

Cloudflare traditionally had a SKU proliferation problem with PDUs, to where we had up to 62 models of PDUs, I believe.

So each one had different feature sets, different connectors, different outlets, different power monitoring capabilities.

So each of those kind of just increases the complexity of, you know, what you can ship to where in a given environment.

Yeah, I can only imagine. It's actually really hard to manage all of this because it's not like I can just keep track of how many outlets are in every PDU at every colo or something.

Yeah, it becomes extremely difficult, but I think some of the great work that Chris has been doing, which he'll touch on later, has helped resolve some of this complexity as well.

So if we go to the next slide, you can kind of see how our power footprint globally is broken out.

So no percentages. Sorry, but we can just kind of make references off the size of the pie chart here.

The top right graph essentially shows the split.

When we look at our network. I want to say we're about 98% AC power.

So when we look at that split and we look at single phase versus three phase.

What is the split between those? And as we reference back to the first chart, you saw that, you know, Cloudflare traditionally was in that four to six kilowatt rack range, which is your single phase power profile. And as colo density has continued to increase, Cloudflare has shifted its focus into more dense environments with less footprint.

Which is kind of starting to increase our three phase power footprint.

And then if you take a look at the left side graph here, you can kind of see how that's broken out by connectors, and you can see that there are, I think, currently nine different connectors in our fleet today, which used to be a lot more, but we thankfully have been able to reduce that by putting standards in place. A lot of these different connectors come from global certification and safety requirements as well as different colo offerings.

So if we go to the next slide, you'll see why we have a lot of different connectors. If you take a look at the slide on the right, you can see that a lot of these colos have different voltage offerings.

So primarily in the US, you'll see things like 208 volts for your three phase power delivery, or 120 volts. And then when you look at the left side of the graph here, you'll see your 230/400 volts and your 230 volts, which is primarily for international, kind of EU locations, as well as the rest of the world, I should say, actually.
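To make those voltage figures concrete, here's a minimal sketch, not anything from the episode, of how rack capacity falls out of the feed type: single phase power is volts times amps, three phase line-to-line power picks up a factor of √3, and the breaker sizes and 80% continuous derating below are illustrative assumptions.

```python
import math

def rack_capacity_kw(volts: float, amps: float, three_phase: bool, derate: float = 0.8) -> float:
    """Power available from one feed, in kW, assuming a unity power factor."""
    p = volts * amps * (math.sqrt(3) if three_phase else 1.0)
    return p * derate / 1000.0

# Illustrative feeds in line with the voltages on the slide (breaker sizes are assumptions):
print(rack_capacity_kw(120, 30, three_phase=False))   # ~2.9 kW  US single phase
print(rack_capacity_kw(208, 30, three_phase=True))    # ~8.6 kW  US three phase
print(rack_capacity_kw(230, 32, three_phase=False))   # ~5.9 kW  EU single phase
print(rack_capacity_kw(400, 32, three_phase=True))    # ~17.7 kW EU three phase (230/400 V)
```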

And then when we look at the chart on the left, you can see that because Cloudflare operates in such a vast environment, we have a lot of different rack designs and types. Sometimes we have to adhere to the colocation's standards.

So a vertical PDU is not always something we can fit into a space.

So we have a segment of our network, which is actually dedicated to horizontal PDUs.

You can kind of see how the two are split out there. Yeah, I'm trying to think about it, because back in the day I had that decision too, and we talked about it, Steve, like a few years ago, about horizontal versus vertical. For some reason, I think if I remember correctly, it made more sense for us to go horizontal for the amount of servers that we have.

And also, you know, Just the way we were designing our racks and how we stacked our servers.

It looked like it made more sense to have horizontal.

So it looks like we still have majority vertical here. Yeah, and I think Chris will have some strong opinions on this, but to hark back to some of our older blogs we released, for those familiar with our Gen 9/9.5 systems.

We used a box, which was very, very large and heavy and it had a long depth of I think about 38 to 40 inches.

So having vertical PDUs at the back of a rack when you're trying to plug in, you know, power cable into it, you know, when you try to service the box from the rear, you now have a bunch of power cables in the back of it so you cannot, you know, fiddle with it so Oh, do you have a photo of it?

No. Oh, I see.

So what happened is we ran into this problem. So we switched to horizontal PDUs to alleviate kind of the connector problem.

So you could easily work on, you know, a blade chassis from the rear without bumping into power cables, but that brought forth new issues, which I'll let Chris touch on. Yeah, we now deploy both, and in many co-locations we do vertical and horizontal on the same rack to make the rack more dense.

And I guess as a four way resilience, then we share the servers and network gear across four PDUs, like two A's and two B's.

Just so we can go more denser and save some money on rack space as well.

Some of the co-locations don't like that because it interferes with their cooling.

See, we have a limit of cooling as well, not just power, in the data centers.

The majority will go vertical and then we'll add a horizontal as a second primary PDU to make us denser.

But the problem with the horizontal PDUs is the size of them.

You know, we try to stick with a 2U model.

So when we deploy two, it's 4U, and it's taking up rack space. The problem is that it restricts us by the amount of ports on that PDU.

I believe it's 24 ports on the current standard ones we deploy, and the vertical ones have 30 as a minimum.

So you can see that's six devices that we couldn't power off the horizontal.

We use that as a standard for the vertical. So hence why we use a vertical now, so we can go denser in co-locations.

How about, when you're looking at horizontal, Chris, and you're looking at a three phase horizontal?

What are some of the limitations that we've encountered over the last year, you would say.

The breaker is the biggest one.

They're still 20 amp breakers. So if you've got a 32 amp three phase PDU, that's 12 amps per phase that you can't use.

Just because of the model, how they do it and the engineering behind that PDU.

So on certain PDUs, the plug is rated to 32 amps three phase.

So it's 96 amps. But we can't actually use that. So we have to be very careful when we pick PDUs, and the models and the specification and where we're sending it and the amount of load that we're going to put on it, to make sure that we don't trip a breaker.

And that's the last thing we want to do is have an outage because of a simple item on a PDU.
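As a rough sketch of what Chris is describing, with the 32 amp plug and 20 amp breaker figures from above (how a given PDU actually distributes its branches will vary), the usable capacity ends up set by the branch breakers rather than the input plug:

```python
# A 32 A three-phase input plug can in principle carry 32 A on each of the
# three phases (3 x 32 = 96 A of line current in total), but if every outlet
# branch sits behind a 20 A breaker, each phase is effectively capped at 20 A.
plug_rating_per_phase = 32   # amps
branch_breaker = 20          # amps
phases = 3

usable_per_phase = min(plug_rating_per_phase, branch_breaker)
stranded_per_phase = plug_rating_per_phase - usable_per_phase

print(f"usable:   {usable_per_phase} A per phase ({phases * usable_per_phase} A total)")
print(f"stranded: {stranded_per_phase} A per phase that the plug could carry but the breakers won't")
```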

Yeah, I was always wondering about it, because to me, it made more sense like how Steve was talking about.

There was a lot of physical obstruction with using vertical PDUs, just because of how deep our 2U4Ns were for Gen 9, and you can talk about from, like, Gen 6 all the way to Gen 9, which I think is still a big majority of our fleet.

And when we try to shove these servers into colos, so the colos that we have, they're mixed from having our own cages, depending on the load.

And there are going to be some colos that have lighter loads that only require just one rack, and that one rack can actually be located in one of those, you know, shared space locations, or you can have those in rows. And your typical rack is what, 42 inches deep? I don't know how to do the math to make this into the correct unit... So that's 1200 mil by 800 mil.

Yeah, yeah. I mean, essentially, our chassis was almost as deep as the racks themselves.

Yeah. And for us to try to shove more in the back with, you know, vertical PDUs that would occupy the whole entire elevation, or the whole height of our rack, just didn't seem to make any sense.

And when we tried to rack and stack, there were a couple of times where we had to actually instruct the technicians to actually bend a few things.

You know, some of them, I told them, just the durability.

Yeah, you can just bend those rails a little bit, just trying to get fit or whatever.

Yeah, I just want to try to do away but I guess it doesn't make sense.

If anything, it does surprise me that we needed that many outlets.

That, you know, it still requires vertical, though. Yeah, I would say some of the outlet requirements

kind of stem from the different generations and types of hardware we deployed, because if you look at the outlet profiles, you have your C13/C14, your C19/C20, and then sometimes you have your L5-15 connectors for your general-purpose ones.

So that was a limiting factor because, as you know, when we moved to Gen 9/9.5 we changed from the C13/C14 to the C19/C20 configuration, which means we had a large fleet of PDUs which focused on the C13/C14 with, you know, only about six of the C19/C20 outlets.

So that was also a limiting factor. Yeah. And just to try and clarify for the audience.

C13 is sort of your more standard or more typical outlet that we use in hardware.

I don't think it's just Cloudflare, either.

I think many companies use C13/C14, and that looks like your rice cooker power cable, is how I think about it, or your PC.

Yeah, your PC has a C13 power supply.

And our servers used to typically take, if my memory serves right, 1200 watts across four nodes, and having redundant C13 power supplies was enough.

And when we came out with Gen nine, the jump from Gen eight to Gen nine was to double the cores in our CPU, and that multiplied our CPU power range, but it wasn't double.

It was about a 1.5x multiplier for the CPU. And if you were to take it up to the whole system level...

That 1200-watt 2U4N in Gen eight kind of jumped up to almost 1600 or almost 1800 watts. We wanted to do some power stress testing, which I have a bit of a story about, but that required us to upgrade our PSU from something that was rated at 1600 watts, and, you know, we were not going to design a server that had to constantly watch its power draw just to let ourselves stay within a 1600-watt PSU.

So the upgrade was that we wanted to go to power supplies at 2200 watts, and that required going from C13 to C19, which is much less usual and more expensive, possibly.

I think, right. It was harder to source.

If you can correct me on this one, even the cables themselves were actually harder to source, and the PDUs also didn't have as many of those outlets, or at least there was a smaller market availability of those.

And it was a limit on the PDU. So it meant we'd send six servers to most locations until we had to add extra whips to get extra PDUs in there, or, you know, sometimes you just have to get another rack to send more hardware there to give us more capacity.
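For a rough sense of why the PSU wattage forces the connector change (a sketch using nominal IEC 60320 coupler ratings, which are general figures rather than anything quoted in the episode): a C13/C14 coupler is nominally rated for 10 amps and a C19/C20 for 16 amps, so at a 208 to 230 volt feed the cord itself becomes the limit for a 2200 watt supply.

```python
# Nominal IEC 60320 coupler ratings (North American UL ratings run higher);
# these are general figures, not taken from the episode.
COUPLER_MAX_AMPS = {"C13/C14": 10, "C19/C20": 16}

def max_watts(coupler: str, volts: float) -> float:
    """Maximum continuous power a single cord of this coupler type can carry."""
    return COUPLER_MAX_AMPS[coupler] * volts

for coupler in COUPLER_MAX_AMPS:
    for volts in (208, 230):
        print(f"{coupler} at {volts} V: up to ~{max_watts(coupler, volts):.0f} W")

# A ~2200 W power supply sits above what a C13 cord comfortably carries at
# these voltages, which is why the Gen 9/9.5 chassis moved to C19 inlets.
```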

Were there three more slides here?

From my end, I think, you know, kudos also to the industry for realizing that there are these kinds of issues for, you know, cloud providers like ourselves.

You know, kudos to Server Tech and companies like Vertiv slash Geist, you know, for doing the combined connectors where you can put, you know, your C13 or C19 connector into the same slot.

It really is a big game changer and kind of saves a lot of headaches for us, because we got into these issues right where we have generational iterations between power profiles.

Wait, are you saying we have cables that have one end as C13 and the other as, like, C19? No.

So, I mean, unless you pull it up. So PDU manufacturers like Server Tech and Geist, they went from having just the traditional, like, C13 connector to essentially allowing you to plug either the C13 or C19 cable into the same outlet.

So you never have to worry about, do I need to retrofit a new PDU, because all you would have to do is just remove an adapter piece and plug in the new cable.

It equates to fewer outlets overall, but, you know, given kind of the power footprints in the data center.

It's your, your plug density is rarely an issue.

It's more along the lines of what do you have the exact outlet that you need.

Right, right. I guess I can only imagine with the trend that you mentioned earlier.

That PDUs moving forward are going to have, like, more C19 outlets then. Yeah, and I think a lot of, you know, Server Tech and Geist realized that trend and they moved to this kind of universal connector for that purpose.

So you can connect higher amperage devices and still, you know, operate safely.

Oh, okay. I'm just trying to move on to the slides that we have here.

Let's go ahead and get back to sharing those things here.

I think the next one is where we hand off to Chris and I stopped talking now.

Yeah. Yeah.

Hello. So we made our own dashboard. I know a lot of companies make a power dashboard, and companies will bring in a third party, but we made our own to monitor our own power at every single PDU.

As you can see, this is a rack. This rack has four PDUs.

You can see the phases of each load. You can see down the bottom right hand side, the amount of power pulling out of each breaker, each branch.

The average, the max and the current levels and then at the top, you can see the peaks in amps, kilowatt hours.

And the live usage in kilowatts and the peak kilowatts, and that helps us overall plan for capacity and allows us to see usage when, you know, co-locations have power maintenances and they're going to lose the primary feed or redundant feed, and we can see how the other PDU will pick up the load and make sure it's all balanced.

As you can see, we spend a lot of time on power and trying to get things balanced pretty well.

You know, 0.5 amps out at most. I think this whole rack is very well balanced.

So it takes a lot of time and a lot of research, over years of learning the power draw of all the devices and servers that we deploy in these locations.

This dashboard is still in its working stages.

I've created it. It will be handed off to SRE to then fix all the bugs and get it working 110%.

This is, this is pretty good in helping us learn and develop across the globe that we're in.

So, yeah.
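As a sketch of the kind of check a dashboard panel like that supports (the per-phase readings below are made-up numbers, not taken from the screenshot), phase balance is just the spread between the most and least loaded phase on a PDU:

```python
def phase_imbalance_amps(phase_currents: dict[str, float]) -> float:
    """Spread between the most and least loaded phase, in amps."""
    return max(phase_currents.values()) - min(phase_currents.values())

# Hypothetical readings for one PDU, amps per phase:
pdu_a = {"L1": 7.9, "L2": 8.1, "L3": 7.6}
print(f"imbalance: {phase_imbalance_amps(pdu_a):.1f} A")  # 0.5 A, i.e. well balanced
```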

Chris. So, you know, my understanding is a lot of companies actually rely on things like DCIM, which kind of limits, you know, the data extraction as well as the data visualization you can do.

So I think, you know, if anyone watching is actually interested in kind of deep diving on how we collect these metrics and what our data collection methods look like, you know, please comment and let us know.

And then we can do a follow up, because I think this is extremely important to a lot of, maybe, co-location providers and small shops, you know; they don't want to spend a bunch of money on some kind of DCIM solution, but they want to, you know, get actual quality data so they can make growth plans themselves.

And it's also good to understand, when you're paying for smart hands to remotely install your equipment, that they're not just overloading one phase or just one breaker or two breakers.

We can actually see the branches here. As you've heard, you can have many different branches, and each branch is generally on a 20 amp breaker.

So you get to see the load.

So it all depends how your power deployment is. So you can have everything fed off your primary feed.

So when primary feed fails, the redundant feed will pick up the whole load.

We tend to split it 50/50 over primary and redundant.

And so we do 50%, so we run our breakers at no more than 10 amps; otherwise, when we fail over onto the other PDU, we'd trip the breaker and lose that many devices on each branch.

And that's normally around about six devices that you could lose, sometimes 10.

So that's a lot of capacity and a lot of devices to lose just because we didn't power monitor and understand the draw on that PDU.

So we always double check every time we do a deployment remotely.

It's installed in the correct phase and the correct socket. So it matches our database.
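Here's a rough sketch of the failover rule Chris describes, using the 20 amp breaker figure from above; the per-branch loads are assumed values, and the point is simply that the A-side and B-side loads on a branch have to fit on one breaker once you fail over:

```python
def failover_safe(load_a_amps: float, load_b_amps: float,
                  breaker_amps: float = 20.0) -> bool:
    """True if one PDU can absorb the other's load on this branch
    without exceeding the breaker rating."""
    return (load_a_amps + load_b_amps) <= breaker_amps

# Keeping each side at or below 10 A (the rule mentioned above) guarantees this:
print(failover_safe(9.5, 9.8))   # True: combined 19.3 A still fits on a 20 A breaker
print(failover_safe(14.0, 9.0))  # False: a failover would put 23 A on one breaker
```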

Yeah, I think that's actually really cool. When I think of something like this...

Yeah, I do think of DCIM solutions, you know, things that we could look at outside and just have implemented into the whole entire fleet.

And, you know, my first thought would be, well, how could this work with, you know, the 200 plus colos that we have, right?

Is this something scalable? So what we're looking at here is one rack.

Is that correct, like this one rack. This is just one rack.

Yeah. Okay. And yeah, I think it does deserve maybe another session to talk more and deep dive into how we built this tool, but could you give us a little bit of a summary of how we built this thing?

Like, how do we make this possible that, you know, it's something that's homegrown to us.

So, we use... there's a guy in SRE called Dan. I've forgotten his last name. He's based in Singapore.

So he pulled out the metrics and really went into what the Server Tech PDUs can do, and we managed to find out that we can pull out all this information from the breakers.

So we use Prometheus to put the data in, and every PDU has an IPv6 address.

And then it comes into this dashboard. And then we do the maths behind the dashboard to sometimes, you know, convert it because obviously 208 volt and 230 volt are different in maths because of different power topology.

And that's how we get the result. We can dive much, much deeper than that.

But I'd probably bore a lot of people going into all the depth of how that's done and the metrics behind it and the coding and everything we've done to make this work.
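For anyone curious what that pipeline can look like end to end, here's a minimal sketch of reading per-branch current back out of Prometheus over its HTTP API; the metric and label names are hypothetical, not the schema Dan actually built. The dashboard math Chris mentions, converting amps to kilowatts differently for 208 volt and 230 volt feeds, then happens on top of readings like these.

```python
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"  # hypothetical address

def branch_current_amps(colo: str, rack: str, pdu: str) -> dict[str, float]:
    """Return the latest current reading per branch for one PDU."""
    # Hypothetical metric exported by a PDU collector; real names will differ.
    query = f'pdu_branch_current_amps{{colo="{colo}",rack="{rack}",pdu="{pdu}"}}'
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query}, timeout=5)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return {sample["metric"]["branch"]: float(sample["value"][1]) for sample in result}

# Example call: branch_current_amps("sin01", "rack7", "pdu-a")
```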

Yeah, I mean, I see this more as monitoring.

So as in, well, everything has to look the way we expect it to.

And if it's not, then at least we have this, you know, this tool.

Maybe even the luxury that we can look at, you know, maybe a breaker went down, or maybe a power supply went off for one of our servers, and we can see this happening at a certain time. And, you know, how else did you use this?

So when we try to look at, like, a rack design.

Right. So how can we figure out, hey, we have too many servers in that rack.

Maybe we should provision another rack or... Yeah, so we basically work it out by the whip, and then we have a design that we work off depending on the power and the delivery from the colo.

So if it's three phase 32 amps, then, you know, we'll spread that load over the phases.

If it's single phase, it's very easy to overload a rack, because it has more ports; it still has 30 ports.

Okay, the three phase PDU has the same, so people can then mistake it, you know, the PDUs in the racks, and accidentally rack it in the wrong rack and plug it in, because it's still going to get power, because it's at 50%, right; we don't operate 100% of each PDU.

And then when we go to deploy some more servers, we always check what's going on in that rack, go on the dashboard and look at the power draw, and then figure out if there is a problem or there's not a problem.

So this will tell us... you know, we know the model of the rack from the database.

And so we already know the limit. So this one that you're looking at is a 16 amp three phase PDU, so we know we've got plenty more power there.

And then we'll go and work out the maths, see how much power, you know, what devices and what generation we're going to send there, and go from there.
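A sketch of that planning step, using the 16 amp three phase figure mentioned above and assumed values for everything else (the derating, the current per-phase draw, the amps per server); it just asks how many more servers fit in the remaining headroom on the most loaded phase:

```python
import math

def additional_servers(per_phase_limit_a: float, per_phase_load_a: list[float],
                       amps_per_server: float, derate: float = 0.8) -> int:
    """How many more servers fit, assuming the worst case where every new
    server lands on the most loaded phase."""
    usable = per_phase_limit_a * derate
    headroom = min(usable - load for load in per_phase_load_a)
    return max(0, math.floor(headroom / amps_per_server))

# Hypothetical: 16 A per phase available, current per-phase draw, ~1.5 A per server.
print(additional_servers(16.0, [7.9, 8.1, 7.6], amps_per_server=1.5))  # 3
```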

Oh, and I would also say that, you know, from kind of an engineering perspective, right, this is an iterative improvement, in the sense that you need, you know, a solid understanding of your rack designs and how they change from location to location, because traffic profiles are different.

This tool really feeds into, like, a constant improvement or kind of Kaizen-esque mentality of, what is the expected power draw and how do we optimize kind of efficiencies between branches and phases and things like that.

And as well, you know, tools like this are extremely important to, like, our operations teams.

When they look at, like, colo expansion planning, as well as even just auditing colocations.

And I think a lot of cloud companies might take at face value the report that they get from a colo provider and say this is correct.

Just make the payment, but, you know, not have the ability to monitor or audit themselves.

And, you know, through having tools like this.

We've actually been able to audit and say like, you know, actually, this was not correct.

Why is this bill like this? So, you know, engineering is very important, but also the operational side, and how you manage the growth of your company, is extremely important as well.

Yeah, I could see that there's going to be some sort of decision that we have to make.

Let's say we're going to expand our colo; the demand in that colo is going to be much higher and we need to fulfill that demand.

So looking back at our example then, we can look at, you know, how much overhead is left, right?

We can decide if we need to have more racks or maybe upgrade the PDU.

Is this a thing that we can do? Yeah, so that also depends on the cooling in the data center.

So you can power anything, right, but you need to make sure you can cool it as well.

So this is where we come into talks with the colo. Normally, before we sign a contract, we already know the amount per rack we can have in cooling.

So in some cases it's, you know, seven kilowatts a rack, and that's what they can cool because they've got to think about the other customers.

Sometimes it's 18 kilowatts and sometimes it's 25.

So it all depends. And then we have all different cooling.

So we have cold aisle containment that we deploy in 90% of the colos that we're in.

You have a full containment or you have a bath. Now, a bath is exactly the same as a full housing or full cold aisle containment, but it has no ceiling or roof.

It just acts like a bath in your home where it keeps the cool air in all around the server and lets the hot air just drift up to the top and go out.

And that's all down to the colo design, the cooling topology, and how that works within that location.

But sometimes we hit some circumstances where the data center cannot upgrade.

So, and they can't give us three phase for the next rack. It has to be single phase.

So sometimes we just have to work with them. And sometimes again, or we have to put four PDUs in one rack to make it more dense and to get where we need to be.

Right. So it's sort of your preconditions that you can't ask the colo to do.

We can't just say, can you just make it cooler or something. Yeah. So we obviously have limits and they have a limit to a room.

There's one location, you know, where their cooling is 120 kilowatts of capacity per CRAC unit, which is 34 RTs, refrigeration tons.

So we know our cooling limits and we can get three of them in a hall.

So that's 360 kilowatts. And, you know, 102 refrigeration tons of cooling that we can have.
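For reference, and this is the standard conversion rather than anything stated on the slides: one refrigeration ton is roughly 3.517 kilowatts of heat removal, so a 120 kilowatt CRAC works out to about 120 / 3.517 ≈ 34 tons, and three of them give 360 kilowatts, or roughly 102 tons, which lines up with the figures Chris quotes.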

So it all depends on us, and how we want that power delivered, and the redundancy, and how we see our future growth.

And that's how we make our decisions.

We try to work with the colo to understand our future goal, as well as their limitations.

So we don't block ourselves in the future. I have a question for you, Chris.

Yeah. How does Cloudflare handle DC power? Because I know that exists out there.

Yeah. So we have a few sites of DC power now. I think maybe 15, correct me if I'm wrong, maybe 15 sites.

So we use a DC to AC inverter. And the one we currently use has three supplies, three 63 amp supplies, to it.

It gives us a 4.5 kilowatt or roughly a 20 amp limit that we we can use.

That also has a smart function where we can add that onto the dashboard and see a live inverter and see the power of it.

So we're still learning with DC as a company, and learning to work with the co-locations, because it's not the norm.

It takes a lot of effort. Some locations will request us to work out the length of the cable and the size of the cable for them.

So we get the actual draw. With DC, the longer the length, the more you lose in the cable and the bigger the cable you need to get the actual power delivery at the cage.

If it's a foot away, you need a smaller cable. If it's 20 foot away, you'll need a bigger cable.

So we work closely with them to understand the limitation.
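Here's a minimal sketch of the physics behind that cable sizing, with assumed values for the copper resistivity and an assumed 3% allowable voltage drop; it isn't Cloudflare's actual sizing rule, but it shows why a longer 48 volt run needs a much heavier conductor:

```python
COPPER_RESISTIVITY = 0.0172  # ohm * mm^2 / m, approximate for copper at ~20 C

def min_cable_area_mm2(length_m: float, current_a: float,
                       supply_v: float = 48.0, max_drop_pct: float = 3.0) -> float:
    """Smallest conductor cross-section that keeps the round-trip voltage
    drop under max_drop_pct of the supply voltage."""
    allowed_drop_v = supply_v * max_drop_pct / 100.0
    # Round trip: current flows out on one conductor and back on the other.
    return 2 * length_m * current_a * COPPER_RESISTIVITY / allowed_drop_v

# Same 63 A feed, two different run lengths (roughly 1 ft vs 20 ft):
print(f"{min_cable_area_mm2(0.3, 63):.1f} mm^2")   # ~0.5 mm^2
print(f"{min_cable_area_mm2(6.0, 63):.1f} mm^2")   # ~9.0 mm^2, i.e. a much heavier cable
```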

Some of our network gear has DC power supplies, so we can hardwire straight into the breakers of the co-location PDU.

But yeah, I'm interested to talk to companies if they have other inverters out there and technologies they want to share with us that are rack mounted.

What region do you primarily see DC power in when you're doing site planning?

Brazil is the biggest one at the moment.

Brazil is quite a difficult one. It all depends on the company as well and whereabouts and what we're doing with that data center or that co-location depending on the power.

I'm trying to understand a little bit.

Does it look like we're going to run into a situation where we have a mixture in a rack of DC and AC then?

Yeah. And we do have that sometimes because we could be the last customer in that co-location.

So we have to take what the co-location can give us.

We're there for different reasons for connectivity, trying to get closer to our customers.

Obviously price is always another one.

So for those reasons, that's how we rate them. So sometimes we have a mixture. We have sometimes a rack of DC power, and next to it we have a rack of AC power.

We try not to mix the two powers in the same rack, to manage what server is plugged into what and make sure that everything is done the way we requested it to be done.

As many companies out there, you can request something and sometimes it doesn't get delivered the same way.

But again, we're still learning. We're still developing.

I would say from a sourcing perspective and then working with the hardware engineering, network engineering groups, we've made some strategic selections around having equipment that supports DC power.

So like having DC capable power supplies.

So if you think of our current generation systems, the Gen X, it's a smaller footprint box that consumes less power overall, which allows us to actually swap between AC and DC power supplies.

Whereas if you look at the previous Gen, Gen 9, the power consumption was too high for the current market availability of a DC capable power supply.

So we've made some decisions to, you know, kind of design out the need of inverters, but we still have a lot of this kind of legacy equipment that's out there that still requires them.

So older legacy fixed placement networking equipment, which doesn't have removable power supply.

Kind of, you know, very similar such devices out there that we need to, you know, slowly go through and pull out and retrofit, so we can get rid of things like inverters hopefully one day.

Oh dang. It's actually new to me. I had no idea that we actually use DC.

I always thought we were just all AC and to think that we were probably mixing it in the same rack, I don't know.

You guys already explained this one, but I guess it is interesting if we have one rack that's AC in the same colo.

So I assume that sitting right next to the AC rack, we have a DC rack.

Does the colo provide the inverter, or is it on us to source the inverter?

We have to source and buy the inverter, and we've had cases where it gets shipped damaged just because of how heavy it is and the size of it.

And they can be quite hard to resource again.

And there's a lot of lead times, correct me if I'm wrong, Stephen, to get these.

And they cost a lot of money. So we don't want to keep massive amount in stock because of how much they cost.

But we now have a new system and a new way we make sure that we have enough and we can then grow.

The model we use, you can add a module and basically each module is around about one kilowatt and it has four modules.

So we can send it out there with two modules or one module to get us going.

And then as we deploy and grow, we can then send more service and send another module to keep us going.

So then we don't have the problem and we don't have to send everything at once.

We can then just grow with the same inverter.

And I guess, how do we get ourselves into that kind of situation, right?

I mean, it would be hard for me to just remember that this rack is a DC rack, or, like, why did it have to be DC if the other racks are AC in the same colo?

Is this like a flexibility thing, or is this something that the colo just said, no, we're just going to have to go for DC?

Sometimes it's just what they got left. Yeah.

So we can be the last customer in that colo, right? You know, two, three racks left.

They've got enough of one, one, you know, AC rack and the rest has to be DC.

And we don't want to have to move because sometimes it's really important for us to be close to our customers, given the privacy and the connectivity that we strive for as a company.

So, you know, we've learned to develop and to work with the co-locations or to make that happen for us.

And then for our customers, we don't want to not be somewhere just because we can't figure out DC power.

I would, I would say if you, when you look at Cloudflare, like we're one of the most interconnected Internet exchange populated companies on the planet right now.

So, you know, putting equipment that's, you know, 60, 80, a hundred miles away is not, you know, an adequate service for our customers.

So whatever, whatever kind of accommodations we can make to the colo to further expand for whatever is available at the time is essentially the mentality that we take, right.

Because we want to make sure that the customers are getting a constant kind of like a SLA driven kind of performance out of what they expect when they sign up for Cloudflare.

Yeah. Yeah. I mean, that goes well in line with our philosophy with hardware, right.

We're not tied to a certain technology.

So it does make me smile that we can actually do that.

That's, that's super cool. And I assume that that also extends to, to network gear too, if anything, maybe more, especially network gear that has, that has DC.

So is there something like in, in your world, Steve, that you have to ensure that the SKUs are going to be correct or, you know, that you have the correct cables and like, is, is DC or AC when you do rack design any different?

It's going to look different in regards to what your power delivery unit looks like.

It's not a traditional PDU in the sense because your DC is technically, or typically, I should say, you know, two connectors, right?

You're positive and negative, and it's going into a shelf or a box.

And then that box is essentially, you know, the power supply.

So it's going to look different. It just looks like a really large horizontal PDU.

And then you put your connectors into it. So it's not the traditional, traditional kind of, you know, plug a cable into something, because that's just not how the terminations work on DC power.

So physically, it looks different.

From a monitoring perspective, you know, as Chris mentioned, they're, they're network connected.

We can monitor them on the network, just like any other PDU.

And then we can make modifications as needed. I would say, though, that the baseline requirements for working with AC versus DC power are significantly different, in the sense that you run a higher risk of, you know, fires and accidents, as well as, you know, possibly leading up to death if you improperly handle DC power, whereas with AC power, you plug in a cable and just leave it for the most part.

That's something that Cloudflare's had to do a lot of planning around to ensure that when we're deploying this stuff out in the wild that whatever system we're using, it's completely safe and, you know, only trained personnel are kind of installing it at the site.

So we take a little extra time to say, does your site understand how this works?

And do you have the proper equipment and tools in place to, you know, install this equipment for us?

Yeah, I'm still trying to wrap my head around it, as I'm just learning this stuff.

Because real voltage is going to be different, right?

Yeah, so it's normally 48 volt. And that's where you have the big amperage.

So it's 60 amps, 63 amps. That's why you have three supplies, to convert the power back into AC, which will be converted to 230 volts.

So it's a, it's a big difference.

It's a big difference. And I've seen some of the connectors; they're hardwired into the inverter.

And the inverter is a 2U inverter.

So we always make sure there's a U gap on top of it, so the cables have enough bend radius and a server doesn't get involved, or a technician doesn't scrape his arm on it if he's got to, you know, power cycle a server. We try to think of all these things when we're deploying and doing a rack elevation.

For a DC rack system, for safety and for the network side, we don't want to see an outage because someone leans in and, maybe because for some reason the inverter's power cables are loose, it brings down the whole rack.

We don't want that. So we take precautions and leave extra gaps for this.

Okay, I mean, I guess it could be a nice little segue to talk about redundancy.

So I can't assume that every colo sort of abides by a certain standard of power.

When you have 200 colos, some of these colos are way out in the corners of the world, where the standard is probably not going to be like some of the bigger ones that we have, you know, in Europe, or developed countries somewhere in Asia, or America. So do we have policies when we select colos, or, you know, for how we're going to power them?

So let's say this colo just doesn't work for us. Like, well, what are those checkmarks?

So yeah, that's our TLDR on this one. Always audit your colocation provider's single line diagram; never trust what they tell you at face value.

Too many times will they tell you something's redundant, but it has a single point of failure in the chain.

We've spotted a few. I do that as part of my job. So when we're looking at a new location, or a contract is up for renewal, we ask for single lines of the power topology, the cooling topology, and we'll go through the maintenance logs.

And then, you know, some data centers will do N+1 to the main switch room, and then from the main switch room to the PDUs and to the rack and cage and hall levels it will be a single point.

But, you know, you'll have two feeds off the same UPS, or the same board they're feeding from, to your rack.

So your rack technically has two feeds, but if their back end UPS goes down, you then lose both your power supplies.

Yeah. You know, one of my jobs is to do the power topology and read and look into all the data centers' topology before we sign a contract, and make sure that we're happy with how it's rated.

And it also depends on the certification and what tier they are.

That's a huge thing. If they've spent the money and got a tier rating, it normally then assures that what they're giving us is right, but we like to double and triple check before we sign a contract that that is the case, that we're paying for what we're getting, and that we're not going to have a power outage and lose all the power feeds to our cage.

Sorry. Yeah, I don't think it's something that we can just simulate, right? We can't just have this on a graph that says, that is going to fail.

I mean, do we have the power to, like, push back on a colo, like, hey, we can't set up shop here?

Unless you do these things? Or is it just sort of legacy sites, and it depends on what we've got in a contract?

Sometimes we work with them, and colo owners are very interested.

They want to, you know, change and develop and understand what was wrong, and sometimes they didn't have the right staff at the time that understood the power topology, because sometimes other companies will buy a legacy data center.

So, you know, they're left with what they got. So then we help them, or we give them advice, we give them our opinion.

And nine times out of 10, they do change. It's not a five minute change; it takes a long time.

But working with them, and understanding that they are willing to change and are spending the money to make it, say, N+1 or N+2, more redundancy, or have a better BMS system, have a better portal for us, is a great thing for us to do, especially working with co-locations.

Okay, okay. So we got 10 minutes left. We have a couple of questions that we can go back and answer, but I do want to make sure that we actually also go through our slides here.

So I'll switch it back to sharing the screen.

And, you know, there's something that you can help me walk through here.

So this is a screenshot of our graph. And what are we looking at here?

This is one of our PDUs. This is a 50 amp PDU, 208 volt, three phase.

But it's actually restricted by the breaker so that becomes a 40 amp three phase PDU.

So each phase or each line to line has two breakers at 20 amps.

And we use around about, I think it's like 32 ports in this PDU, I think it has 46 ports.

And yeah, this is one of our highest densities in the US; we draw around about 14.3 kilowatts, and we can draw 14.4 with this PDU.
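As a quick sanity check on those figures, assuming a unity power factor: three phase power is √3 × line-to-line voltage × current, so √3 × 208 V × 40 A ≈ 14.4 kW, which is the ceiling Chris mentions, with the rack's 14.3 kW draw sitting just under it.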

Chris question for you.

Why is it important to monitor the CB on each of these for three phase line to line configurations.

What do you mean, the CB? The breakers. Yeah. So, again, back to the point where we actually run both PDUs as primary.

So in the event that we lose a PDU, or the co-location has a power maintenance where we drop to one feed, we make sure it is balanced.

So we want to make sure that that breaker can handle the load once it fails over.

So again, we don't go more than 10 amps per breaker per branch on these PDUs.

So you may have phase one drawing, you know, five kilowatts and phase two drawing one kilowatt, because it's not balanced, but you'll sometimes get billed on the highest phase.

Yeah, it's very efficient and good for us to save some money just to make sure that we balance PDUs.

Yeah, it's, it's kind of like having a bottleneck is what I see is that you have a whole system and then you're only as slow or as inefficient as the most inefficient part of your system.

And so you have three phases that you're handling.

And while they all could, you know, power all the hardware, it sucks when you can't use as much power as you need to.

That looks like a phase.

Yeah, that looks like a three phase sine wave. Is there much to talk about on this one?

No, it is what it is. It is what it is. We can't bend nature.

Nah. Yeah, I included this one from the Uptime Institute, from their 2018 survey, because, you know, when you look at redundancy as a whole, you realize that power...

Power and network are, you know, the biggest contributors to downtime for the most part, right.

So 33%, you know, power related, and 30% network failures, which also can be power or single point of failure related.

And so I included that in there for reference, to stress the importance of the topic.

Yeah, for sure. And when we talk about outages, at least for us in hardware, an outage is a power outage.

Yeah, there could be many kinds of outages, like the ones we had last week.

But for us, it's, it's one of those things where for us, it's really important.

So, like I mentioned, like, in the very beginning.

You know, there's no Cloudflare without hardware and there is no Cloudflare without power.

It's one of those things where all of us at Cloudflare, software engineers, anybody else,

they have to trust that our machines are on.

And for us, you know, we're basically like a very, very low level, right, not just doing the physical level, but those physical things just don't even do anything about any kind of power so Like we try to not be, you know, part of that cause, you know, I've had some war stories where I've accidentally shut off a whole rack before And, you know, it's just not something that we want to make a thing really it's supposed to be an assumption that we have a power taken care of the Iraq design are, you know, in such a way that we don't go overboard either.

Exactly.

So yeah, we, we can almost wrap it up here. So there's a couple of questions that we can go through here.

I think we've hit the ones that we can clarify a little bit more.

So for that earlier question: you know, how deep are racks?

I think I said 42 inches. I think they range typically between 700 some odd millimeters up to 1200 millimeters.

So when we look at the previous generation systems, we used a lot of 1080 to 1200 millimeter racks.

And those are just standard racks that the colos will provide us, right? We don't source racks.

Yeah, typically we don't source racks.

Typically, we abide by what the colo has in place, just because they may have a cooling configuration or whatnot.

You know, it needs to be specific to their specification.

And I see this as more of a standard size. Is there a variety of them? When you look at a colo, maybe the colo is not able to provide you, you know, that dimension.

I can only assume. Yeah, we have a gen nine.

That's really, really deep, but we just can't even rack and stack them at all.

We can't do that. It's normally the width; normally the smallest one we have is, like, 600 wide, which is about 23 inches.

Which is not very wide, you know, for cabling and that, but it fits the server.

So it all varies on the location and the specifications, but there's many different sizes and specs of racks around the world.

There's not a universal standard, and that's because every company and every colo has its own specific needs.

Okay. And I think that goes for all the questions that I have. So if there's anything.

Any other questions that you want to ask. Please go ahead. So for me, I'll start.

Is there anything that you kind of wish to see in your projects or implemented at Cloudflare?

I'm thinking of something like an ATS maybe. Is that a good idea.

Should we push to do more DC, or, like, have a bigger makeup of DC? What do you guys think? We're trying to leave ATSs alone, because it's a single point of failure.

Every single piece of hardware we buy, we try to buy dual fed. But ATSs are great until they fail.

They work fine. They switch over from A to B and they do what they need to.

But as soon as that device has failed, you've lost both A and B feeds. Right, right.

It becomes a bigger problem than actually putting one in. It becomes more of a hindrance, and then we only know about it when it's failed.

Right. And you do a lifecycle of X amount of years and you replace it, and there's nothing wrong with it.

And you're just doing it for warranty. You can always get that one bad one out of a factory that doesn't last that long.

So for us, I don't think we would move forward with them.

Right. I would say in previous employments, you know, ATSs were kind of a big component.

But when we look at kind of the Cloudflare architecture, it adds just a level of complexity that can be resolved with dual feeds and then doing due diligence with your power grid reviews of single lines.

Okay. All right. I think that's, that's all I have. If you guys have any questions.

If you guys have anything else. We've got two minutes. Yeah, we got two minutes.

I think we're, we're pretty much ready. We're ready to close this off.

So for everybody, the audience that are here. Thanks for tuning in.

Hardware at Cloudflare episode three. It's me, Rob Dinh, Steve, and Chris. It was a pleasure.

And now maybe we can do this another time and definitely would love to hear more about our homegrown DCIM tool.

All right. Okay. Mic off. Thank you.
