Cloudflare TV

🎂 Pradeep Sindhu & Nitin Rao Fireside Chat

Presented by Nitin Rao, Pradeep Sindhu
Originally aired on 

2020 marks Cloudflare’s 10th birthday. To celebrate this milestone, we are hosting a series of fireside chats with business and industry leaders all week long.

In this Cloudflare TV segment, Nitin Rao will host a fireside chat with Pradeep Sindhu, Founder & Chief Scientist at Juniper Networks, and Founder & CEO at Fungible.

English
Birthday Week
Fireside Chat

Transcript (Beta)

Welcome, everyone. We're celebrating birthday week here at Cloudflare. It's been 10 years since the launch of Cloudflare, helping build a better Internet.

And what better way to celebrate than speaking with industry leaders we respect, talking about the last 10 years and the next 10 years on the Internet.

I'm really honored to have as my guest today Pradeep Sindhu, who is the founder of Juniper Networks and currently the founder and CEO of Fungible.

And so it'll be really exciting to chat with him about his entrepreneurial journey, speaking about trends in computing, and just ideas and advice he might have for engineers.

We have only 30 minutes, and so we're going to dive in and see where the conversation goes.

Thank you so much for making the time, Pradeep. I know I join so many engineers listening on the call who really appreciate your making the time.

Thank you, Nitin.

And thank you to Cloudflare and yourself for inviting me. It's an honor.

So let's maybe start off by just setting the stage and talking about the early days of Juniper Networks.

And so would you mind sort of comparing and contrasting the company you're building today and founding Juniper?

So what's similar and what's different? So Juniper is now used to power just a substantial portion of the Internet.

And so thank you for your work there.

How is being an entrepreneur today different from founding Juniper?

Well, as many of your listeners may know, Juniper was founded in 1996. And this was the time at which the Internet was kind of in trouble.

In fact, a lot of prognosticators said, this is a university, IP is a university thing, it's going to die.

And then other technologies like ATM, or perhaps just Ethernet, layer 2 Ethernet, would take over.

And the insight that I had, if there was an insight, is that for building networks, scale matters.

And the critical problem that you could notice back then is that the demand for bandwidth was growing really, really fast.

And the raw supply in terms of optical fiber was also growing very, very fast.

But the key technology that was needed to convert this raw bandwidth into usable any-to-any bandwidth was IP routing.

And that was using general purpose processors.

And so the bandwidth was growing at probably two to three x every six months.

I don't know if people remember those times, but it was going extremely rapidly by any metric.

And that's very fast.

That is very fast. And networks were melting down left, right and center, which is what caused people to say that this Internet thing is not going to go anywhere.

And the general purpose compute, general purpose processors that were used to build routers, to do everything inside routers, that technology was improving in performance at about a doubling every 18 months, so-called Moore's law.

Well, you can see the gap, right? There's a gap of a factor of three in the exponent.
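The two growth rates Pradeep quotes can be put side by side in a quick sketch (the 6- and 18-month doubling periods are from the talk; the three-year horizon and the Python framing are just illustrative choices):

```python
# The gap between bandwidth demand (doubling roughly every 6 months, as quoted)
# and general-purpose compute (doubling roughly every 18 months, Moore's law).
# The three-year horizon below is an arbitrary illustrative choice.

def growth(doubling_period_months: float, months: float) -> float:
    """Growth factor after `months`, starting from 1.0."""
    return 2 ** (months / doubling_period_months)

months = 3 * 12  # three years
bandwidth = growth(6, months)   # 2**6 = 64x
compute = growth(18, months)    # 2**2 = 4x

print(f"bandwidth: {bandwidth:.0f}x, compute: {compute:.0f}x, "
      f"gap: {bandwidth / compute:.0f}x")
```

Because the exponents differ by a factor of three, the gap itself grows exponentially: every additional three years multiplies it by another 16x.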

And that's never a good idea. And so what Juniper is credited with is coming up with an architecture with a division of labor between what happens in software on a general purpose CPU and what happens in silicon that is special purpose, designed for building routers at very, very high speeds.

And, you know, today, nobody questions that architecture. Back then, I was called an idiot for actually talking about that.

But, you know, looking back, I think Juniper is properly credited with saving the Internet.

At least this is Vinod Khosla's view at Kleiner Perkins.

And, you know, if we did indeed do that, it's humbling.

But at the very least, I can say that we have contributed to the development of the Internet.

That's great. Sorry, go ahead. So I'm now going to contrast, right?

So that was 1996. This is 2020. You know, it's almost a quarter century later.

The situation is similar and it's different. Situation is similar in the sense that we are out of gas with general purpose computing.

I don't know if this is generally understood.

But Moore's law, especially as it applies to general purpose computing, is growing very, very slowly, perhaps maybe 10, 15% per year.

It's nowhere near the 2x every 18 months.

And for the average person who is maybe a little less familiar with these terms, if you're building your own computer, you go to Intel.com, you buy an Intel processor, that's general purpose computing.

That is general purpose computing.

It is, you know, the idea of general purpose computing, by the way, is now literally 75 years old.

It was put on paper by John von Neumann, who was a famous mathematician of the previous century.

And it was done in the context of a computer called EDVAC.

It was the first programmable, stored-program computer.

Prior to that, people used to stitch computers together by hand and connect wires and so on and so forth.

And of course, the reason that this invention has revolutionized the world is because things become much more malleable, and you can do things much quicker.

You can write programs flexibly. And of course, there's an entire cadre of technologies that have been developed.

So the idea of general purpose computing is perhaps one of the most important ideas invented by mankind.

Okay, so I don't want to downplay this. The problem is that the engines that power this general purpose computing use a technology that is called CMOS, or complementary metal-oxide semiconductor.

CMOS technology is also now some 63, 64 years old.

And it is the thing that is running out of gas. In other words, the improvements that were coming to us for free, not for free, there were a lot of people doing the hard work.

Well, those improvements have been slowing down.

And over the next two generations of technology, they will be completely flat.

So the question that I asked myself, right, this is now five, seven years ago, because you could see these trends back then.

What is the world going to do?

Are we going to not want more general purpose computing, faster storage, etc?

And the answer was, of course, people are going to want it, because no amount of compute power, storage, and any-to-any networking that we can supply will be enough; these things are infinitely useful, because they can be used for doing many, many things that we find desirable.

So it's on that faith that I launched Fungible, along with my co-founder, Bertrand Serlet.

And this is to make a big, big improvement in the efficiency, the reliability, the security, the economics, and also the raw performance of data centers. Because data centers, you know, maybe a decade and a half ago, started to use general purpose computers in a manner that is called scale-out, which means lots and lots of general purpose servers, every one of them having an Intel CPU or an AMD CPU inside.

And this is the same thing that powers your PCs, slightly different processor, but you get the idea.

And the idea was to put many, many of these things working together, like an army of ants, to conquer a big problem, as opposed to one big machine. Any individual machine can fail, but the whole pool keeps working.

In fact, that was the critical idea. And the critical idea behind scale out and microservices is that idea, which is that I have an assembly of worker bees, and any of these worker bees could go down, but the service still continues to run.

In other words, separation of service from servers. Well, that idea has played out.

In other words, once I do scale out, I cannot use it a second time. If you see what I mean, it's a joker that can be played once.

Okay. After that, the only thing you can do, at least by my reckoning, is either build more data centers at the edge, which people are doing.

But the other thing you can do is you can improve the architecture, you can recognize that general purpose computing has reached a kind of limit.

And you need to use special engines for frequently occurring computations.

You already see this, you've been seeing this for a while. GPUs, graphical processing units, are used to do vector floating point computations.

Okay.

FPGAs are used for doing specialized things that I need to do soon, but they're difficult to program.

Well, what Fungible is working on is a deep technology. You know, these days, deep technologies are not fashionable.

What's another million lines of PHP code or Go code running somewhere?

But, you know, let us recognize that all of the stuff that we use on a day-to-day basis actually depends on some fundamental underlying infrastructure technology.

So what Fungible is trying to do is advance the state of the art in a very, very, very important place.

And we can improve the economics of data centers for enterprise data centers by an order of magnitude, 10x, and of hyper-scale data centers by a factor of 3x.

No questions about that.

That is the technology that we have. But we're not a silicon company. We're working on silicon systems and software.

Sort of an old-fashioned way to make money.

And so in some way, the analogy holds to when you were describing Juniper in the late 90s, because we were not improving performance at the pace we would need to keep up with growing demand.

There's a very strong analogy, by the way, which is that recognizing that there is a technology bottleneck, and perhaps we have some insights to solve that bottleneck.

And then if that bottleneck is solved, then a whole bunch of nice things will happen.

And so if you're a manufacturer of general-purpose computing processors, does that change how you think about your business for the next 10 years, 15 years?

Are there segments where they continue to add a lot of value?

General-purpose computing is incredibly valuable.

It is incredibly valuable because of the programmability and agility that it provides.

What we're saying is that the DPU is not a replacement for GPUs.

And the data processing unit is the DPU. Yes, data processing unit is the DPU.

It is complementary to these other existing microprocessors.

General-purpose CPUs are designed for running general-purpose workloads.

GPUs are designed for running vector floating point heavy workloads. DPUs are designed for running data-heavy workloads, which is roughly one-third of the power consumption in data centers today.

So it's not a piddling little workload that you can ignore anymore.

And by the way, you should ask me, where is this workload running today?

Well, it's running in software inside the kernel, typically Linux, inside x86.

But it is now widely recognized that that is not the right place to run it.

And the general-purpose microprocessor architecture is not the right architecture to apply to this problem.

That is fundamental. You can take that to the bank.

Now, what are examples of data-intensive applications? So I'll give you two sets of examples.

One set of examples, the most important ones, are examples of infrastructure workloads, infrastructure computations.

These are below the surface.

Most people don't see them. This is the network stack inside the data center.

Everything is inside the data center. The network stack, the storage stack, the virtualization stack, and the security stack.

Those are four computations that today happen inside a general-purpose CPU, which are being done extraordinarily inefficiently.

And they're at some level, I don't want to be too cynical about it, but if you're an infrastructure-as-a-service company, that's the overhead.

It's absolutely overhead. In fact, this is again a very misunderstood point.

There is a reason that AWS bought Annapurna. Annapurna was a general-purpose ARM core-based chip.

And what Amazon did, which is brilliant for them, and it worked really well, is say: I can buy ARM cores cheaper than I can buy x86 cores.

So I arbitraged the difference, and life is good. There was no architectural improvement that at least I could detect.

Now, maybe there was some brilliance in there that I cannot see from the outside.

But what we've done is entirely different.

We've said, hey, here's this very, very important workload. I've given you four examples, which are in the infrastructure space.

Another example is analytics.

So generally, analytics and the training part of machine learning uses very, very large datasets, because machine learning works better when I have more relevant data than less relevant data.

It's the whole idea. Training is much more effective.

Well, when you do that, you have to shard the data across many, many machines.

And what you want to do there, whether it's analytics or machine learning, you want to do the computation next to the place where the data is kept.

You don't want to drag it 300 meters over to a computer that will then crunch on it, because that's too slow.

It's a bad way of doing it. That's how it's done today. So you want to do the computation next to where the data is stored.
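That idea, pushing the computation out to the shards instead of dragging the data across the network, can be sketched in a few lines (the shards, the simple sum standing in for a real aggregate, and the record counting are all hypothetical, just to contrast how much data crosses the network in the two approaches):

```python
# Minimal sketch of "move the computation to the data" for sharded datasets.
# Hypothetical data: 1000 records striped across 4 shards (4 nodes).
shards = [list(range(i, 1000, 4)) for i in range(4)]

def pull_then_compute(shards, fn):
    # Naive: drag every record over the network, then compute centrally.
    all_rows = [row for shard in shards for row in shard]
    return fn(all_rows), len(all_rows)          # result, records moved

def push_compute_to_data(shards, fn, combine):
    # Push fn to each shard; only small partial results cross the network.
    partials = [fn(shard) for shard in shards]  # runs next to the data
    return combine(partials), len(partials)     # result, records moved

total, moved = pull_then_compute(shards, sum)
total2, moved2 = push_compute_to_data(shards, sum, sum)
assert total == total2  # same answer either way
print(f"pull: {moved} records moved; push: {moved2} records moved")
```

Both paths produce the same answer, but the pull approach moves every record while the push approach moves only one partial result per shard, which is the traffic asymmetry the DPU is positioned to exploit.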

And the DPU is exquisitely good at doing that.

The DPU is also exquisitely good at doing network tasks.

So if you look at the intersection of network and storage, that's one of the places where the DPU shines.

And there's nothing else which is there. So let's speak maybe for a minute more about ARM.

So on the face of it, ARM seems terrific. You get a lot more cores.

You use fewer watts per core. And if you're running an infrastructure company, you get a great deal more efficiency from your processors, which you can pass on.

I know you mentioned about the architecture not having changed, but isn't that more problematic?

No, Nitin, that argument was valid back in 2015.

It is not valid today, because competition for x86 compute cycles has caused the price per core to drop quite dramatically for x86.

At the same time, if somebody is building, quote unquote, a smart NIC, and all they're doing is taking ARM cores and plopping them down, I am sad to break the news to these guys that you're not going to see any performance gains over x86.

In fact, it'll be less effective than x86, because there's nothing in the ARM architecture that speaks to this problem.

Zero. By the way, here people often fall into the idea that, oh, if only I had some special three or four instructions, this problem would be solved better.

Sorry, this is not an instruction set problem. Any risk-like instruction set is okay.

The problem is not in the instruction set. The problem is how these cores interact with each other and how they interact with accelerators.

That's where the magic is.

But, you know, Napoleon Bonaparte once said, when your enemies are in the middle of making a mistake, do not point it out.

Okay. Let them make the mistake.

And there's this herd of companies, which are galloping towards the precipice.

I'm not going to tell them that you're making a mistake. Please go right ahead.

I should probably point out the show is being live streamed. So they might be listening in.

That's fine. Look, it has taken us four and a half years of hard work to develop things.

Anyone who starts now is going to be four and a half years behind us.

So I welcome competition. So can we talk a little bit about just programmability?

I think one of the things that is sometimes a little scary is when you hear about FPGAs, for example, not everybody knows how to program in HDL.

Not everyone knows what HDL stands for. How do you make this accessible so that you can harness developers around the world?

This is a very, very important point, Nitin.

You know, programmability, like many, many things like flexibility and so on, is a big buzzword.

Sometimes people understand programmability to mean, oh, I'm writing, I have x86 binaries.

Programmability has nothing to do with x86 or binaries or anything like that.

It has to do with the fact that I can very quickly and conveniently write programs, preferably in a high level language.

You know, C is a systems programming language. C++ is a systems programming language.

Because I'm talking about systems, that level I'm talking about.

Or if you're writing applications, you'll even use interpreted languages like Python.

So, you use the right abstraction for programming the machine.

And you use the thing that gives you the most productivity as a programmer. If you look at FPGAs, FPGAs are very difficult to program for the average programmer because they have very little to do with the kinds of things that most programmers are used to.

You know, the lowest level programming language most programmers today know is C.

And you're lucky if they know C. And you can't write C code and get good results out of an FPGA.

It just doesn't work. We don't have that good compiler technology.

So, people spend an inordinate amount of time optimizing FPGAs.

The other problem with FPGAs is that FPGAs natively start a factor of 10 behind everything else.

The clock speed on FPGAs is typically much, much slower.

And it's slower simply because the flexibility that is there in FPGAs comes from the internal network that connects the computational units.

And because of that flexibility, you have to pay a price in terms of clock speed.

I'm talking factor of 10, maybe more.

And so, you end up way behind. And you gain a little bit because I can directly program the hardware, but nowhere near tailoring the device to the problem at hand.

Now, I will point out that you use the term programmability.

Programmability is really about how quickly can I solve a problem given a human being?

How quickly can I write code, correct code, and get it to work for a new problem given the same piece of hardware?

That's programmability. And there's many ways to achieve it.

The other piece is performance. How much work can I get done per second once I've programmed it?

Well, surprise, surprise, you need both.

And mother nature being what it is, there is a trade-off between those two.

If I want to get very flexible, I'm not going to go that fast. And if I'm trying to go very fast, it ends up being not very programmable.

So, this is why people think when you're giving them highly programmable things, the flexibility is going to be low.

The only way, Nitin, to break this trade-off is to innovate on architecture, computer architecture.

Computer doesn't mean my PC sitting on my desktop.

I'm talking about computer architecture conceived widely. So, if you innovate in architecture, like Juniper did some 24 years ago, we actually got a 20x price performance improvement, okay?

This time around, it's a mere 10x to 15x.

It's still huge. In high technology today, getting an advantage of over 10x is, well, nigh impossible.

And so, what I'm saying is that we have flexibility in the DPU, programmability, if you want, which is close to the flexibility of a general-purpose CPU, close to that, not exactly equal to, very close, with about 10 to 15 times the performance.

And so, I'd love to get your advice for, I guess, two types of engineers.

First, if we go back to 1996. Not everyone was working at Juniper Networks, but so many companies were on a path to become customers of Juniper Networks.

What advice would you give an engineer in 1996? What advice would you give an engineer listening in, I guess, 2020?

You know, there's one piece of advice that I would give people, which would be exactly the same back then and today, because in both cases, it's worked for me, and it makes sense.

That advice is, base your thinking on fundamentals. You won't go wrong. And listen to where the market is going.

And you try to intersect those two pieces, because the best products, innovative products, breakthrough products, are typically found at the intersection of those two domains.

It's some insight into how things work, at whichever level you're innovating, okay?

There's innovation to be done in marketing.

There's innovation to be done in sales. There's innovation in business models.

There's innovation to be done in technology at the application level, at the infrastructure level, at the operating system level.

So I'm not saying that the only place to innovate is the place we're doing it.

But what I am saying is that we are now, the industry is at a point where we have been grinding away optimizing software for a long time.

There's not a lot of gas left there, at least as practiced in hyperscalers.

In enterprises, you can improve the stack by getting rid of layers and so on.

But there's not a lot of gas left. So what are you going to do as a result?

That's the question. And so what I'm saying is that this piece of advice, which I think is sensible, is look widely.

Don't be religious about particular technologies.

It's not about whether it's silicon or software or systems or this and that, the other.

Set out to solve a problem that actually matters to somebody.

That's the bottom line. And to do that, you have to intersect those two things, technology on the one side and market needs on the other.

And timing, by the way, is extremely important. If your timing is wrong, having brilliant people, having a brilliant idea, and even having money may not help.

So we have only a couple of minutes left. Really briefly, can you talk about building Fungible? This has, of course, been a unique year.

How has your day-to-day changed? And how do you spend your time as you're building out your team in this company?

Well, the day-to-day, if you're asking me to compare how it compares to 1996, almost 25 years ago, it's radically different, because this year has been a very difficult year for everybody.

Here I am stuck at home.

And like most people, Santa Clara County still has orders where you cannot go to work.

And of course, we abide by those. And you try to make the best progress that you can under the circumstances.

Companies, by the way, prospective clients of ours, are all clutching their money bags, because there is a degree of uncertainty, and we should all acknowledge that.

I'm actually optimistic that this thing will be behind us.

But the only optimism I have is there are many, many scientists and doctors and engineers working day and night to actually try to get us a solution to the problem of COVID.

And that's the only ray of sunshine. I think if you look elsewhere, there's a lot of obfuscation, there's a lot of difficulty, there's a lot of negativity, there's a lot of polarization.

So in my view, there's not a lot to be happy about in 2020, aside from those things.

But important problems still remain.

And if people don't, even during difficult times, apply themselves to those important problems, well, who's going to do it?

What are you most excited about the next 10 years of the Internet?

Oh, boy, I wish I could tell you, Nitin.

You know, having had a role to play in the development of the Internet through Juniper, one of my biggest disappointments is that bad news travels much faster than good news.

I had hoped and expected that this would be a tool for disseminating knowledge for people to learn much faster and so on.

Well, look where we are. So my biggest hope is that this magnificent tool that we have, which is extraordinarily powerful, and I hope the people who are listening to me who are in a position to perhaps make a difference, will be used to propagate things that are true and not things that are false.

Unfortunately, the imperative to make money is more important than that, but it may end up destroying civilization.

And this is not an outcome that any of us want. So I'm optimistic that people will eventually do the right thing.

So there is so much potential here. And so I remain optimistic.

I always look at the brighter side of human nature, not the darker side.

But you know, sometimes you do get disappointed by events that happen.

But my hope is, not only for the Internet, but for the entire information technology industry, that it unleashes the potential of human beings.

That's wonderful.

Well, thank you so much for joining us, Pradeep. It's been a real honor, and thank you for being an inspiration to so many of us.

Really appreciate it.

You're welcome.