Mastering Velocity and Scalability to Serve the World's Largest Organizations
Best of: Internet Summit
Session 1 - 2015 Mastering Velocity and Scalability to Serve the World's Largest Organizations
- John Shewchuk - CVP & Technical Fellow, Microsoft
- Moderated by: Michelle Zatlyn, Co-founder and COO, Cloudflare
Session 2 - 2016 The Fastest and Most Secure Internet is Closer Than You Think
- Eric Rescorla - Mozilla Fellow
- Jana Iyengar - Software Engineer, Google QUIC
- Moderated by: John Graham-Cumming, CTO, Cloudflare
Okay, so John and I stand between you and this beautiful evening. So we are wrapping up today's awesome Internet Summit.
So I'm very honored to be sitting here. For those who don't know, this is John Shewchuk.
He told me to think about it as throwing a shoe at somebody: shoe, chuck.
So that actually really helped. And if you go to his LinkedIn profile, he probably has the best LinkedIn profile I've ever seen.
It actually inspired me to update mine.
And the very first sentence is, I'm lucky to have what I think is the coolest job on the planet.
I mean, how cool is that? Well, in the software industry.
Okay, okay, now he's adding a qualifier, exactly.
So what is it that you do at Microsoft? What is this coolest job on the planet?
So Microsoft, I don't know how I landed this gig, but they gave me the job of leading a team of over 600 software engineers whose only job is to go out and partner with people all across the planet and do really cool stuff.
And it's this kind of bizarre pay it forward model. We don't charge anything to do this.
We work on any crazy kind of technology and the hope is that in the process of doing these things and solving those hard problems, it's going to ultimately benefit Microsoft.
Wow, that is pretty cool. And so give us some examples of some of these companies you've partnered with and some of the sorts of things you get to work on.
Well, so the kind of the really big scale, folks like General Electric have these massive industrial Internet systems.
Siemens, Rockwell, they have oil rigs and they're situated all over the world.
We help them aggregate that data, do machine learning over it, that kind of stuff.
But it goes all the way down to the startups.
We partner with Sam Altman's Y Combinator here down in the valley.
Just right nearby is Mesosphere, and I've had a young rock star dev embedded down here coding away, checking code into the kernel to help Mesosphere's Mesos algorithm schedule both Linux and Windows workloads.
So that's kind of the first time there's been that sort of cluster management for their product, not for a Microsoft product.
That's great. So basically working with all of these, through working with all these different partners, you really get to see what's coming on the next horizon at both large organizations and small organizations.
So tell us a little bit about what are some trends you're seeing emerging that you're personally really excited about?
Well, there's just so many kind of patterns that we end up seeing.
You have to kind of break it into a couple different areas.
Like there's patterns in big data, there's patterns around IoT, there's patterns around what's happening with machine learning.
You know, those, I would say, those are actually areas where we see the most projects.
Often together. Like one of the most interesting things that I come across is just this huge influx of projects from these top industrial companies who have massive amounts of data.
They're trying to stream it up to the cloud, put it into big data stores, and then do analytics over that.
And that analytics might be things like anomaly detection or predictive maintenance.
But then they want to really take action on those things.
So those are some of the kinds of things that we're seeing a lot of.
That's great. So within Internet of Things, what are some of the trends your team is seeing?
Because that's such a topic that we hear all the time.
But what are some tangible real-life examples of what you are seeing that the audience and those online may be interested in hearing about?
Well, just yesterday I had a little company from the valley up, Richard McNiff's company, I forget the name of it.
They're trying to understand how to go change the world around advertising.
If you think about advertising in this day and age, broadcast advertising has become very ineffective.
And so what they've done is they've switched to this mode of almost Flipboard-like storytelling in their app.
And all of these great stories are starting to show up on their platform.
So we're having that conversation.
We're talking about how they want to get that content out to any platform, including on Windows.
So that was that conversation. Meanwhile, I had the top industrial and electronic signage company from Germany in.
And they have more Internet points of presence across that whole region of Europe than Google or anyone else, because they've got these signs around.
But they've done broadcast.
And that broadcast just isn't being effective. So I said, hey, we should get you two together.
So the two CEOs got together, CTO and CEO got together, and they're excited about a deal.
And that, you know, was kind of yesterday.
But I think it's indicative of this world of IoT. It's not just about...sometimes people think about it as, you know, Nest and smart things.
Some people think about the kind of world of drones and things like that.
The place, I think, the largest amount of data is in these existing businesses, like the Rockwells, the GEs, the oil and gas, the pharmaceuticals, where all this stuff is flowing around.
Mm-hmm. That's great. That's great. So changing gears a little bit.
You've been at Microsoft for 22 years. You guys rode a huge wave up. Yep. And then you've kind of stayed steady for a little bit.
But then now you're kind of have a new revolution going on with Microsoft.
And I know Andy McAfee just described you as potentially getting kneecapped.
But tell us what... Oh, it's fine.
Yeah, yeah. Tell us what it's like...what's it been like seeing all the different leadership changes?
And how has that impacted Microsoft in setting you up for the future?
Well, as you know, early on Microsoft was one of the companies that helped pioneer the use of personal computing.
And I had the opportunity to work really closely with Bill.
And that was a very energetic time, lots happening. I got to help build the version of Internet Explorer that was the version before people got mad at us.
And I got to do this crazy thing called Visual Studio and .NET.
So those were super fun times. After Bill kind of stepped back, Ballmer came in and the focus of the company was very much on bringing those technologies into the enterprise.
And I think if there was a reason that the company kind of started to lose its way is that the world was changing around us in terms of mobile and Internet.
And so we have a new CEO in Satya. This is a guy who grew up doing mobile, Internet, very comfortable in that world.
I kind of think of him as an Internet citizen.
Whereas Bill and Steve, they loved Windows. And so I think it was a little hard for them to think about things like what we're doing now with bringing all of our apps across iOS and Android.
In fact, I would say most of the kind of work that my team does just kind of wouldn't have been possible even back then.
And I think some...he mentioned the evil empire.
The Mesosphere guys were...actually told us this.
They thought of Microsoft as the crazy evil empire. And now we're sitting there checking in code together, and they're having a great time working with us.
Lots of partnership. Just like with Cloudflare. Well, we love working with you.
So it sounds like there has been a big change. So the perception in the media, and you feel it internally as well.
Like even little simple things like open source.
My team primarily works on open source. What did that conversation sound like when you had open source conversations internally five years ago versus...
Well, you heard Ballmer, right? That was anti-capitalist. It was the end of the world.
We had swarms of lawyers who would descend on anybody who even thought about doing it.
Not anymore. Like I said, almost all the work my team does is out on GitHub.
We do it in partnership with lots of other people. That's great. That's amazing.
It's amazing to hear that a 100,000-person organization can change so drastically under a certain leadership.
That's an incredible business school study.
So for the developers in the room, or who are online, or companies who all of a sudden are saying to themselves, oh, wow, we want to start working with Microsoft.
This is cool.
John is really cool. How do we start working? How do other companies think about engaging with Microsoft?
I mean, you guys are a massive organization, lots of different verticals.
What do you say to those who want to get started? Well, there's a lot of documentation and other kinds of things out there.
It really depends on kind of where you're coming from and where you want to get to.
A lot of companies we work with, for example, are trying to create solutions that have very broad reach.
And so they're often looking at the Windows platform. We're seeing increased excitement around what we've done recently with Windows 10.
So we're getting a lot of conversations around that.
But I would say, really, right now, the energy for many companies is around partnering with Microsoft in the enterprise space.
We've got an incredible number of assets in terms of Windows Server and Active Directory, what we're doing with Azure.
The Office suite is very, very popular among large organizations.
And so what people are doing is they're writing applications that connect to all those APIs, and they're leveraging Microsoft's sales force to actually go be effective.
It's probably the number one thing startups come and talk to us about.
Cool. Great. Good. Does that help?
Yeah, definitely. Absolutely. So if you're an entrepreneur sitting in this room or watching online or the next Y Combinator batch, what advice would you give them in terms of areas to focus on for the next five years, looking at emerging trends?
What aren't people thinking about that they should be? What aren't they thinking about?
Well, there's a couple of areas where I think people may underestimate how rapidly change is going to occur.
We just heard from the guy from Qualcomm, who we work closely with. With the ability to have high-bandwidth, low-latency connectivity, just make the assumption that it's going to be ubiquitous.
And what will that change? And he talked about some great examples of real-time control of devices.
I think deep learning, which I think a lot of people are pretty familiar with, is just going to sweep through the industry.
It's already profoundly changing many of the products that existing older line industries have in place.
And the example I might use is, again, back to the General Electrics, the Rockwells, the Siemens of the world.
They've got petabytes of data going back 30 years on motors that are sitting out in the middle of oil fields.
And they just don't.
They don't know what to do with it. And so the ability to bring that in, do anomaly detection on it with deep learning, and then do the predictive maintenance, the amount of money that that can save an organization, because one of those oil wells goes down, it's a big deal.
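As a concrete, greatly simplified illustration of the anomaly-detection step described here, a plain statistical baseline over hypothetical motor telemetry might look like the sketch below. The data and threshold are invented for illustration; real systems would apply deep learning over far richer sensor streams.

```python
import statistics

def detect_anomalies(readings, threshold=2.5):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mean) / stdev > threshold]

# Hypothetical motor-temperature telemetry (degrees C); the spike at index 5
# is the kind of outlier that would trigger a predictive-maintenance check.
temps = [71.2, 70.8, 71.5, 70.9, 71.1, 98.4, 71.0, 70.7]
print(detect_anomalies(temps))  # [5]
```

A z-score baseline like this is often the first thing tried before reaching for a learned model, because it gives a sanity check on what "anomalous" even means for the data.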
So the thing I would suggest to people is just think about kind of those technologies that, in the Moore's Law kind of way, will continue to advance and look for the interstitials on them.
I have one more question. So if people have questions, start thinking about them now, because I'm going to turn to the audience, because I definitely want to give people an opportunity to chat with you.
So you've been at Microsoft for 22 years.
You may be there another 22 years, but I won't, who knows.
But let's say, you know, if you could write your ideal job for the next five years, what would be some of the characteristics?
What would it say for you?
There are kind of two things, and I'm actually kind of thinking about that.
I love the current job, so I don't see any reason to go change it: the ability to go work with all these awesome companies and help them solve problems.
That's been fun. The one thing I worry about in the role is there's so much that we're doing.
And the consequence is, I only really get the opportunity to participate in these projects on relatively short-duration sprints.
And as a technical person, I love to be able to just kind of close the door and disappear for a month or two and play with a new piece of technology.
And I just don't get the opportunity to do that in this kind of role.
One of the projects that I launched when I was on the Windows Server team was around the REST APIs for Microsoft Office.
We call it the Office Graph.
And we've just been releasing that. You know, if you've ever used the Facebook APIs or any of those kind of social network APIs, we've got that now for enterprises.
It's pretty cool. You can navigate through things like an organization down to your files, to the owners, to their boss or whatever it happens to be, and get easy access to that.
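That graph-style navigation is today exposed through the Microsoft Graph REST API. A minimal sketch of walking from "me" to my files to my manager, assuming you already hold an OAuth access token (the token placeholder is hypothetical), might look like:

```python
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"  # Microsoft Graph endpoint

def graph_request(path, token):
    """Build an authenticated GET request for a Microsoft Graph resource."""
    req = urllib.request.Request(f"{GRAPH_BASE}/{path.lstrip('/')}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

# Navigating the graph as described: me -> my files -> my manager.
for path in ("me", "me/drive/root/children", "me/manager"):
    print(graph_request(path, "<access-token>").full_url)
```

Sending these requests with `urllib.request.urlopen` would return JSON entities whose links let you keep hopping through the organization, much like a social-network API.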
I'd love to take that technology and really push it, add the machine learning in and a bunch of other fun things.
So that's another thing I'm thinking about.
Nice. That's great. Are there any questions in the audience?
Well, I have other questions. Oh, there is one. Here, we'll let the audience go.
One of the speakers earlier today was the president of Estonia, who was talking about their digital infrastructure and the digital identity they provide for their citizens, some of the most advanced in the world.
And I know Microsoft, and you personally, were part of leading this, working on information cards and...
Yeah, we used to work on that together.
Yes. So I'm curious: that didn't succeed in the market, which is what markets do.
They tell you, they vote. But what is Microsoft doing now in that space, and where is that going, since we're on the next-five-years theme?
So in terms of what we're doing, we've made a pretty significant investment in trying to bring together the public identities that people might associate with a Microsoft account, but also the identities that people have in schools and in businesses, and we're trying to make that much more seamless.
The thing that information cards did, this project that we had worked on a while ago, was that it was specifically looking at the challenges people face being phished: how do you really know the reputation of the folks on the other side of the communication?
And, you know, we still struggle with this ability to get spoofed inside of browsers and so on.
So I'd love to see the world move increasingly away from passwords.
In a lot of ways, I think that the app developers have the opportunity to go do this.
Now that we have mobile devices that have biometrics on them, they're typically something that we don't lose.
You combine that with a PIN and I think you've got a very solid foundation for clients to identify themselves to servers, and then because they've already established those connections, we can use that to keep things running well.
So we've built some new technologies into Windows 10, Windows Hello and so on that you may have seen, that's intended to do just that.
For example, in the Windows Hello demos that we've been doing, we use an Intel camera that looks at the person in 3D, looks at the iris, other things like that, and makes a determination about whether that's you and lets you log into the device.
Do the early beta customers of that, do they like that or does it scare them?
I think it's the same reaction people have with the fingerprint readers.
It's pretty common now that everybody uses them.
You have to be a little sensitive about where that information gets transmitted.
As long as it's maintained locally on the machine and not transmitted up to the cloud where it could inadvertently be used for nefarious purposes or leaked or whatever, I think people are okay with it.
Great, that's great.
This comes back to what the President of Estonia mentioned is data integrity.
Yes, exactly. Do you guys spend a lot of time thinking about data integrity at Microsoft within your team?
Yes. I would say one of the very interesting trends I've seen over the years is if I were to roll back the clock five years and I would go talk to large companies about using the cloud to do computing, the assumption that they made was, wow, we will never release our secret data up to the cloud.
It all has to be locked in on premises because that's the safest place for it.
What most companies have since discovered is that they are incredibly at risk.
I've worked with major oil and gas companies who are compromised all over the place.
Their directories, their identities have been essentially penetrated and sold to third parties.
What those companies have come to us and said is, hey, companies like you or Amazon or Google where you're running these things at scale, you're under attack every single day.
And so even though we may not be perfect, we spend an enormous amount of time looking at those systems.
I think people know that Windows ends up being the most attacked operating system simply because of the large numbers.
As the other OSs have grown in numbers, they see those same kind of attacks hitting them.
And companies that make those products ultimately have to step up to do the security and prevent the problems.
That's great. That's one of the reasons that we love working with the Cloudflare guys.
One of the best things about technology is the rate of change.
Things that were really popular five or ten years ago come out of fashion and new companies emerge, which is why it's so great to be an entrepreneur in the tech industry.
And I love that Microsoft is having a new chapter open up. And it was great having you here today.
Very excited for everything that you and your team, the organization, are looking forward to in the future.
And I think that we're going to see a lot more from Microsoft going forward, which I think is great.
Thanks. All right. Thank you, John. FindLaw is a Thomson Reuters company.
They're a digital marketing agency for law firms.
Their primary goal is to provide cost-effective marketing solutions for their customers.
My name's Teresa Jurisch. I'm a lead security engineer at Thomson Reuters.
Hello. My name is Jesse Haraldson. I'm a senior architect for FindLaw, a Thomson Reuters business.
So as the lead security engineer, I get to do anything and everything related to security, which is interesting.
FindLaw's primary challenge was to be able to maintain the scale and volume needed to onboard thousands of customers and their individual websites.
So the major challenge that led us to using Cloudflare is Google was making some noises around emphasizing SSL sites.
They were going to modify the Chrome browser to mark sites that weren't SSL as non-secure.
We wanted to find a way to, at scale, move 8,500 sites to SSL reasonably quickly.
And doing that at scale, at the speed of our operations, it needed to be something that was seamless.
It needed to be something that just happened.
We had tried a few different things previously, and it was not going well.
And we tried out Cloudflare, and it worked, just kind of out of the gate.
Like us, FindLaw cares about making security and performance a priority, not only for their customers, but for their customers' customers.
Faster web performance means having customers who actually continue to the sites.
It means having customers who stay and engage with the sites.
65% of our customers are seeing faster network performance due to Argo.
So that's an extremely important thing. The performance, the accuracy, the speed of that site fronted by Cloudflare is super essential in getting that connection made.
I like the continued innovation and push that Cloudflare brings.
Cloudflare is amazing. Cloudflare is such a relief. With customers like Thomson Reuters, FindLaw, and over 10 million other domains that trust Cloudflare with their security and performance, we're making the Internet fast, secure, and reliable for everyone.
Cloudflare, helping build a better Internet.
You run a successful business through your e-commerce platform.
Sales are at an all-time high, costs are going down, and all your projection charts are moving up and to the right.
One morning, you wake up and log in to your site's analytics platform to check on current sales and see that nothing has sold recently.
You type in your URL, only to find that it is unable to load.
Unfortunately, your popularity may have made you a target of a DDoS or Distributed Denial of Service attack, a malicious attempt to disrupt the normal functioning of your service.
There are people out there with extensive computer knowledge whose intentions are to breach or bypass Internet security.
They want nothing more than to disrupt the normal transactions of businesses like yours.
They do this by infecting computers and other electronic hardware with malicious software or malware.
Each infected device is called a bot.
Each one of these infected bots works together with other bots in order to create a disruptive network called a botnet.
Botnets are created for a lot of different reasons, but they all have the same objective, taking web resources like your website offline in order to deny your customers access.
Luckily, with Cloudflare, DDoS attacks can be mitigated, and your site can stay online no matter the size, duration, and complexity of the attack.
When DDoS attacks are aimed at your Internet property, instead of your server becoming deluged with malicious traffic, Cloudflare stands in between you and any attack traffic like a buffer.
Instead of allowing the attack to overwhelm your website, we filter and distribute the attack traffic across our global network of data centers using our Anycast network.
No matter the size of the attack, Cloudflare Advanced DDoS Protection can guarantee that you stay up and run smoothly.
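One greatly simplified way to picture the "filter" step described above is a per-client token bucket: legitimate request rates pass through while a bot's instantaneous flood is mostly dropped. This is a toy sketch of the general idea, not Cloudflare's actual mitigation pipeline, and the IP and rates are invented.

```python
import time

class TokenBucket:
    """Allow a steady request rate per client; drop bursts above capacity."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}
def filter_request(client_ip, rate=10, capacity=20):
    bucket = buckets.setdefault(client_ip, TokenBucket(rate, capacity))
    return bucket.allow()

# A bot hammering 100 instant requests gets roughly the burst allowance through.
passed = sum(filter_request("203.0.113.7") for _ in range(100))
print(passed)
```

Real DDoS mitigation layers many signals (fingerprints, reputation, challenge pages) on top of simple rate controls like this one.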
Want to learn about DDoS attacks in more detail? Explore the Cloudflare Learning Center to learn more.
They all need to be fast to delight customers.
What we need is a modern routing system for the Internet, one that takes current traffic conditions into account and makes the highest performing, lowest latency routing decision at any given time.
Cloudflare Argo does just that. I don't think many people understand what Argo is and how incredible the performance gains can be.
It's very easy to think that a request just gets routed a certain way on the Internet no matter what, but that's not the case.
There's network congestion all over the place, which slows down requests as they traverse the world.
And Cloudflare's Argo is unique in that it is actually polling what is the fastest way to get all across the world.
So when a request comes into Zendesk now, it hits Cloudflare's POP, and then it knows the fastest way to get to our data centers.
There's a lot of advanced machine learning and feedback happening in the background to make sure it's always performing at its best.
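A toy model of latency-aware routing of the kind described is just shortest-path over measured link latencies between points of presence. The PoP names and millisecond figures below are made up for illustration; Argo's actual system is far more dynamic.

```python
import heapq

def fastest_route(links, src, dst):
    """Dijkstra over measured link latencies (ms): returns (total_ms, path)."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in links.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical measurements: the direct SFO->LHR link is congested,
# so routing via ORD wins despite the extra hop.
links = {
    "SFO": {"LHR": 180, "ORD": 50},
    "ORD": {"LHR": 90},
}
print(fastest_route(links, "SFO", "LHR"))  # (140, ['SFO', 'ORD', 'LHR'])
```

The point the speakers are making is exactly this: the "default" Internet path (the 180 ms direct link) is often not the fastest one at any given moment.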
But what that means for you, the user, is that enabling it and configuring it is as simple as clicking a button.
Zendesk is all about building the best customer experiences, and Cloudflare helps us do that.
So what we're going to talk about is building the fastest, most secure Internet possible.
And these two guys claim that it's much closer than we think, so we're going to try and drag that out of them.
Eric Rescorla is a fellow at Mozilla, and his sort of rival and friend is Jana Iyengar at Google.
So these guys both work on different browsers, but as we're going to see, in order to make the Internet work more quickly and more securely, it requires a lot of cooperation between different players.
So what I was going to ask you guys is one of the things that's happened is the Internet seems to be going encrypted.
Why, and why now? Why?
It should have happened a long time ago. I think there's been a lot of movement in trying to make content encrypted, and QUIC is pushing further towards making even transport information encrypted and so on.
I'm not sure why. I don't know how to answer why now.
I think it's become clear over the past five, ten years that there are a lot of bad actors on the Internet and that they would like to look at, tamper with, etc., screw with your traffic.
And as people have realized that, they've realized that encryption is the answer to fixing that, and they've started to roll it out.
The other thing that's happened is that, well, the technology has made it much easier, so things are much faster and encryption is much cheaper comparative to other compute power.
And there's also been this sort of, I call it a coalition of the willing, an effort across a number of companies, Google, Mozilla, Cloudflare, Cisco, Akamai, a pile of us, just to bring encryption everywhere.
And so we started to systematically tear down all the barriers to rolling out encryption, everything from getting certificates to performance to latency and as we do that, it becomes more viable and therefore more imperative.
The performance thing is quite interesting, right? Because one of the traditional objections to using encryption for anything was, it's going to be slower, and we were at the same time trying to drive the web to be much faster.
That seems to not be true anymore.
What's happened? Two things. So the first is, as I said, computers have just gotten much faster and cryptographic algorithms have gotten much faster.
So elliptic curve cryptography is much faster than old cryptography.
AES is much faster than triple DES. So it uses up less compute power and, comparatively, it's a much smaller slice of what your computer needs to do.
Obviously, it doesn't make the speed of light faster, and round-trip latency used to be a big issue.
But again, there's been this systematic effort over the past few years to remove as many round trips as possible from the protocols so that we're getting to the point now where encryption doesn't require any more round trips than non-encryption, and that's what's happening with both QUIC and TLS 1.3.
So just for the audience to understand that, you just said something which is quite technical, which is right that it requires no more round trips.
So just what was happening with TLS, SSL, the secure connections, and what's happened now?
So pretty much all encryption channel protocols involve a setup phase of a handshake where you send packets back and forth to establish the cryptographic keys that you use to encrypt the traffic.
And so that involves overhead in terms of round trips back and forth between the client and the server before you can start sending any data at all.
What people have realized over the past few years is that it's possible to, once you sort of have an initial context and you've ever had any contact with a server, you can save state, and you can use that to start sending data right away encrypted just in the place where you would have sent data if it weren't encrypted.
And so that's something, let's say it's sort of an old idea, but it's been evolved in QUIC and TLS 1.3, and so now people are just starting to do that.
And so that means that there's initial setup, like the first time you contact Google, but after that, from then on, you can just go immediately.
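A back-of-the-envelope model of what those handshakes cost before the first byte of application data arrives can make this concrete. The flight counts below are the commonly cited ones (not measurements): one round trip for the request/response itself, plus the handshake round trips of each layer.

```python
# Handshake round trips spent before application data can flow.
HANDSHAKE_RTTS = {
    "tcp": 1,          # SYN / SYN-ACK
    "tls12": 2,        # two full TLS 1.2 flights before data
    "tls13": 1,        # one TLS 1.3 flight
    "tls13-0rtt": 0,   # resumed session, data rides in the first flight
    "quic": 1,         # combined transport + crypto handshake
    "quic-0rtt": 0,    # resumed, data in the very first packet
}

def time_to_first_byte(layers, rtt_ms):
    """Milliseconds until the first response byte, given layered handshakes."""
    return (1 + sum(HANDSHAKE_RTTS[layer] for layer in layers)) * rtt_ms

# On a 100 ms path, the stacks compare like this:
for stack in (["tcp", "tls12"], ["tcp", "tls13"], ["tcp", "tls13-0rtt"], ["quic-0rtt"]):
    print("+".join(stack), time_to_first_byte(stack, 100))
```

On a 100 ms path the model gives 400 ms for TCP plus TLS 1.2 but only 100 ms for resumed QUIC, which is the "no more round trips than non-encryption" point being made above.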
So this is TLS 1.3.
This is the new version of the confusingly named protocol SSL, which is sometimes called TLS because of history.
So that's got a lot faster, but Jana, you're working on something which fundamentally rips up a lot of things that are there, right?
So the Internet as it's built today is built on layers, and those layers have allowed the Internet to scale in a very successful manner because each layer is unaware, essentially, of the layer below it.
So I've got TCP, which allows me to communicate between two machines, and along comes somebody who wants to build HTTP, who happens to be in the audience, and says, okay, I've got TCP, and now I can just put this protocol on top.
Now, you've done something completely the opposite, which is to say, but that introduces some problems, that independence of the layers.
So tell us about QUIC.
What are you trying to do with this consolidation? Indeed. So I can start with just briefly describing what QUIC is.
So QUIC is a super-fast, completely encrypted transport that we're building at Google for replacing the stack, as you mentioned, of TCP, TLS, HTTP2 on top of it.
And what we get for it is we are able to eliminate a lot of the inefficiencies in the layering.
So you have round-trips, for example.
Just one example. We were talking about round-trip times earlier. TCP has its handshake round-trip, and then there's TLS, which does its handshake.
If you want to eliminate those round-trips, you have to eliminate them in both layers.
And they're independent layers, as you pointed out earlier, which means there are different players involved, there are different vendors who are shipping these products, so you have to ship it out in various different places, and you have to get the stack to do the right thing completely.
So that's one sort of set of inefficiencies.
With QUIC, we are able to establish a zero round-trip connection, which is secure, because it has equal security to TLS 1.3 in terms of establishing the connection in zero round-trips.
And we get that with one protocol. What we are trying to do here is pull in everything that you need to deliver HTTP content and turn that into one protocol, so that whatever we need to do to optimize for the web, for HTTP, we can do with this one protocol instead of having to change multiple things down the stack.
Now, this is not an experimental thing. This is actually being shipped by Google, right?
Indeed. It's actually deployed now quite widely, and if any of you have either gone to a Google service today, including YouTube, on your desktop, you probably used, well, using Chrome, you probably used QUIC.
If you used your Android device to go to a Google service using one of several apps or Chrome or YouTube app, you probably used QUIC.
So yes, it's deployed quite widely. If you're using any of these things I've just described, there's about an 80% to 90% chance that you're using QUIC.
And what's the experience been with rolling out?
Because now you've thrown away half the layers of the Internet. How is that operating?
So you're obviously relying on some other layers that still are in existence, right, in order to get the packets there.
So QUIC runs on top of UDP. It replaces the functions of TCP, TLS, and the HTTP2 layer on top, but it runs on top of UDP.
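Since QUIC rides on ordinary UDP datagrams, the surface it presents to the network is just this; reliability, ordering, and encryption are all added by the QUIC layer in user space. A minimal UDP round trip over loopback shows the bare substrate:

```python
import socket
import threading

def udp_echo_once(sock):
    """Echo a single datagram back to its sender."""
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)

# Bind an ephemeral UDP port and echo one packet from a background thread.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_echo_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"hello", ("127.0.0.1", port))
reply, _ = client.recvfrom(2048)
print(reply)  # b'hello'
```

To the network, QUIC traffic looks just like these opaque datagrams, which is why operators see it only as "an increase in UDP traffic."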
So operators see this as an increase in UDP-based traffic on the networks, but they can't look into the headers because it's fully encrypted.
So they can't really make sense of what's going on inside the packets and the transport, which is very much by design.
And we've been talking to operators constantly.
We make sure that they are informed about the increase that they're about to see in UDP traffic.
And I'd say that for the most part, people have welcomed this change.
We are now about to kick off a standardization effort at the IETF where there are multiple players coming in.
Akamai has gone public, saying that they have now adopted, that they're working towards deploying QUIC.
And there is a lot of interest in the industry because everybody wants to see this happen.
This is generally just goodness because it's more secure and it's faster. And my understanding is it's faster in some of the most difficult environments.
So we talked this morning with Ilya Grigorik about long latency mobile networks and things like this.
Why does QUIC have an advantage in that environment?
So there are a few reasons why. And difficult environments, I'll add one more to that, which is we've talked about developing regions and bringing Internet connectivity to the next billion or the next few billion.
And that connectivity is usually pretty crappy.
It's over 2G or fairly poor networks. And because QUIC is not embedded in the operating system, we built it as a transport that lives in user space.
That allows us to ship changes to the client, make changes to the server, and not have to rely on devices in the network to change.
Now this is the fundamental difference from TCP, where the TCP wire format is visible in the network, and there are a lot of network devices that do not allow TCP to evolve beyond what it currently does because they have a certain presumption of what TCP means.
Because the entire packet is encrypted in QUIC, devices cannot see inside them.
Only the endpoints see.
So we can rapidly change the transport at both ends. This allows us to experiment and to deploy new mechanisms to adapt to whatever conditions we see.
And we're able to deploy changes that are basically improvements over TCP, to congestion control, to loss recovery, and to other transport mechanisms, and we're able to deploy them rapidly to all of these places as well.
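Because the transport machinery lives in user space, swapping in a new congestion controller is an application update rather than an OS or middlebox upgrade. A toy sketch of what such a pluggable controller might look like (the class name and constants are illustrative, not QUIC's actual implementation):

```python
class AimdController:
    """Additive-increase / multiplicative-decrease: the classic shape of
    TCP-style congestion control, here as a swappable user-space object."""

    def __init__(self, cwnd: int = 10):
        self.cwnd = cwnd                    # congestion window, in packets

    def on_ack(self) -> None:
        self.cwnd += 1                      # additive increase per round trip

    def on_loss(self) -> None:
        self.cwnd = max(1, self.cwnd // 2)  # multiplicative decrease

cc = AimdController()
for _ in range(5):
    cc.on_ack()
# cc.cwnd == 15
cc.on_loss()
# cc.cwnd == 7
```

Shipping a different controller (BBR-style, delay-based, whatever experiments suggest) is then just shipping a new class in the client library.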
You mentioned in what you were talking about there the fact that it's entirely encrypted and operators cannot look inside the packets.
Eric, we were talking last night about sort of this dirty little secret of the world, right, which are these things that people call middle boxes, which are devices that have been looking at our Internet traffic for a long time and sometimes modifying it.
And it's not intelligence agencies, it's all sorts of other people.
Can we talk a little bit about that as we open the box of this secret?
Sure. I mean, essentially anybody who's on-path between you and someone else tends to look at your traffic.
And they do it for all sorts of reasons. They do it for congestion management, they do it for traffic prioritization, they do it for security, anti-virus, et cetera, stuff like that.
Hotels and airports do it so they can get in the middle and show you their stupid captive portal.
Carriers often do it so they can figure out what kind of advertisements maybe they should be showing you.
And so this happens at all the layers. It happens at the transport layer, as Jana was indicating, which makes it hard to roll out new kinds of transports.
It happens at the HTTP layer, which makes it hard to roll out new kinds of things in HTTP.
And then it happens sort of at the application layer, like the conceptual application layer above HTTP, where they do things like shove a captive portal in your face.
And sometimes this is done passively, and they just look at things, and sometimes it's done actively where they shove things in.
And so one of the things that, in general, we're trying to do is make that as difficult as possible, at least for people with whom you have no relationship at all.
So there will still be enterprises who think they need to get into the data of their own users, and that's not something anybody who works in this field is particularly happy with, but we recognize it's kind of a reality.
But at the very least, we want to set up a situation where, when I talk to Google or I talk to Cloudflare, the network I'm on, which I have no relationship with except that they're providing me wireless, can't do anything to that traffic.
For example, it's quite common for school districts to have boxes which do filtering, and in order to do that, they have to deal with the encryption that's there.
Right, so generally enterprises and schools and people like that think they own the endpoints, or at least they own the network and can restrict it.
And what they'll do is they'll require that the endpoints be configured to allow them to inspect.
And there's a variety of ways of doing it, but the typical way is they install a new certificate on the endpoint that allows man-in-the-middle interception of traffic.
And as I said, this isn't something that anybody who kind of works in this field is particularly thrilled about, but we recognize it's a reality.
And this technology we're talking about doesn't change that scenario, but it makes it much easier to differentiate between what you might call legitimate inspection devices and illegitimate ones.
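The mechanism Eric describes can be modeled very simply: a client accepts any certificate chain that ends at a root it trusts, so installing the proxy's root on the endpoint makes proxy-forged certificates validate. This is a toy model of the trust decision, not real X.509 validation:

```python
def chain_is_trusted(chain: list[str], trust_store: set[str]) -> bool:
    """Toy rule: a chain validates if its last link is a trusted root."""
    return chain[-1] in trust_store

# An unmodified endpoint trusts only public roots.
user_roots = {"PublicRootCA"}
assert chain_is_trusted(["example.com", "PublicRootCA"], user_roots)
assert not chain_is_trusted(["example.com", "ProxyRootCA"], user_roots)

# The enterprise installs its inspection proxy's root on the endpoint...
user_roots.add("ProxyRootCA")

# ...and now certificates the proxy forges on the fly validate too.
assert chain_is_trusted(["example.com", "ProxyRootCA"], user_roots)
```

This is why the interception only works where the operator controls the endpoint configuration, and why a network you have no relationship with cannot do the same.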
I'm assuming there are also some downsides of this.
One of the ways in which the Internet has got a lot faster is by doing caching.
And it's quite common in mobile networks to actually cache.
If everyone's looking at the New York Times, well, let's just cache that without telling anybody we're caching it.
What do you think the effect of this rollout of encryption is going to be on that sort of thing?
Can I digress? Yeah, I mean, obviously this makes that kind of caching impossible.
The data is actually quite equivocal on how much that caching helps.
We spent some time in the IETF trying to figure that out.
I think maybe two, three years ago we started realizing this was the trend.
And we started trying to figure out, do we still need caching?
And if we do need it, what are we going to do about it? And there's some interesting work going on in what's called blind caching about how to do encrypted caches that don't allow traffic inspection but still allow caching.
That's very preliminary work.
And as I say, there's still an open issue about whether or not this is actually an advantage.
In many cases, I think there are two things. In many cases, the operators who do these kinds of things either claim that they have an advantage for users, but in fact they don't.
And maybe they know that, maybe they don't.
And in many cases, they had an advantage for users 10 years ago, but the environment has changed and now they don't.
As more and more websites become dynamic and CDNs become more prevalent, that kind of in-network caching becomes less valuable.
I'll add that this is a very interesting tension, I think.
And there's definitely a lot of conversations happening in this space right now about how to cache with encryption.
But these really shouldn't be at odds with each other.
Arguably, the premise should be that whatever we are doing, including caching, should respect the security properties that we want.
That's where we want to get. Let's just dial forward a bit.
So QUIC starts an IETF process. People like us and others roll it out. It suddenly becomes the dominant protocol.
One of the things that's interesting is that having, sort of, torn up the foundations of TCP and TLS and HTTP2, you have a chance to keep changing.
Do you think QUIC becomes ossified like TCP has, or do you think there's some sort of evolution possible?
The hope, certainly, is that there will be evolution.
QUIC has evolvability as one of its premises, and it gets there in two ways.
One way is by making it a user-space protocol. We no longer rely on updates to various operating systems to get changes shipped.
It can be shipped as a user-space library and get automatic updates in user space, which is much faster than waiting for operating system updates.
Second, by being entirely encrypted, the transport mechanisms can evolve without having to worry about middlebox interference, or middleboxes basically falling over because they had expected a certain particular behavior from the transport.
So, in both of these ways, the mechanisms are there to allow for QUIC to keep evolving.
As we go through the IETF process, the plan is to retain that evolvability in the protocol and to allow for distributed experimentation with it.
So, yeah, that's certainly the hope. I think when these protocols were being designed 20 years ago, we didn't actually have a great idea of how to build things evolvably.
So, A, you had to worry about these kinds of inspection endpoints, which we're trying to get past with encryption, and B, if you look at a number of the pieces of TLS that were intended to be evolvable, we found they aren't as evolvable as intended, and we have to sort of fight with that.
And I think now we're learning how to build systems that do a better job in the face of endpoints not evolving as well as you'd hope.
And I think you see this in QUIC, where QUIC started out being incredibly monolithic, with its own custom crypto, and we're evolving that to use TLS 1.3 instead. We see that that's actually working quite well, and that's because the people who designed it had a good idea of how to build systems that could plug and play different pieces as needed.
So, I hope that in 10 years we won't be fighting with: oh, I want to roll out a new feature in QUIC or in TLS, and then 2% of endpoints blow up when I try to offer it.
Right. That's certainly my hope as well. I wanted to touch on the difference between HTTP/2 and QUIC, because HTTP/2 is now out there being widely deployed, and much of the promise of HTTP/2 was that it would give us higher performance for websites than HTTP/1.1, and it was also rolled out fully encrypted.
So what's the step up that QUIC is going to give us that we haven't got with HTTP/2?
There are directly two steps up, two separate steps.
One of them is that with the HTTP/2 stack, you have HTTP/2, TLS, and then TCP below it.
You get all the beautiful multiplexing and everything else that you get in HTTP/2, but then you serialize everything and shove it down one single TCP connection, one single byte stream.
With QUIC, we're able to eliminate the head-of-line blocking delays that can happen in TCP, because we maintain the parallelism across streams within the connection all the way through to the other endpoint, even at the transport level.
The second step up, again, as I was pointing out earlier, is the fact that TCP connections can be terminated in the network, and that people can step on TCP connections.
For example, do things like limit the receive window to limit a connection's throughput.
Things like that cannot be done in QUIC fundamentally just because the transport headers are encrypted.
And those are I think of them as two separate steps up.
In addition, QUIC also has this notion of a connection that is divorced from the standard 5-tuple that we have for TCP, which allows it to do things like multipath and use multiple network interfaces at the same time.
This allows us to do things like, if you're familiar with MPTCP, or Multipath TCP, things like that just off the bat.
So do you think in a 5G environment we might be sending packets over multiple cell towers as well using something like QUIC?
If the interfaces are exposed to the endpoint, then yes, certainly we could be doing that.
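The 5-tuple point is worth making concrete. A toy model (illustrative only; real QUIC connection IDs are negotiated and rotated) of why a connection keyed by an ID survives an address change while one keyed by a TCP-style 5-tuple does not:

```python
# A TCP connection is identified by its 5-tuple; a QUIC connection is
# identified by a connection ID carried in every packet.
tcp_conns: dict = {}   # keyed by (proto, src_ip, src_port, dst_ip, dst_port)
quic_conns: dict = {}  # keyed by connection ID

five_tuple = ("tcp", "10.0.0.5", 51000, "198.51.100.7", 443)
tcp_conns[five_tuple] = "session-state"

conn_id = b"\x8a\x21\x7f\x03"  # hypothetical connection ID
quic_conns[conn_id] = "session-state"

# The client migrates from Wi-Fi to cellular: its source address changes.
new_tuple = ("tcp", "172.16.9.2", 49000, "198.51.100.7", 443)

assert new_tuple not in tcp_conns   # TCP: the lookup fails, connection is lost
assert conn_id in quic_conns        # QUIC: same ID, the connection continues
```

The same property is what makes multipath plausible: packets for one connection can arrive from several interfaces, and the server still maps them to the same state.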
Eric. I just want to bring out one point that Jana had just made.
So as we roll out H2, we absolutely see performance improvements, but what we see is that in adverse network conditions, meaning links with a lot of loss, the multiplexing starts to really break down because of head-of-line blocking.
And so one of the huge advantages of a transport that isn't stuffed on top of TCP is that it'll be much more resistant and degrade much more gracefully as the network starts to degrade which is incredibly important in these developing nation environments and in mobile environments.
And we've actually seen evidence of that. We've seen QUIC really sustain itself through high-loss environments in remarkable ways, and it's definitely helpful that we're able to build new loss recovery mechanisms based on what we've learned in the past.
I think it's worth just describing for the audience a little bit what this head of line blocking problem is, right?
Let me just grab my microphone so this will work. If we think about the TCP environment, what's great about it is that it guarantees delivery of data.
It does that so if something gets lost, it will retry, and the data comes in order as well.
How does that create a problem? It sounds great. So that sounds fantastic, except the ordering requirement may be too stringent for applications that don't completely require it.
Specifically, HTTP is an application where, when a client speaks to a server and is downloading data, it's usually multiple objects, and there is no ordering requirement across these objects.
You may want to receive one object all the bytes for one object in order but not necessarily have the objects themselves ordered.
So if there's a loss of a packet that contains data from one object delivery of a subsequent object also gets blocked at the client because TCP doesn't know that these are two different things.
It thinks of this as one single byte stream.
QUIC allows you to have multiple streams within one connection, so when there is a loss in the network of a packet that contains data from, say, stream one, data that's being received over stream two within the same connection will still get delivered to the application.
So that basically avoids subsequent objects from being blocked behind an object that cannot be delivered right now because QUIC has an understanding of the structure within the set of requests that are going out.
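The head-of-line blocking difference Jana walks through can be sketched as a small simulation. With one in-order byte stream (the TCP model), a lost packet blocks everything behind it; with per-stream ordering (the QUIC model), only the stream that lost a packet waits. This is a toy model, not a transport implementation:

```python
def deliverable(packets, per_stream: bool):
    """packets: list of (stream_id, seq, lost). Return the (stream, seq)
    pairs that can be delivered to the application without waiting for
    a retransmission of the lost packet."""
    delivered = []
    blocked_streams = set()   # per-stream model: only these streams wait
    blocked_all = False       # single-byte-stream model: everything waits
    for stream, seq, lost in packets:
        if lost:
            if per_stream:
                blocked_streams.add(stream)
            else:
                blocked_all = True
            continue
        if blocked_all or stream in blocked_streams:
            continue          # stuck behind the hole in the byte stream
        delivered.append((stream, seq))
    return delivered

# Packet 0 of object/stream 1 is lost; stream 2's data arrives fine.
pkts = [(1, 0, True), (1, 1, False), (2, 0, False)]
assert deliverable(pkts, per_stream=False) == []        # TCP: all blocked
assert deliverable(pkts, per_stream=True) == [(2, 0)]   # QUIC: stream 2 delivered
```

Under loss, the gap between the two models is exactly the breakdown Eric describes for HTTP/2 over TCP in lossy networks.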
Great. Eric, I wanted to touch on TLS 1.3. Why have we done 1.3, and what's better about it than 1.2?
Right. So, it turns out, I'm always appalled to realize I've been banging away on this SSL/TLS stuff for like 20 years now.
So when SSL was first designed, 20-plus years ago, we didn't actually understand that much.
The people who did it were very smart, but they didn't understand as much as we do now about designing protocols.
And so some mistakes were made and also when people thought performance they really thought about CPU performance.
They didn't think about latency because latency was not the dominant cost of running any kind of encrypted connection.
And so what's happened is computers have gotten much faster, but the speed of light hasn't gotten any faster, and unfortunately we haven't figured out how to do anything about that.
So round trip latency has become much more important.
And at the same time, there have been a bunch of security problems with SSL and TLS that were sort of baked in from the beginning, but nobody understood how to deal with them at the time, or didn't really have the methodology to find them.
And so things have gotten much better, and it became clear about four or five years ago that it was time to do a really substantial revision.
And so we started looking at both the best academic practice on how to build secure protocols and the best ideas people had about how to make fast protocols, and tried to put them together into one protocol.
And at the same time, Google started working on QUIC, and so we took inspiration from there as well as some other pieces of work. The idea is to build the most modern low-latency protocol we know how to build and deploy it to as much of the web as we possibly can.
And it was really fortuitous, I think, that these could be developed at the same time, so we were able to bring those efforts together.
There was experience with this, and so we could certainly share that experience and learn from it.
What I think is exciting about this is that there's no compromise here in terms of speed or security.
We're going to have both of those things that's going to be the default.
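The latency point here is easy to put numbers on. Counting handshake round trips before the first byte of application data, and ignoring processing time entirely (a simplification; the 100 ms path is just an example of a long mobile route):

```python
def time_to_first_byte(rtt_ms: float, transport_rtts: int, tls_rtts: int) -> float:
    """Setup latency before application data flows: round-trip time
    multiplied by the number of handshake round trips at each layer."""
    return rtt_ms * (transport_rtts + tls_rtts)

RTT = 100  # milliseconds, e.g. a long mobile path

legacy   = time_to_first_byte(RTT, 1, 2)  # TCP handshake + full TLS 1.2: 300 ms
tls13    = time_to_first_byte(RTT, 1, 1)  # TCP handshake + full TLS 1.3: 200 ms
quic     = time_to_first_byte(RTT, 0, 1)  # QUIC: transport and crypto combined: 100 ms
zero_rtt = time_to_first_byte(RTT, 0, 0)  # 0-RTT resumption: no extra setup delay
```

Since CPUs got faster but round trips did not, shaving handshake round trips is where the real speedup now lives, which is the design pressure Eric describes.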
When do you think it's going to be socially unacceptable to visit a website that's not encrypted?
Isn't it already? It should be, if it's not. I think Chrome is making an effort to warn people that you're visiting one of these old-fashioned HTTP websites.
I think we're a ways away. Our telemetry shows that we're getting to roughly 50% of HTTP hits, or HTTP pages, being encrypted.
About 65-70% of transactions are encrypted because of things like Gmail and Facebook which are all encrypted and do a lot of back and forth even though it's only one page.
We're quite a ways from 100%.
I think what we'd like to see is a situation where it's embarrassing for you to launch a new property where anything substantial isn't encrypted.
I think one thing that we try to say as security people is: you may think you don't need encryption, but you really do, and it's hard to think of a threat model where you don't need it.
I sort of wanted to caveat something I said earlier which is this is a huge effort to build something this new and we're going to make some mistakes.
Don't think that we've solved all security problems for all time.
We're just trying to clean up the swamp and make things as best as we can do now.
We're learning what it means to do zero-RTT, and security with zero-RTT, and we're going to trip.
We have to be able to pick ourselves up. I want to come back to just one thing you said earlier which is the question about TLS and making it unacceptable.
I think that companies like Cloudflare play a very important role here in making it unacceptable for origin servers to actually serve plain HTTP.
I think that can be done if a number of people, and many of them might be in this room in fact, make it unacceptable for origin servers to serve that.
We've used up our time.
I know it's flown by because it's fascinating but we do have time for questions from the audience.
Are there questions? There's a microphone somewhere.
So one of the most frustrating aspects of the net can be the captive portal, which blocks your computer and all the programs which normally run on it when you happen to walk through part of an airport. And you say encryption will make it difficult for those captive portals to find a way in to get to you, to get you to click on the box and open things up.
How is progress on protocols to do the right thing?
Is the best practice for if you're designing, if you want to do the right thing in your airport, what should you do and how is that coming along?
Yeah. That's something we've not made as much progress on as we were hoping to.
The unfortunate situation appears to be that the captive portal vendors are not actually interested in cooperating with browsers or operating systems in allowing us to detect captive portals.
So all the operating systems, or at least, I believe, Microsoft's as well as Android, have detection of captive portals.
What they do is go out to some site whose response they know, and they try to talk to it.
Sometimes that works and sometimes the vendors actually explicitly whitelist those sites so they can show you the captive portal.
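The probe Eric describes works by fetching a URL whose normal response is known in advance (Android's check, for instance, expects an empty 204), and treating any other response as evidence that something is intercepting. A sketch; the probe URL is illustrative, and the decision logic is separated out so it can stand alone:

```python
import urllib.request

def looks_intercepted(status: int, body: bytes = b"") -> bool:
    """The probe endpoint should return 204 with an empty body; anything
    else suggests a captive portal rewrote the response."""
    return status != 204 or body != b""

def behind_captive_portal(probe_url: str = "http://connectivitycheck.example.com/generate_204") -> bool:
    """Fetch the probe URL and apply the check. URL is a placeholder."""
    try:
        with urllib.request.urlopen(probe_url, timeout=5) as resp:
            return looks_intercepted(resp.status, resp.read())
    except OSError:
        return True  # no connectivity, or the portal blocked the probe

# A clean network passes; a portal serving its login page does not.
assert not looks_intercepted(204, b"")
assert looks_intercepted(200, b"<html>hotel login</html>")
```

As Eric notes, this breaks down when portal vendors deliberately whitelist the probe endpoints, which is why detection remains unreliable in practice.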
The reason they do this is that they want to force you out of the restricted sandbox that the operating system puts you in, and into the captive portal.
[Audience question, partially inaudible.] I don't want to say that things are anything like perfect, or anything like what we'd like to have, but I don't think it's as bad as people seem to let on.
There have been a number of efforts along these lines; again, none of them is a complete solution, but they're pieces of solutions.
So on the client side, on the side of self-help, we have things like pinning, HPKP, or HSTS, which is sort of a halfway solution that partly stops people from accepting totally bogus certificates.
And on the systematic side we have efforts like certificate transparency.
So I think when you put those together, we're coming closer to a solution; there are probably some more iterations before we really know what to do and have a complete answer.
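Of the client-side mechanisms mentioned, HSTS is the simplest: a response header telling the browser to use HTTPS for this host for a stated period. A small parser for that header's directive syntax (a sketch of the header format, not a browser's full policy engine):

```python
def parse_hsts(header_value: str) -> dict:
    """Parse a Strict-Transport-Security header value into a dict of
    directives, e.g. max-age and the includeSubDomains flag."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        directives[key.lower()] = value or True  # valueless directive -> flag
    return directives

policy = parse_hsts("max-age=31536000; includeSubDomains")
# policy == {"max-age": "31536000", "includesubdomains": True}
```

Once a browser has seen this header over a valid HTTPS connection, it refuses plain-HTTP access to the host until max-age expires, which is the "partly stops people from accepting totally bogus certificates" behavior Eric alludes to.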
One thing I do want to say is any authentication system is going to have misissuances.
You just cannot be authenticating billions of people and not have fraud.
And this is a risk management issue about containment, it's not an issue about perfect security.
Okay, well thank you so much for coming and talking to us about these things, it's great to hear from people who are really rebuilding the guts of the Internet to make this stuff work.
Jana, Eric, cheers. Cheers.
They'll provide bits and pieces that now you have to kind of cobble together to build an amazing product.
Our focus now is: how do we simplify and streamline that by providing a deeply integrated, simple, and easy-to-use solution?
A big part of what we do at Cloudflare is as we focus on helping build a better Internet is take complicated things and make them simple.
And to enable them to just literally be able to go to Cloudflare, to log in, to point their video asset at Cloudflare, and then on the other end be able to pull a player out of Cloudflare and place it wherever they need to be able to deliver the video.
And that's it.
There's a classic triangle where you can do something either well, or fast, or cheaply.
And so we're striving for all three because we really need it. We need it to be really good because otherwise why would anyone use the service?
You got an entire Internet out there, use something else.
We need it to be fast because people have no patience.
And we need it to be cheap enough that we can stream to millions of users without it becoming uneconomical.
So you have to get all three and Cloudflare's a really important part of offering all three.
If you want to deliver a video to anybody on the globe, there really is no better network to put it on than Cloudflare because we can guarantee the highest quality experience to somebody who is in New York City and someone who's in Djibouti and someone who's in Sydney.
What is a bot?
A bot is a software application that operates on a network. Bots are programmed to automatically perform certain tasks.
Bots can be good or bad. Good bots conduct useful tasks like indexing content for search engines, detecting copyright infringement and providing customer service.
Bad bots conduct malicious tasks like generating fraudulent clicks, scraping content, spreading spam and carrying out cyber attacks.
Whether they're helpful or harmful, most bots are automated to imitate and perform simple human behavior on the web at a much faster rate than an actual human user.
For example, search engines use bots to constantly crawl web pages and index content for search, a process that would take an astronomical amount of time for any human user to execute.
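A minimal sketch of the "good bot" just described: a crawler step that extracts the words on a page and records them in a toy inverted index, the core of what a search engine automates at scale. (Illustrative only; a real crawler also fetches pages over the network, honors robots.txt, and rate-limits itself.)

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text nodes from an HTML page."""
    def __init__(self):
        super().__init__()
        self.words = []

    def handle_data(self, data):
        self.words.extend(data.split())

def index_page(url: str, html: str, index: dict) -> None:
    """Add each word on the page to an inverted index: word -> {urls}."""
    parser = TextExtractor()
    parser.feed(html)
    for word in parser.words:
        index.setdefault(word.lower(), set()).add(url)

index: dict = {}
index_page("https://example.com", "<p>Hello web</p>", index)
# index["hello"] == {"https://example.com"}
```

Run over billions of pages, this loop is exactly the task that would take a human an astronomical amount of time, and it is equally easy to point the same automation at scraping or spam, which is why the good-bot/bad-bot distinction is about intent, not mechanism.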