The Future of Mobile Content and What it Means for Publishers and Advertisers
Originally aired on February 24, 2021 @ 4:00 AM - 5:00 AM EST
Best of: Internet Summit 2016
Session 1 The Future of Mobile Content and What it Means for Publishers and Advertisers
- Brendan Eich - President & CEO, Brave Software
- Malte Ubl - Tech Lead, Google, The AMP Project
- Michelle Zatlyn - Co-founder and COO, Cloudflare
Session 2 What Can We Expect from the Internet in 2020?
- Ilya Grigorik - Web Performance Engineer, Google
- John Graham-Cumming - CTO, Cloudflare
Transcript (Beta)
🎵 Upbeat Music 🎵 Hey!
So, today has been great, but the one thing missing is disagreement.
Everybody has agreed on everything, and that's a little bit boring.
And so I hope that we change that right now.
Although Brendan and Malte promised me no fistfights, so that's good.
So I'm going to start to my far left, introducing Malte. Malte has been at Google for six years.
He worked on something called Google Photos that you may or may not use.
And the people who use it absolutely love it. I see people talking about it all the time, so congratulations on that.
And for the last 13 months has been the tech lead for a new project, a new initiative at Google called AMP that we're going to learn a lot about today.
AMP stands for Accelerated Mobile Pages, and we're talking about the future of mobile.
So very excited to have you here, Malte.
Thanks for having me. Brendan, well, way back when, he worked at a company called Netscape that you may have heard of.
And while he worked at Netscape, he literally created JavaScript.
So he is the creator of JavaScript. For any of you developers who love JavaScript, you can thank Brendan and afterwards run up and find out the whole story behind it and its name and whatnot.
But that doesn't end there.
Then he went on to found Mozilla, another company you probably have heard of.
And most recently he has gone back to the startup world after a long time at Mozilla, as CTO, founder, and, for a brief period of time, CEO.
He started a new browser company called Brave, and they are about a year old.
So he's trying to create the next generation browser, which is a really hard thing to do.
So welcome, Brendan.
Malte and Brendan know each other because Malte is also a big JavaScript expert.
And so they've known each other for many years. But they kind of have different points of view on the topic today.
So welcome. Welcome. So I thought we'd start by talking a little bit about how the mobile world has changed, just to set the context the last five years.
Go first. Yeah. So we're talking about content, and there's definitely the emergence of, like, social platforms driving a lot more traffic than they used to.
And there's billions of new users across the world.
And Ilya touched on this earlier. There's also been non-change. So, you know, we all have, like, 4G phones, but not everyone has a 4G connection.
Some things stay the same, and latency is still a big thing. And so websites are too slow, apps feel too slow, and that's kind of the thing we're looking at.
Brendan, what's changed for you for mobile the last five years? I used to be on a Gen 1 iPhone for too long.
And it got really slow. But also at Mozilla, we were from the era when the PC was big.
So I think five years ago is when we actually started seriously doing mobile work, and websites still need to do a better job because, like Malte said, there's too much latency, there's congestion, there's packet loss, and it's painful.
Okay. So, Malte, tell us a little bit. What is AMP? What is your mission at AMP, and what do you guys do?
Yeah. So we started a year ago, and basically what we felt was the mobile web really is too slow, and we have to do something.
And when I say we, obviously I work for Google, but we early on, like, reached out to publishers, who obviously produce many of these pages, and then we were talking to them, like, what can we do together?
And AMP kind of grew out of this.
What it really is, like, at the very low level, it's a web components library. And it has one element that I think is pretty innovative, which is a validator that you run, and then it tells you you actually have a valid AMP page. And if that is the case, then we can kind of guarantee that it's going to be reliably fast.
I mean, you can still mess it up, but it's very likely to be fast. And what that solved, I think, was this issue that since about 2009, we kind of know how to make fast websites, but we then tried to educate people, and that kind of didn't work.
And so this gave it scale in that now everyone who is an engineer knows how to run a compiler, and the compiler gives error messages, and I just fix it.
That's my job. And that's how AMP works. So it becomes really easy to do that.
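The validator idea Malte describes can be illustrated with a toy sketch. To be clear, this is not the real amphtml-validator; the rules, field names, and thresholds below are simplified assumptions, just to show how mechanical rule-checking can stand in for performance education:

```javascript
// Toy sketch of an AMP-style validator: enforce a fixed set of rules that
// are known to keep pages fast, and report actionable errors like a compiler.
// NOT the real amphtml-validator; all rules here are illustrative.
function validatePage(page) {
  const errors = [];
  // Rule: no arbitrary author-written <script> tags (only the framework runtime).
  if (page.customScripts > 0) {
    errors.push("Custom <script> tags are not allowed");
  }
  // Rule: images must declare width/height so the layout never reflows.
  for (const img of page.images) {
    if (img.width == null || img.height == null) {
      errors.push("Image missing explicit width/height");
    }
  }
  // Rule: all CSS must be inline and under a size budget.
  if (page.inlineCssBytes > 75000) {
    errors.push("Inline CSS exceeds size budget");
  }
  return { valid: errors.length === 0, errors };
}

// A page that follows the rules validates cleanly...
const fastPage = { customScripts: 0, images: [{ width: 600, height: 400 }], inlineCssBytes: 12000 };
console.log(validatePage(fastPage).valid); // true

// ...while a page with blocking scripts and an unsized image gets error messages to fix.
const slowPage = { customScripts: 2, images: [{}], inlineCssBytes: 12000 };
console.log(validatePage(slowPage).errors.length); // 2
```

The point is the workflow: an engineer runs the checker, fixes the errors it prints, and the page is very likely fast, with no performance expertise required.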
And the final thing that we don't talk about that much because there's such a performance story, but there's a very interesting aspect.
So AMP is about putting content out there and let it be displayed inside of apps, inside of platforms in a way that retains monetization control for the publisher.
And what it really does, it gives an API to a website, a way for a platform to interact at runtime with a webpage, and that actually opens up a lot of possibilities that haven't been used all that much but that I'm really excited about.
So just for those of you who don't know exactly what AMP is, take your cell phone out, okay?
And if you open up the browser, you go to google.com and type in something like Hillary Clinton, and you scroll down, you're going to see a carousel come up, and you start to click on these and the pages load super fast.
You can start scrolling through.
This is AMP. You see down in the corner there's a little powered by AMP amplified symbol, but this is what you and your team are working on.
Right. And we're seeing basically like about under a second load time as opposed to about 19 seconds on other webpages.
So it's a very, very, very significant improvement.
Yeah, wow, 19 seconds to a second. I wouldn't have imagined that, but it works.
That's great. Okay, so now that we know a little bit about AMP, Brendan, tell us about Brave.
Why are you creating a new browser? I mean there's a couple of good browsers out there.
A few. I did this already, and over time browsers tend to get big and old.
It happens to all of us. They tend to... Not me, not me.
They tend to also have their owners take their eye off the ball, in my opinion, unless it's a rare bird like Opera for a time or Mozilla when I was there.
Usually the owning business has other agendas, like advertising or operating systems or making shiny devices, just to pick three random examples.
And that's a problem because the browser quality starts to drop over time. So Brave is based on Chrome, but we take out all ads, third-party ads, and trackers by default because the system has become parasitic and toxic.
We realize that that hurts publishers, so we are working on ways for Brave users to give money to publishers anonymously and in small amounts.
And we also proposed, we haven't done this yet, a better way of mediating third-party ads, which already are done through script, JavaScript, my fault, on the page.
It was totally accidental, right?
There was the cross-site image, which I think Tim Berners-Lee was against, but Marc Andreessen and Eric Bina just said, oh, let's put it in Mosaic.
And they did it.
It was great. You could hot-link to your friends' cat pictures. Why not?
Cross-site. And then in Netscape 1, there was the first-party cookie, for caches or logins, so you didn't have to log in all the time when you restarted your browser or even went back and forward in history.
Just the two of those together made third-party tracking.
You put a pixel on ESPN, you put it on the New York Times, it sets a cookie, it checks the cookie first, if it sees that you've been to ESPN, it knows you went to the New York Times.
This was totally accidental evolution, and it led to, with JavaScript pouring fuel on the fire, this whole ecosystem of third-party advertising, which is now conveying malware onto top publisher sites.
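The cookie mechanics Brendan walks through can be sketched in a few lines. This is a toy simulation, not any real ad-tech code; the `Tracker` class and the domain names are purely illustrative of how one third-party pixel joins visits across sites:

```javascript
// Toy simulation of third-party tracking: one tracker domain has a pixel
// embedded on many sites. The browser sends the tracker's cookie with every
// pixel request, so the tracker can join visits across sites into one profile.
class Tracker {
  constructor() {
    this.profiles = new Map(); // cookie id -> list of sites where the pixel fired
    this.nextId = 1;
  }
  // Called when the browser fetches the tracking pixel from a page on `site`.
  // `cookie` is whatever cookie the browser already holds for the tracker domain.
  servePixel(site, cookie) {
    const id = cookie || String(this.nextId++); // first visit: set a fresh cookie
    if (!this.profiles.has(id)) this.profiles.set(id, []);
    this.profiles.get(id).push(site);
    return id; // the Set-Cookie value the browser stores for the tracker domain
  }
}

const tracker = new Tracker();
let browserCookie = null; // the user's cookie-jar entry for the tracker domain

// The same pixel appears on two unrelated publishers:
browserCookie = tracker.servePixel("espn.com", browserCookie);
browserCookie = tracker.servePixel("nytimes.com", browserCookie);

// The tracker now knows a single user visited both sites.
console.log(tracker.profiles.get(browserCookie)); // profile: espn.com, nytimes.com
```

Neither publisher shared anything directly; the cross-site join falls out of two unrelated features, cross-site images and cookies, composed together.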
So Brave is trying to stop that, but provide a way for publishers to be economically whole.
And we actually have a bigger vision, which is that you should own your own data.
Users aren't paid for their data right now. They're the product.
If you look around the poker game and you can't spot the chump, you're it.
If you look around and see who's not getting paid, that person's the product.
We think you should get paid. It's not that money is everything. It's that it at least shows that you're considered to have bargaining power.
And the only way to do that is to block trackers.
So Malte, you work for Google, which makes a lot of money selling ads.
A lot. A lot. Most. And so then we have Brendan here who's saying, well, ads are bad.
I'm going to block the ads. The future of the Internet is no ads.
And what do you think? So I agree with Brendan that there is a problem with display advertising on the Internet.
And I want to expand on the historical perspective.
So most of the ad tech you see today was created in the late 90s, early 2000s.
And everything that we've done since is essentially backward compatible and still fulfilling those old constraints.
And there wasn't someone in 1999 who was thinking, let me start by designing a security model and everything goes from there.
They just didn't do that. They didn't anticipate that being such a great thing.
And so I kind of agree with the assessment in general.
But what I'm actually very optimistic about is that if you go in 2016 and say, how would we actually design this if we started today, that you could build something that wouldn't have these problems.
So for example, Brendan mentioned malware.
No one wants to read in the paper that there's malware on their site.
And there's no actual technical reason that you can't have advertising on the web without also distributing malware.
Those actually don't have anything to do with each other.
So I think we can actually get to a very, very good state where we finally have to make the leap of breaking backward compatibility and really make a big jump forward.
So you think that there's a security vulnerability in ads, but if we fix that, then users online will be okay with content next to ads.
Brendan, what does that mean for your company? I'd like to see it happen, but I don't think it can.
I think the web is an open system. Nobody owns it.
And the reason publishers use third parties is because they want to aggregate audience.
Even the big premium publishers don't know who you are just from the context of the page around the ad or from your browsing on their site alone.
So they really do want third parties to track you and build profiles and try to predict and guarantee audience and yield.
And ad tech is this sort of constant hustle where everyone's saying, oh, yes, we've got the latest header bidding or header bidder wrapping or online behavioral advertising we're supposed to do it too.
Yields were supposed to go up, but they didn't. Money just moved around to a different pocket.
And the system as a whole had these negative externalities, like malware, driving people to ad blockers. You could be a good person who converts on ads and helps publishers make money, but suddenly you get a bad ad and you decide, I'm going to use an ad blocker. And now you've opted out of the system and you're not supporting any sites, even the ones you like.
This seems like a hard problem to solve. Google doesn't control the whole web.
Microsoft, Apple, nobody does. And there are a lot of publishers out there still buying ad tech every year and buying this hype.
So I think this is going to be with us for a while.
And so what about apps? I mean, where does apps fit into the whole picture?
Where, you know, what if you just don't need a browser and you don't need AMP because I just get everything from my apps?
So I think people have described apps as these custom, special purpose web browsers.
And, like, it doesn't really make sense to read the New York Times in an app.
They also have a web page.
It does make sense to read the New York Times in the app you're already using.
And so I think what I'm really excited about is this going from, like, web versus native into, like, an integration between the two because people will, like, they are going to use these apps, but the content, I mean, you know, HTML turns out to be a pretty good way to encode content.
And, you know, if you just re-invent it, you can probably arrive at something, like, either very similar or having just different problems.
So I don't, like, I don't really buy that there's, like, either or.
Apps are a reality. They're great. But they serve a very specific, like, purpose and content works well on the web.
And I think we'll continue to do so.
I was looking at a comScore report on 2015: desktop, mobile websites, mobile apps.
And all three were growing. On desktop, the only real app is the browser.
So the browser is still growing. It's not dying. Desktop, which means laptops for a lot of us, is actually growing, though it's slowing.
And mobile is hot, and browsers on mobile were still growing too, according to this comScore report.
There's the big browser no one talks about.
It's not a great browser, in my view. Facebook has an embedded web view. A lot of people are reading articles in it.
So I think, I agree with Malte on this. We're probably, like, brothers in arms, right?
I'm using Chromium code. It's all great.
And we want progressive web apps that use, you know, service workers. We want things that look like native apps.
We want to blur the line, erase the line. So both of you spent a lot of time talking to publishers.
When you go talk to publishers about what's coming over the horizon in terms of mobile, what do you hear from them?
Why aren't they using AMP more?
I mean, I don't hear from them that they don't want to use AMP.
Like, what we hear from them is, obviously, they see challenges in their monetization.
They obviously have problems, like, publishing to all these different platforms.
And so, I mean, I can't really help them with that. Like, here's another one, I'm sorry.
But on the other hand, we've, like, definitely, I mean, on the other hand, we ask them, like, what are the problems you want to solve and try to, like, together with them come up with solutions.
And one thing we didn't mention is that AMP is an open source project, and we work tightly together with the people using it to make content, to actually change it to be great for them.
Brendan, what do publishers say to you?
So, I know some are using AMP, too, but I remember reading about the Daily Beast, which flirted with Facebook Instant Articles and AMP and then didn't use either.
And I've talked to publishers, a lot of them, maybe it's they get captured by their ad tech vendors, but they want to have things that AMP doesn't support.
They want header bidding or header bidder bidders or wrappers or whatever this latest nonsense that's being sold is.
They really think that they're going to get more yield from it and maybe for a time they do.
So, I know publishers who generally, even if they're big and have a tech team, are a little bit easily bamboozled, frankly, by their vendors.
And part of that is this constant third-party sales cycle of, try the new thing for better yield, which means you're putting another script from a different domain on your page. You're probably leaving the old one behind, or your contractors are, because you didn't have the tech team; you hired some contractors and they're afraid to touch it. Which is why some sites have integrations left over that slow them down.
So, I don't want to be too mean to the publishers, but I do think there's a problem of specialization where a lot of publishers just don't have the tech strength to really optimize things.
To put it in a positive light, what I'm actually seeing is that there's the engineers working for these publishers and they want to do a great job.
And we're talking and what we realized is there's what I call, and I think this is not the official way to put it, the middle management problem, right?
So, I have this ad tech thing on my site or this like interstitial and it makes me this much money, right?
And then I realize maybe it's not such a good idea, I turn it off and then, oh, I'm making less money, shit, let's put it back, right?
And so, you need obviously like upper management support at these companies to go over this like hill where you agree, we actually have to make substantial changes that are going to come back positive in the long term and we need to be able to live with short-term revenue impact.
It's the global optimization problem.
You climb the hill and you like the weather, but there's a flood coming that's going to kill you.
And you don't want to go down and walk across the plains to get to higher ground.
You just feel like every step in the wrong direction hurts your bottom line in the near term.
And so, yeah, not to bust on the publishers, a lot of them are stuck on a hill.
Got it. Okay, just changing gears for a second. So, you both like JavaScript a lot?
Maybe a little? It's 21 years old. It's out of the house.
It gets cut off.
It's on its own. So, there's a lot of developers, computer science students who are tuning in by live stream, a lot of people here who want to learn, you know, what programming language should I really jump into and sink my teeth into?
What words of advice do you have to a budding developer? Is JavaScript a language that they should learn to become experts at?
I mean, I typically say, I get that question asked a lot.
I usually say yes. There's all these things you can do with it.
You know, it runs on robots, drones, and rockets. Apparently, NASA is using it for their spacesuit now.
And then it's in, obviously, there's a great technology called React Native to build native apps.
So, it's going all these places.
On the other hand, if you're in a code camp today and you just want to get a quick job, maybe you go for Java for Android or C#, but it's definitely a great option.
I definitely still enjoy it, even though it's now drinking age in America.
Yeah, it started early. JavaScript also has these static checkers.
Like, AMP is cool. I think this is a good thing about AMP.
You can actually run this checker and it'll make sure it doesn't get slow.
JavaScript has TypeScript, Flow, other languages that I could name, long list that compile to JavaScript or that add a little value by adding optional annotations or types to give you warnings.
And that's powerful. I love languages. We're in sort of the second golden age of programming languages.
I was the executive sponsor of Rust at Mozilla, which is independent.
Now Dropbox is using it. And Rust is more like a safe C++ than anything.
So, it depends on what you're doing.
If you're writing systems code, you want C, C++, Rust. If you're writing something for a spaceship or an ad or a browser or a server, you might want JavaScript or something like it.
And JavaScript is getting to the point because it's so widely distributed that it will become the compile target or the go-to runtime for a lot of languages.
And that's called WebAssembly. It's a second way of loading code faster.
Brendan, I love that you used the words rocket ships and ads in the same sentence when describing the power of JavaScript.
That's quite clever. Yes, exactly.
You know, we use a lot of JavaScript here. We have a big team that works on it.
We also use a lot of Go, developed by one of your colleagues, Rob Pike at Google.
Malte, what do you think about Go? Yeah, I'm personally not a fan. How come?
Compare, contrast. Again, there's a lot of people in the room that probably don't necessarily know what's good or pros, cons.
Compare, contrast in your personal opinion.
Yeah, I think Rob is a very opinionated person. I think he has the wrong opinion about error handling.
But, you know, in reality, it doesn't matter all that much at the language level.
You can get, you know, pretty productive in all of them as long as there's curly braces.
Which I think is, like, well, basically you need curly braces, otherwise no one's going to use it.
So if you're a language developer, that seems to be a fundamental property.
Okay, I have one more question, and then I'm going to go to questions for the audience.
So start thinking if you have some. And we'll definitely continue the debate internally here at Cloudflare of JavaScript versus Go.
I think that'll be some fun internal conversations.
So the Internet we know today definitely has advertising on it.
And there's some companies who made a lot of money doing that.
What does it look like over the next two to three years? What's over their horizon?
Fewer ads.
I think, here's what I think. The smartphone is such a category killer. It's unlike going from mainframes to minicomputers or PCs, which was just a scale-down.
But smartphones, like, everybody has one.
It's an action device. It's in your pocket.
You better trust it. And it doesn't have enough screen space to really want a lot of ads.
So I think it's just not a great display vehicle, let's say. And that's why you're seeing full-screen videos and auto-collapsing videos.
Silent movies are back because they mute.
I think Facebook's trying to make them play sound.
That's going to be terrible. So there's a problem that advertising is always trying to shotgun your attention across all the sites you might visit.
Online advertising.
I actually think Google now is going in a better direction. Make something that can predict the information you need when you need it and not bother you otherwise.
I think that would be a better direction than what we think of as advertising.
Malte? I think I alluded earlier to the fact that I'm very optimistic about going about it in a principled, technical way and trying to solve the problems.
I posted this blog post the other day titled What About the Ads?
Because people are always asking me, hey, Malte, you make content fast with AMP, but aren't the ads at fault?
I would say it was really hard to change them, and people wanted them.
So we had to support them. So basically, AMP initially went this way of mitigating the issues.
But then we realized, and this is one of those moments where you build some technology and afterwards it seems it's better for something else.
You can argue AMP being a good or bad thing, but it seems such a perfect technology for advertising because you have these pure atoms of stuff that you can reason about what they do.
Currently, third-party advertising, they run some random people's JavaScript on your site.
You don't really know what's happening.
So if you just stop doing that and have something that you can reason about and we can say, I have certain invariants about how it interferes with user experience.
I can, for example, say it's always more important that I can scroll this page than it is important that this ad shows me video.
And if you can reason about it in this way, you can, I think, find a better compromise where you have the unquestionable benefit of free content that's paid for by advertising while still having a great user experience.
I think there is a compromise to be found that we're not necessarily meeting right now.
And it's definitely worth trying to do a much better job.
So here's the rub. I gave you grief on Twitter about this.
You've integrated like 50 or 60. I lost track. It's on a GitHub file.
You can go read it. It's like a bunch of different third parties that are allowed to be AMP third parties.
And they can do tracking still, right? They can build a profile of the user from site to site.
That's their business. So I mean, as I said, AMP right now supports legacy ad tech in a way that reduces the effect on user experience.
So for example, everyone knows this effect where you go to a web page, and it's really slow.
And you start reading. And then, boom, everything's down because there's an ad popping up, right?
And so we can guarantee that this doesn't happen.
We don't control, and we don't want to control what the ad in particular is doing.
There obviously are some differences in that the ad is never a first party.
If you know about security systems and browsers, that does make a difference.
But otherwise, yeah, I mean, we're trying to find a compromise where we get better on security.
We get better on user experience while retaining existing monetization.
Because you can't just go out and say, we don't really have the new system yet.
But you all have to stop making money now for two years. It's not going to be very good for the publishers.
And unrealistic. Yeah, that's happening, though, with ad blocker adoption.
There's a real race here to the bottom, right? The more people get ad blockers, the publisher has a smaller segment left to advertise to.
And sometimes the ad targeting gets worse, and it just drives them down to ad blockers, and the quality goes down.
I appreciate the ecosystem problem, as I call it.
And I think it needs to be solved systematically. The problem is, users, if they have a right to their own data profile, their own attention, they have to hold on to that data, or they won't have any bargaining power.
And that's why Brave is blocking third parties.
Even if everybody were using AMP, and it was all nice and fast, I think we would still be trying to give the user leverage over their own data.
I have a lot of questions about that. But I want to go out to the audience.
Does anyone here have any questions for Malte or Brendan? How has the standardization of JavaScript contributed to either making it a more mature or a faster-moving technology?
Standards are slow. I still work on the JavaScript standards body, Ecma TC39.
We're meeting at Facebook in two weeks. Or Netflix, I forget which.
Netflix, then Facebook. We've got everybody. We've got Airbnb, so we've got a unicorn on board.
We have all the browser vendors. I'm pretty sure Netflix and Facebook are both unicorns, too.
JavaScript's getting decent, right?
It's got lots of nice affordances, and it's getting stuff that people have to use tools for right now, building the language.
But I think the level of discourse that AMP works at is really more in the W3C.
And Brave just joined W3C, but I haven't had time in a long time for it.
We had to actually go around it back in 2004 to restart HTML5.
We went and formed the WHATWG. So I have a somewhat skeptical view of standards bodies.
I think what we need is winning innovation and the ability to, this is the good part of the standards bodies, donate what might be intellectual property rights to the standards body.
And then you get something everybody can share.
It's like AMP could be standardized. Well, so I think the concept of the extensible web, which actually goes directly here, right?
So you do low-level things, and then applications are built on top, right?
So AMP, people ask about being standardized.
It doesn't really make sense. Like jQuery and React don't standardize.
They're JavaScript libraries, and we're a JavaScript library.
It's an application layer thing, which doesn't really make sense to standardize because it's just, you know, that's what applications are.
What about semantic markup for ads?
That was something that we passed up a long time ago, and people were doing either, you know, iframes.
They don't do them much.
They do images or divs or script on page. Sometimes you can't tell. Right, I mean, obviously, AMP is using web components, which add a layer of semantics that is not standardized. And obviously, just because you say something is something, that doesn't mean it's true. But in many ways, you can say, like, here's a carousel, or these are tabs, right?
So there's definitely a layer of semantics. I mean, one of the reasons why AMP can do better in the presence of ads, for example, is we know there's an ad, right?
If you just load someone's JavaScript file and they can do whatever they want to your site, then they don't know there's an ad.
It's also hard for browsers to slow it down.
Like, for example, in a third-party iframe now, all the browsers kind of say, that's not as important as the main page, we're going to slow it down.
But if the ad runs on the main page, it's really hard, right? Because at least AMP knows there's an ad, it can deal with it, as opposed to, like, knowing nothing, basically.
Yeah, I think that would be good to standardize. I don't mean all of AMP, but the good parts, right?
Like Crockford said. AMP, the good parts.
I promised disagreement. It's definitely good to, like, obviously, just try things and say, like, what are the things that turn out to actually be a good idea.
Oh, here's another one. What about, isn't Google trying to make scripts that do document.write be penalized somehow?
Oh, I mean, they launched a feature.
If you're, like, document.write is a technology where you can essentially delay loading a page forever and load an ad on the way.
I did it. It turns out to be, well, not the greatest idea.
It's used by Google. So, well, we shouldn't get too deep there.
Basically, what Chrome is doing is saying, if you're on 2G, we're not going to do it.
We just stop. Like, it's called an intervention where you purposely violate the standard by saying, this is better for the user because this technology can lead to page load times in hundreds of seconds.
It's obviously better to load it in 10 seconds because otherwise they wouldn't have finished loading anyway.
And it's now being rolled out from 2G only to 2G-like.
So, if you're, like, on a bad Wi-Fi, you would get the same benefit.
So, I think, I mean, that's actually... By the way, I totally agree. And if you just take that argument and change document.write to third-party ads, same argument.
I'm with you. To be honest, like, this is literally how AMP works.
Third-party ads slow you down dozens of seconds. So AMP strictly loads ads after the page is already done.
So... The content's there. It's kind of like that.
So, the ad, we can basically tell you page load time will never be negatively impacted by the ad.
Right? I think that's a very nice promise. And, sure, like, there's still, like, I mean, that's...
Obviously, that's why we're working on it.
We haven't solved all the problems. But I think that's a very nice first step.
Like, progress over the status quo. But you said 19 seconds down to a second.
Yeah. Right? That's a big improvement. And I totally agree. But, like, doing things like Chrome is doing for 2G connections, breaking the standards a little bit, fits within, I think, the ambit of web browsers serving their users.
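The loading discipline in this exchange, content strictly before ads, can be sketched as a toy scheduler. This is an illustration of the ordering guarantee only, not AMP's actual scheduling code; the task shapes and names are assumptions:

```javascript
// Toy sketch of "ads never delay content": content tasks always run first,
// ad tasks strictly after, so time-to-content cannot be pushed back by an ad.
// Illustrative only; not AMP's real scheduler.
function schedule(tasks) {
  const order = [];
  // Partition by kind: content renders first, ads strictly afterwards.
  const content = tasks.filter((t) => t.kind === "content");
  const ads = tasks.filter((t) => t.kind === "ad");
  for (const t of content) { t.run(); order.push(t.name); }
  const timeToContent = order.length; // the page is fully readable at this point
  for (const t of ads) { t.run(); order.push(t.name); }
  return { order, timeToContent };
}

const result = schedule([
  { kind: "ad", name: "banner", run() {} },       // submitted first...
  { kind: "content", name: "article", run() {} }, // ...but content still wins
]);
console.log(result.order); // order: article first, then banner
```

With that invariant, a slow or misbehaving ad can degrade only itself, never the time until the reader can scroll the article.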
Last question. Sir Tim. Oh, I'll just get you a microphone, Dina. Thank you.
That's what I heard.
Andreessen told me, right? We just wanted a different markup. Okay.
And the NeXT text objects didn't make it easy to put images in for a few releases.
But what I was going to talk about with the mic is payments.
And if your creative person isn't going to get paid through advertising, and I must agree, I think there are actually a lot of people, they may not be in this room, but there are a lot of people out there who feel that advertising is really decreasing the value of their lives, and their kids' lives as well, and they would love to bring up their children without their lives just being part of the advertising.
So there are a lot of people I think who would follow you.
But you have to get money to the creators.
There's web payments. Hopefully with web payments, maybe people get more creative with W3.org slash payments, just saying.
So hopefully, as we get more standardization there, the whole experience of paying for things will mean that maybe people will get creative about smaller amounts: paying in smaller quantities, randomly distributing tips.
So certainly I think it's good for people to think in general about alternative ways of getting money to the creative people, the people who have played the songs and written the songs.
And so, because that's important: Brave 0.12.1 is out today. It has Brave Payments.
You can automatically and anonymously micro-tip your top sites.
You don't have to think about it. And we think anonymity is important.
That's hard to do today with money and Bitcoin. We think micro is important.
People don't want to pay a lot per good article. And the fees, even on the Bitcoin blockchain or the interchange charge from your credit card will kill you.
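The fee arithmetic Brendan is alluding to is easy to check. A quick illustrative sketch; the $0.30 + 2.9% fee schedule below is a typical card-style rate I am assuming for illustration, not any specific network's:

```typescript
// Why fixed fees kill micropayments: a flat fee plus a percentage
// consumes most or all of a small tip, but is tolerable on a large payment.
function feeShare(
  paymentUsd: number,
  fixedFeeUsd = 0.30,   // assumed flat per-transaction fee
  percentFee = 0.029,   // assumed percentage fee
): number {
  const fee = fixedFeeUsd + paymentUsd * percentFee;
  return fee / paymentUsd; // fraction of the payment eaten by fees
}

console.log(feeShare(0.05)); // a 5-cent tip: fees are ~6x the payment itself
console.log(feeShare(20));   // a $20 payment: fees are ~4.4%
```

The flat fee dominates at micro scale, which is why anonymity-preserving, batched, or on-ledger micro-tipping needs a different fee structure entirely.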
You don't want to give your credit card out. We joined W3C mainly to, I think, work on web payments.
Though, you know, Yan Zhu works for Brave, and she was on the TAG.
So maybe she'll come back. So, you know, I feel like my mobile phone has made me a much more productive person.
And I think a lot of people in this room share that sentiment.
And so we have to find a way to continue to do business online and have the creatives put content online and still make sure that people get paid for their work.
And so thank you for all the work you're doing and the debate and the conversation.
And looking forward to see the results of your efforts. Thank you. Thank you very much.
All right.
Okay.
So after that couple of talks, we're going to get a little bit more technical. I've got with me right now Ilya Grigorik, who's from Google.
And he's very active in the world of web performance.
He's chairing the W3C's web performance group. And he basically wrote the book on web performance.
So I have the book here. So anyone who's got this will know all about web performance.
Ilya, I wanted to ask you, I actually wanted to tell you a story.
Last week I was in London and a friend I've known for years who I'd never seen flew over from Australia.
And I said to him, how's it going being in London?
And I expected him to say something about the weather.
And he said, the Internet is so much better here. And why is that? Why is Australian Internet so poor compared to the U.K.?
It's nothing to do with policy, is it?
Well, I guess it depends on exactly what he meant. But my guess would be, and you can tell me if I'm right or wrong, a lot of it has to do with latency.
And the punchline there is physics, damn it.
So we can talk about, I guess, more about why that's the problem.
And I'm actually curious. What was his experience? So his experience was that websites were snappy.
They came up like this. And so the problem is that Australia is far away, right?
Yeah. So fundamentally, physics, damn it. Speed of light.
Pretty fast, but not fast enough. And unfortunately, we haven't figured out how to make it any faster.
So if you actually look at the distances and you say, well, I'm sitting in London or, say, New York, and I need to travel to Sydney.
Even if you take the fastest path between those two points, one way it's roughly on the order of like 60 to 70 milliseconds.
Now, you're not going to find any cable that's going to be a straight path between those cities.
So you're going to bounce around, add some latency there.
Also, we can't actually reach the speed of light.
We're within a constant factor, 1.2 to 1.3-ish.
You multiply all that out and that round trip to just say, like, hello is 150 milliseconds.
And it turns out that humans can perceive change on the order of 200 milliseconds, right?
So you have to be incredibly responsive. And if you happen to live in the U.K.
and you're contacting a U.K. server, that's a huge difference. And I'm guessing that's exactly what he observed.
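Ilya's back-of-the-envelope math can be sketched out. This is purely illustrative; the roughly 17,000 km London-to-Sydney distance and the 1.3x slowdown factor are taken from the discussion above:

```typescript
// Round-trip estimate following the reasoning above: straight-line
// distance at the speed of light, times a constant slowdown factor
// (cables are not straight, and light in fiber is slower than in vacuum).
const SPEED_OF_LIGHT_KM_PER_S = 299_792;

function roundTripMs(greatCircleKm: number, slowdownFactor = 1.3): number {
  const idealOneWaySeconds = greatCircleKm / SPEED_OF_LIGHT_KM_PER_S;
  return 2 * idealOneWaySeconds * slowdownFactor * 1000;
}

// London to Sydney is roughly 17,000 km great-circle:
console.log(roundTripMs(17_000).toFixed(0)); // "147" (matches the ~150 ms figure)
```

No amount of bandwidth changes this number; only moving the server closer does.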
Right. I think he was probably contacting websites that were hosted far away from Australia.
And what's strange, though, is that, okay, so it's 150 milliseconds or whatever, and we can perceive that.
How does that add up into a problem when I'm browsing a website? Well, it's not just one round trip, right?
So you send a, like, I want to see this page. The server has to do a bunch of stuff, calculate, compute what it wants to present to you.
And then it sends back basically a manifest of things saying, like, here's the text that you were looking for.
And by the way, there's also this beautiful video that you want to see.
You need to fetch this other information for the JavaScript that needs to run, the CSS, all of that needs to compose.
And by the time you fetch that JavaScript, that JavaScript file says, I need to also fetch this other thing.
And then, like, ten layers deep, you're finally resolving that dependency graph.
And two seconds or three seconds has passed, and you finally see something on the screen.
And that's kind of the fundamental challenge of building efficient applications on the web today.
It's just understanding that graph, making it as shallow as you can, and maybe narrower in terms of the amount of data that you have to fetch.
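That dependency-graph observation can be put into a toy formula. The depth-times-RTT model below is a simplification added for illustration, not anything Chrome-specific:

```typescript
// Toy model of the resource dependency graph described above: each level
// of the graph (HTML -> JS -> more JS -> ...) costs at least one round
// trip, so load time grows with the graph's *depth*, not just its size.
function minLoadTimeMs(
  graphDepth: number,
  rttMs: number,
  serverTimeMsPerLevel = 0, // optional per-level server compute time
): number {
  return graphDepth * (rttMs + serverTimeMsPerLevel);
}

// A shallow graph on a nearby server vs. "ten layers deep" from Australia:
console.log(minLoadTimeMs(3, 20));   // 60 ms
console.log(minLoadTimeMs(10, 150)); // 1500 ms
```

This is why flattening the graph (inlining, bundling, server push) often beats raw bandwidth improvements.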
And now over time, right, we've gone from sort of 28.8 kilobit modems to 56.
And now maybe from San Francisco, you've got a very, very fast, big, fat connection to the Internet.
At the same time, websites, at some point they got usable.
They were unusable at some point.
And then they seem to have stopped getting faster. What's happened there?
Why aren't they getting faster? Good question. So networks are definitely getting faster, right?
That's a given. I think our expectations have obviously changed as well.
The websites, back when we loaded them in 56K modem, were much lighter.
But we didn't expect rich video and 4K video and 60 frames per second and super high-resolution images and all that stuff.
So I think you can confirm this as well.
Anytime you deploy a new link that adds extra capacity into the network, somebody else on the network notices that and says, oh, really?
I can double my bit rate in my beautiful videos?
It turns out users like that. And immediately that link is saturated, right?
So what you thought was going to be your extra capacity for the next year is, like, saturated the next day.
And it just means that you had pent-up demand that was not being used.
Yeah, this is a bit like making a highway even larger and more cars say, okay, I can use that, and you're back to where you were.
We see that quite commonly where we add extra bandwidth, thinking, oh, that's good, we need the extra capacity, and it instantly fills up.
And what's actually happening is someone goes to a website, the website now operates more quickly because we have availability, and therefore they click on more stuff.
It instantly fills up the pipe.
So users interact more, which generates more traffic.
Because users generate more traffic, the sites are encouraged to build richer experiences, they add more capabilities, and it's a positive reinforcement cycle, right?
But we're always kind of bumping up against that demand.
And it seems to me that there's another thing going on, which is that there's a world which is sort of bifurcating between, let's say, you're sitting here in Silicon Valley with a 4K monitor and a gigabit connection to the Internet, and all the websites you want to visit are in San Jose versus someone who's elsewhere in the world.
Even myself, in London, I typically on my phone get 3G access.
What's happening with that? Well, actually, most of Europe is still relying on 3G.
The chances of you being connected to a 4G network in Europe right now are basically a coin flip.
In big cities, you have a much higher chance; in other areas, not so much.
But I think maybe let's actually take a step back, right?
So the topic here is 2020, right? Actually, I suck at predicting the future.
I think predicting the future is really hard, especially in technology, given the rate of change.
So as far as I can tell, 10 years out, we're going to have flying cars, singularity, and IPv6 in that order.
But five years out?
Yeah, so five years out is kind of like, it's something I can sort of reason about.
It's like you take some crazy technology that's kind of on the cusp, and you're like, five years out, I'm sure we'll solve all the problems by then, right?
So five years out for networks, I'm like, 5G, yeah. 5G will solve everything. We'll have plenty of bandwidth.
Latency will be, like, zero. Everything will be awesome, right?
And, like, you laugh, but this is the sentiment that I oftentimes get from very well-educated engineers, which is just like, there's a Moore's Law, and things are getting faster.
We'll get more bandwidth. So, yeah, my site's kind of slow today, but if I just kind of sit on it, you know, in a year, we'll get better networks, and everything will be much better.
And they're not wrong, except that they are.
In five years, things will be a lot better, but they'll also be the same.
So, and what I mean by that is we'll deploy more networks, denser networks, right?
So we'll work on the last mile connectivity. So we'll bring more bandwidth to the edge, if you will.
So you'll have more reliable experience. We will also kind of clamp down the latency a little bit more.
As we already said, we're already kind of at the cusp of what we can do with physics, so it's not going to be a dramatic change.
4G gives you 50 to 60 millisecond latency on mobile networks.
We're going to be in that range. So those things are not going to change that much.
That's all good. That's going to be improvements. At the same time, as we heard in a previous talk, we're going to bring in a lot of new users online.
Depending on the reports you read or projections, it's a couple of billion users.
They're going to be coming online in markets which are still dominated by 2G.
Actually, many people don't realize this, but 2G is still the most prevalent network type in the world.
By 2020, we're projected to have some turndown of those networks within the United States.
But a lot of the other markets will still heavily rely on 2G.
We'll also have balloons and planes and other cool and crazy projects flying and enabling connectivity in those areas.
But guess what?
Those are going to be high latency networks. They're going to be like your friend in Australia or worse with 300 millisecond RTTs.
So 2020 is going to be much better.
You're going to have really good performance in some areas. But you're also going to have a lot of users which are going to be subject to the same conditions that we have today.
And that's kind of the performance inequality gap. Or maybe another way to describe it is the dynamic range of performance.
It's like we keep pushing the high-end performance higher and higher and higher.
More bandwidth, closer servers, lower latencies.
But fundamentally, we're still stuck.
We're making advancements on the low end. But due to cost reasons, due to other reasons, those things are not going to change due to physics, unfortunately.
So we need to figure out how we're going to build great, reliable experiences that span that range.
Where you're able to get really high throughput and you can view a really awesome video.
And then 10 minutes later, you're in some other location in the tube.
And the connectivity is not so great because now you're sharing it with 150 other people or more.
And at the same time, our expectations continue to get more and more difficult.
We expect to be able to get that Uber to be in the app instantaneously and the websites to be rich.
And I think that's the challenge of many modern applications because you deliver that great experience once.
The user sees it and they're like, this is great.
I want more of it. And once they've seen it, it cannot be unseen, right?
Because it's like, well, I had that experience and I want it now.
And it's up to us to figure out the architecture, both at the network layer and at the application layer.
And the protocols in between for how to enable that.
So let's talk about this. Let's get a little bit more technical.
Let's talk about the protocol. So recently, HTTP, the protocol used for the web, has been upgraded to a thing called HTTP/2.
Yep. Tell us about that.
What does it do? So HTTP/2 addresses a lot of the, I want to say simple, but maybe not so simple, technical limitations in the previous protocol, which was designed for an era where we were fetching documents.
And the document consisted of mostly text and maybe an occasional subresource like an image.
Today, we still deliver pages, but we deliver applications mostly.
And applications are composed of many different resources.
This comes back to our resource graph that you need to fetch.
And the protocol is inefficient in how it fetches that.
So as a browser, we have to fetch anywhere between 60 to 100 to several hundred resources to compose your average page.
That requires making a lot of connections, making a lot of handshakes, doing a lot of round trips that are unnecessary.
So HTTP/2 tries to address that by coalescing more of the things into a smaller number of connections, with a little bit better compression, and it's an incremental improvement.
As a user, you wouldn't even see the difference.
As a technical user, you have to update your server and the client, and hopefully things work better.
And it turns out that now that this technology has been rolling out for about a year, so the standard went to RFC in May of last year, I believe, right?
And so it's been over a year. And depending on your application, you'll see improvements anywhere from a couple of percentage points to 30 or 50%.
So on some Google properties, we've seen significant improvements in performance by just switching that protocol.
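To make the "fewer connections" point concrete, here is a toy count of handshake overhead. The six-connections-per-origin figure is the common HTTP/1.1 browser default; the model deliberately ignores that handshakes run in parallel, so it counts total setup work rather than wall-clock time:

```typescript
// HTTP/1.1 browsers typically open ~6 parallel connections per origin,
// each paying its own TCP (and TLS) handshake; HTTP/2 multiplexes every
// request for an origin over a single connection.
function handshakeRoundTrips(
  origins: number,
  connectionsPerOrigin: number,
  rttsPerHandshake = 3, // TCP + TLS setup, roughly
): number {
  return origins * connectionsPerOrigin * rttsPerHandshake;
}

const h1 = handshakeRoundTrips(10, 6); // HTTP/1.1: 180 round trips of setup
const h2 = handshakeRoundTrips(10, 1); // HTTP/2: 30, same ten origins
console.log(h1, h2);
```

Multiply each round trip by the 150 ms figure from earlier and the motivation for coalescing becomes obvious.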
And how do you go about something like that?
I mean, that's fundamentally a bit of the Internet being changed, right?
Right. So how does that sort of roll out occur? Well, I think for SPDY, so yeah.
So HTTP/2, and then there's SPDY, for those of you that follow that whole arc of development.
So at Google, we started a project called SPDY back in 2007, 2008, those kind of early incubation, and started working on ideas for how we could improve HTTP.
After about three years of experimentation, we actually put it into Chrome.
We built the implementation in Chrome and on our Google servers, our GFEs, and we were able to gather data from out in the wild.
It turns out that for a lot of the stuff, due to the complexities of the real world and the networks, it's very hard to build a model.
It's easy to build a model, but most models are incorrect because they don't capture the wide variety of weird and interesting behaviors on the web.
So once we had that data, we saw that there's significant improvements.
So we engaged IETF, and kind of through a collaboration there over the next couple of years, we arrived at HTTP2.
So to answer your question, I guess it's that incremental release, and I guess the next step there is actually QUIC, which, so Jana, I think, is going to talk about that later today.
It's effectively HTTP over UDP, and we're following the same path there.
Incremental deployments, trying to learn, and trying to accelerate that rate of learning.
There's another thing I think that's gone on when we've gone from the 56K world to where we are today, which is that during that modem world, it was quite common for websites to fail because your Internet connection didn't work, or someone picked up the phone and tried to have a phone conversation, and the modem dropped.
Then we got Internet connections that actually worked most of the time, and we got used to connectivity being there, and then we moved to mobile, and now there's a strange thing going on where websites on a desktop machine might work perfectly because you've got a connection which is very, very good, and then you go on the mobile device, and you're in a kind of hairy environment where the Internet can disappear.
So how are we going to deal with that?
Because I think that's another web performance issue. So there's two problems there, right?
One is the variability in performance, which comes back to the dynamic range.
So how do I deliver an application that is reliable in terms of the experience that I can provide to the user?
Like when I click that icon on my launcher, regardless of my connection type, whether I'm online or offline, I want some feedback.
I don't want to sit there for a minute and then for you to show me the dinosaur page.
Terrible experience, right? And also, even if you're connected, so in Chrome we track failed navigations, and I think we're talking about this backstage.
Those numbers are really, really scary to me. So when we look at the failed navigation rate, we see about 10% failed navigations for users that are on 2G networks.
So 1 in 10 times they go to a page, it simply doesn't work.
Right. They may wait for a period of time, anywhere from 10 seconds to a minute, and they'll get the dinosaur page, which is terrible.
On 3G, it gets a little bit better.
It's somewhere between 3% and 5%, depending on which region you're in.
But even on 4G, it's 1%. So 1 in 100. So that's just the reality of the network.
Sometimes you're just offline. And the first time I saw these numbers, I was shocked.
And you may be thinking, oh man, those Chrome guys, they don't know what they're doing.
10% failure rate. But then actually if you dig into the statistics for the reliability of the different networks, so OpenSignal, if you guys are familiar, does some really interesting reports into reliability of networks and doing studies in different geographies.
They have really interesting data that shows, the way they measure it is they have an app on your phone, so you can install it as a consumer.
And periodically they just wake up and they just ping their own server to see, like, am I up?
What's the RTT? And they find that, say, in the UK in particular, on 2G for Vodafone, and I'm not picking on Vodafone, this is just an example that pops into my mind, the reliability rate is about 89%.
So for 11% of the time, it's not that the device is offline.
It says it's connected to a network, but it is unable to make a connection.
So being offline is not an exception, it's the rule. Being slow is not an exception, it's the rule.
And you have to design your applications to kind of work with that.
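Those failure rates compound quickly over a browsing session. A quick illustrative calculation; the ten-page session length is an arbitrary assumption:

```typescript
// If a single navigation fails with probability p, the chance that a
// session of n page views completes without any failure is (1 - p)^n.
function allSucceedProbability(failureRate: number, navigations: number): number {
  return Math.pow(1 - failureRate, navigations);
}

// 2G's 10% failure rate: a 10-page session succeeds end-to-end only
// about a third of the time.
console.log(allSucceedProbability(0.10, 10).toFixed(2)); // "0.35"
// Even 4G's 1% rate leaves a 1-in-10 chance of hitting a dinosaur page.
console.log(allSucceedProbability(0.01, 10).toFixed(2)); // "0.90"
```

So "offline is the rule" is not hyperbole: at realistic failure rates, almost every sustained session will hit a failed navigation unless the application handles it.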
Now, I think that's an interesting point about designing applications to work with that, because it seems to me that we've got a bit of a missing piece here, which is how do the software developers who build web applications actually build stuff to cope with this situation?
Yeah, and we've had a couple of attempts at this.
Some of the early incarnations were like Google Gears back in 2006, 2007.
Then we had Application Cache, which was another project.
And now we're kind of doing the third take, which is service workers. So the idea there is to enable new primitives in the browser that allow you to take control over each request as it flows out of the client, and implement your own logic to say, hey, I'm making a request for this image, but I'm offline.
How do you want to react to that? So today that would just fail, and you would get the dinosaur page.
But there, now that you can intercept it, you can actually say, well, let me reach into my cache, which I also have control over, pull out maybe a meaningful error page, or pull out a previous version of the article and just give you that and notify you that, hey, you're offline, but next time you go online, I will sync new data.
So you can provide a resilient experience even despite that network.
And where it starts to get really interesting is not only just the offline case, but also kind of the timing case of the latency tails.
So say you want to provide a reliable response within 500 milliseconds, but you just so happen to be on an overloaded network.
You can actually observe that the request has gone out and just start a timer and just say, after 500 milliseconds, I'm just going to say, I'll let it proceed, but I'm going to respond to the user and say, like, it's taking a while, here's the thing that I have right now, I'll update it later.
So now you as a user have a predictable experience, which is something that's been missing.
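The deadline-plus-fallback idea can be sketched as a plain promise race. This is a hypothetical helper, not the actual Service Worker API; in a real service worker you would wire something like it into a fetch event handler via event.respondWith:

```typescript
// Respond within a deadline: race the real request against a timer that
// resolves with a fallback (e.g. a cached copy) if the network is slow.
function withFallback<T>(
  request: Promise<T>,
  fallback: T,
  deadlineMs: number,
): Promise<T> {
  const timer = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), deadlineMs),
  );
  // Whichever settles first wins: the fresh response or the fallback.
  return Promise.race([request, timer]);
}

// Simulate a slow network: the "fetch" takes 2 s, the deadline is 500 ms,
// so the user sees the cached copy instead of waiting.
const slowFetch = new Promise<string>((resolve) =>
  setTimeout(() => resolve("fresh article"), 2000),
);
withFallback(slowFetch, "cached article (will update later)", 500).then(console.log);
```

The real request can still be allowed to complete in the background and refresh the cache, which is the "I'll update it later" part of the pattern Ilya describes.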
It's a critical piece. So it seems if we try and project ourselves to 2020, we're going to have very fast networks in some places, still have the speed of light.
We haven't managed to break the speed of light yet, so that's still going to be there.
And perhaps the big gap is that if we do something today, perhaps we should move all the developers from Silicon Valley into the boonies somewhere where they have bad Internet and they'll come up with a programming solution for making these applications.
The other thing is that there is hope for Australia, because it's actually drifting northwards.
So it is actually getting closer. Right, two inches a year, right? Two inches a year, but still, if we wait long enough, it'll actually be...
In, like, 10,000 years, we'll shave off a couple of milliseconds, and it'll be good Internet.
All right, well, thank you so much for chatting.
I would love to open it up to questions from the audience about web performance.
Do we have a question? Yes, there's a question right in the back there.
Hi.
You talked a little bit about IPv6 and HTTP/2. In 2020, what protocols that we take for granted now will have to be re-engineered?
That's a great question. I'm curious to hear your thoughts here as well.
So I think IPv6 will actually be a thing, like, for real this time.
We've been saying that for 20 years now, but I think it's true because, first of all, actually, in the United States, or in North America, I should say, most of the mobile traffic is already over IPv6, more than 50% as of this year, which is really encouraging.
So it turns out that mobile networks really want IPv6, and that's the primary source where that growth is coming from.
So I think that's going to continue. Also, if you look at the projections, or at the growth rate for IPv6 just in terms of overall adoption, so at Google we track this and we have a public dashboard, over the last three years it has doubled every year, which is to say it went from 0.5% to 1% early on, and at this point it's actually at 10%.
And if you just project that out, I think in five years we'll be in a pretty interesting different place, and that will enable a new class of applications, which is nice.
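The doubling projection is simple compound growth. A toy model, with the hard saturation cap being my own assumption, shows why five years out looks like "a pretty interesting different place":

```typescript
// Naive compound-growth projection: adoption that doubles every year,
// capped at 100% (real adoption curves flatten earlier, of course).
function projectAdoption(
  currentShare: number,
  years: number,
  growthPerYear = 2, // "doubled every year" from the dashboard data above
): number {
  return Math.min(1, currentShare * Math.pow(growthPerYear, years));
}

console.log(projectAdoption(0.10, 3)); // 0.8: 80% three years out
console.log(projectAdoption(0.10, 5)); // 1: the cap, i.e. saturation
```

Even if the doubling slows, starting from 10% puts majority adoption comfortably inside a five-year window.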
I'm really hopeful and optimistic about the efforts with QUIC and UDP in particular.
So TCP is great, it is the foundation of the Internet, but it is also a somewhat ossified protocol that is not able to change as rapidly as we want. And I think many people who work with mobile networks will argue that we need to adapt faster and in more interesting ways, with better and more interesting congestion control algorithms and other things, and I think that's what QUIC is bringing to the table.
If you look at the last two years of our work on QUIC within Google, we've been able to deploy 20 or 30 versions, iterations of the protocol.
In the meantime, there have been zero versions of TCP released.
So we're able to innovate at a much faster pace because the entire network stack lives in the client, we're not dependent on the operating system, which is the ossified layer, to actually rev the whole thing.
So the rate of learning will be very interesting, and I think this will enable more interesting real-time applications, video certainly.
And all the rest.
So my quick answer to that would be that I think that in five years' time, people using unencrypted protocols will be pariahs, basically.
So what are you doing?
Why are you using HTTP now? Or why are you using something that's unencrypted?
I think that's the big change. We see it now. We see about a quarter of the requests that go to our sites are encrypted, and we believe that in five years' time, that will flip and it'll be unusual to not encrypt things.
Security is one thing that we have not touched on, but that's definitely a big area of focus for everybody.
Is there another question? Yes, there's one down here.
Maybe it does come into a more obvious question about not being able to bribe God, as Dave Clark would say, about being stuck with the latency we have.
If you look at applications becoming more interactive, and I'm thinking of within this 200 millisecond, 100 millisecond Twitch, just AR.
At what point do you expect that there's actually a distribution or refactoring of applications that have to take place and we just go accept that we've got to start moving things to an edge or a more distributed edge infrastructure, which is part of the answer for Australia?
I think it's already happening. It's already a thing for, if you look at any large organization, I can speak for Google, right?
We have lots of users in Australia. It's effectively unacceptable for us to incur that round trip to Asia or to North America for every TCP request, so we move data closer and we then replicate it.
We move data into Australia or as close as we can and that's exactly the service that any CDN will offer.
So I think that's already a best practice.
I imagine that it will continue to become hopefully easier for many other smaller applications to deploy and Cloudflare's doing a lot of great work in that space.
It used to be a very complicated thing. Now it's basically a sign-up form and a button.
Hopefully in the future it'll be even easier. So, yeah. Ilya, thanks so much for coming, joining us this morning.
It's great to have you on stage.
Thank you. [Applause] Cloudflare Stream makes streaming high-quality video at scale easy and affordable.
A simple drag-and-drop interface allows you to easily upload your videos for streaming.
Cloudflare Stream will automatically decide on the best video encoding format for your video files to be streamed on any device or browser.
When you're ready to share your videos, click the link button and select Copy.
A unique URL can now be shared or published in any web browser.
Your videos are delivered across Cloudflare's expansive global network and streamed to your viewers using the Stream Player.
Stream provides embedded code for every video. You can also customize the desired default playback behavior before embedding code to your page.
Once you've copied the embed code, simply add it to your page.
The Stream Player is now embedded in your page and your video is ready to be streamed.
That's it. Cloudflare Stream makes video streaming easy and affordable.
Check out the pricing section to get started.