(audience applauding) - [Ilya] All right.

- [Tim] Thanks, buddy.

- Thanks Tim, I'm glad to be here.

And at some point during the year, I'm sure all of you have had this experience, we all work for businesses, and at a certain point, you know, the next year rolls around and the planning emails start landing in your mailbox. And you're like, you gotta think about the next year. And this year it really took me aback, because for me, 2020 has this magical property. Maybe it's all the sci-fi novels that I read growing up or the movies that I watched, but 2020 is really the event horizon for many things. 2020 is when we have flying cars, and once we have flying cars, web performance is no longer a problem, because we figured out flying cars.

And really here at this event, we're here to celebrate the end of our journey of web performance because we've solved it all. (audience laughs)

And, you know, as I was preparing this talk, I was looking at this picture and it dawned on me that flying cars are probably a terrible idea.

I mean, what could go wrong? You have four high-performance blenders attached in front of a projectile. (audience laughing)

Anyway, so I'm still holding out a ray of hope that tomorrow all the speakers will tell us all the secrets of how we solve web performance. But really, I end up oscillating between these two states. On the one hand, I am often appalled at the user experience on the web. But then in the same breath, when I take a step back and look at some of the things that we've been able to achieve over the last decade, and to some extent I've worked on web performance for the last decade more or less full-time, it is actually quite astounding what we've been able to deliver. Just an incomplete list: mobile-friendly.

Like, 10 years ago we didn't even think about that; we had a desktop web. And all of a sudden we have a mobile-friendly web. And that's an amazing transformation.

We have HTTPS effectively on by default everywhere. Over 90% of navigations are now secure.

That used to be less than five or 10%, you know, 10 years ago.

We have things like HTTP/2 and HTTP/3.

Like we literally swapped out the delivery protocols on the entire web.

And to some extent, or to a large extent, no one even noticed.

And that's a good thing.

I mean some things got faster.

Then we have things like Wasm.

Like, now we have this low-level bytecode format delivering amazing things, as you guys heard this morning.

So this is all amazing, this is all great, but then you get moments like this.

I opened my dev tools and I run the audit and I'm like, "Dude, what is going on? We have WebP, we've made the case, I told you what to do to optimize images.

"And in fact, if you read Steve's book, "it told you to optimize images 10 years ago." And I had a good laugh because I put this text with the marketing before I came to the event but we had, you know, some shout-outs to the marketing team at the event a few times now already.

So, like, what is going on? Because clearly, we've made huge progress in some areas but we have not been able to move things at scale in some other areas.

So why is that? Why is it that we're rejecting some of these performance best practices? And just to put a finer point on this, here's a headline from earlier this week.

So Google launches a new Stadia service that you can try out now and it delivers 4K resolution, 60 frames per second gaming in your browser. Low latency gaming in your browser, right? This is effectively taking your console and running it in the cloud.

And it is running in your browser.

So we're able to deliver this high-resolution imagery at 60 hertz.

At the same time, why is it that we can't ship a single damn frame in under one second (audience laughs) on most of the web? And while we're at it, let's actually apply gzip to all the resources, 'cause that's still a huge unsolved problem. And yes, let's optimize images as well, right? There's some discrepancy between these two things that we need to figure out.

So let's dive in into the images in particular. This is an area that I've been thinking about and I want to welcome you guys on this journey with me. So what are the potential problems and pitfalls that prevent us from fixing this problem? It could be that there's just lack of knowledge, or maybe lack of tools or lack of automation. And I think the answer is we have all these things, right? We have books written on image optimizations, we have tons of articles, we have talks, even at this event last year we had an amazing talk on image optimization. So it's not like the knowledge is not there. If you want to seek it, you will find it, and find lots of it.

We even have lots of tools.

People have invested years and decades of time into building better libraries and things that will make your life easy to solve this problem, if you want to.

We have dedicated CDNs that will gladly take that problem off your hands. And yet, for some reason, we still, for the most part, have not solved this problem.

So there's something else missing.

And here's an example of this problem.

So the Web Almanac is a new community-driven resource that was put together and announced last week. And there's a media chapter in it that looks at, if we crawl the web, so this is using HTTP Archive data, and run the Lighthouse audits on it, one of the things that Lighthouse does is it actually takes each image, tries to optimize it, and sees how much you could save if you actually optimized it. And basically what this graph shows you is that at p75, so the 75th percentile, if the sites actually optimized their images, it would save an estimated one and a half seconds for that site in terms of load time. And at p90 you would be over five seconds.

So clearly there's some room for improvement here and we're not really making a dent in this problem. So then the question is really, it's not about the tools, it's not like we don't know how to optimize images, it's the lack of incentives.

Why is it that sites are not adopting this best practice by default? Is it lack of awareness? Maybe.

Maybe it's lack of case studies, right? Perhaps.

Is it user empathy, because we're just ignoring the user and we actually don't care? Is it the privilege of developing on a desktop computer with a fast network, so that we never feel the pain of having to wait so long? Or maybe it's just too hard.

There are so many reasons for this particular problem. But I think the key observation here is, it's a question about behavior.

It's not a question of technology.

So we are asking a question: whose behavior do we need to change, and in what way? And I think it's really important for us to consider who we are trying to change, and to actually break this problem apart for the different segments of the web.

And this is a big leap.

So these two questions, I think, are often used interchangeably.

In fact, we often have conversations where we mentally swap these two formulations of the problem.

But they're actually completely different problems, because if you focus on one, how do we improve image optimization on the web, we can geek out for hours. We can have an entire conference on image optimization: what is the best encoder, what are the best settings for each encoder, what is the best format, should it be progressive or something else, should you do some other optimization? And that's all great.

There is definitely room for improvement in the existing tools.

But the problem actually isn't improving the existing optimizations; it's getting the optimizations employed at all. And that's the second question, which is that question of behavior.

Why is it that we can't get people to pay attention to this stuff to begin with? And the second, or third actually, formulation here that I want us to consider is image optimization across the entire web.

So not just in your particular context, but what can we as a community do to drive adoption of these best practices across the entire web? That, of course, begs the question: what is the entire web, and how do you actually break that apart? One useful way to look at this is some interesting data that was shared at BlinkOn last week.

Rick Byers works on Chrome and he shared this really interesting analysis which effectively breaks down to, if you look at the origins that our users visit in Chrome, about 30% of the page loads are coming from the top 100 sites.

So that's the head, there's a few sites, small number of sites, that drive a lot of traffic, which intuitively makes sense.

About another third comes from the next 10k sites, so that's the torso of the web.

But the remaining 30% comes from a very long tail, millions of sites.

Don't over-index on the actual number of millions, whether it's three million or five million or 10 million; the point here is that we have this wide distribution. And for each one of these segments, intuitively, I think it makes sense that they have different business models, they have different engagement.

You can bet that if you're in the top 100 you probably have a dedicated engineering team, maybe even a performance team, hopefully a performance team or somebody thinking about performance.

The economics of the torso are very different, and the tail is dramatically different again. And driving the adoption of best practices at scale requires that we actually have a distinct strategy for all three segments. So really it boils down to this: if we try to decompose this problem, how do we drive adoption of image optimization in the top 100 sites, which is a different question from the top 10k sites, and finally the three million sites? And I wanna focus on the tail, because I think it neatly demonstrates how differently we need to think about the web and how we optimize it, beyond images as well. So this room is not the tail.

You don't just randomly walk by an event and decide to walk in and learn about image optimization because you happen to have a website.

You guys self-opted into this community, which tells me that you are likely in the head or the torso of the web.

You care about the web.

Most creators, most people creating sites and pages on the web are not performance experts, they will never be. They shouldn't be.

They just have a simple job to get done, right? They wanna get a small site online that describes their business or maybe they want to document something.

And none of the discussions that we have here will ever reach them.

So we need to think differently about how to enable that particular audience. Of course, one observation here is, well, much of the tail actually relies on some common infrastructure, right? Like those same users don't just buy a dedicated server and set up PHP and then install their own WordPress. Some do, sure.

You're probably a developer.

Most turn to turnkey solutions, they'll go to GoDaddy or some other hosting provider that makes it simple to get a site up and running online. So one point of leverage we have on the tail is these hosting providers.

So let's write a nice letter and sign it and send it to the hosting providers saying, "Look, it's really good for the web "if you were to optimize images.

"That would be just grand "if you could just enable images optimization." And in some shape, way, or form I've had this discussion at least a dozen times with different hosting providers in slightly different context, but even this particular discussion.

Turns out, it doesn't actually work that way. It resonates with some of the performance people at that hosting provider, but you have to remember, you have to think and acknowledge how the business actually operates.

And the business of the long tail is a very thin-margin business.

It effectively boils down to how many users I can put on a single server, because I wanna lower my OpEx. It's the fewest CPU cycles, the fewest spindles, and the smallest memory footprint that I can allocate to that user.

So whenever I come and ask them, "Hey, could you please spend some CPU cycles to create more variants of the image so it can be responsive.

"That's like awesome.

"And yes, it's gonna take a bunch of RAM "to optimize the images." What they're translating to in their head is, "Okay, this is gonna be a disaster on my bottom line. "So I feel good about making the web faster, but it doesn't actually align with my business." And for that reason, that message actually just doesn't stick.

So then we come to the next thing, which is the next obvious rebuttal to this, we have free CDNs, we have free services, why doesn't the web just adopt it? Well, that's a different problem.

That's an awareness problem, right? Most people that are trying to get on the web, just get a website online, are not technical.

They don't understand what a CDN is, nor should they, to be honest.

And they just wanna get an image up on their damn webpage. And most of them struggle to get an image on the webpage. That's a whole different problem, but they're not thinking about CDNs.

So we're constantly losing at this game, because new people are creating content on the web and they are just opting in to this bad behavior, which is not using CDNs, not using any image optimization services.

And their hosting providers frankly are not set up to handle that cost.

So here's an interesting question to ponder, how many of you are image optimization experts for Facebook or Instagram? How many do you know? There are many content creators that go to these services and publish regularly, but there's no web performance industry around Instagram and Facebook.

Like, this problem is not a problem on those platforms. Why is that? Well, their economics are different.

They have a giant advertising flywheel which is able to underwrite image optimization, optimized delivery, and deployment of new formats. They take that entire problem and solve it because they have a different way to fund the entire user experience.

And they compete on user experience.

They know that by delivering better image optimization, they have faster load times, better retention, better engagement, which translates into more visits, and then just continues to turn that flywheel of advertising in their favor.

That's not how the tail of the web works.

So we need a different solution.

The open web tail lives and dies by the OpEx margin. We can't compete with that; we can't change those economics.

The torso, interestingly enough, thrives on integration and upsells.

So you can turn to hosting providers which will gladly offer you additional services for additional fees, and they will deliver on the optimized experience, whether you're trying to host a WordPress site or your own site or your own stack or something else. So these things exist and that's great.

But money matters, right? And I think we often ignore that problem.

We think of it as a technical problem and lack of tools but the finance and the business part of the ecosystem is a huge factor.

So my claim is that if we actually want to solve this problem, if we want to get to 2029 and stand on stage and say, "What have we accomplished in the last decade?," I'd love to be in a place where we could say, "You remember back in the day when we had to hand-roll WebP optimizers and things like that? Yeah, those were the days.

"Nowadays we don't have image optimization problems." What would it take to get there? My claim is, it needs to be free.

The number of choices for the user should be zero. You should never have to educate anyone on why image optimization matters.

Most people that put images online, whether that's your marketing department or someone just setting up a site, should never have to make that choice.

So as platform builders for your own brands, I think that's something that you should focus on, which is prevent people from making the bad choice in the first place. If you're trying to police them after the fact, you will never win that game.

And finally, the tools must do the work for you, right? So this gets me to the last point, which is that I think we actually need a different strategy. Knowledge, tools, automation, incentives: those are prerequisites, all necessary components.

But we need to have a conversation about different strategies for deploying each one of these things in the torso, in the head and the tail.

It's a different playbook, if you will.

And that means understanding the actual stakeholders. Who are the users that live in that particular segment of the market? What are their needs? What are the business models of the stakeholders that support them, whether that's a hosting provider or somebody else? And what are the differences in the tools? Maybe the folks in the head are willing to spend more time tweaking the knobs and shifting quality settings, but the folks in the tail don't know, don't care, and will never touch that functionality.

So the question is how do we actually activate the tail? Like, if we stick with the image example, what would it take for us to change the dynamics of this industry? And one of my hypotheses currently is that we actually need to put the agent back into the user agent.

And what I mean by agent is put the smarts, make the user agent smart, helpful, and something that has your back, something that helps you along the way.

This is an interesting insight for me in particular because, and I think many people here will be familiar with this, the Extensible Web Manifesto came out maybe five or six years ago, maybe even longer, and the observation there was: as browser vendors, as browser engineers, generally when we try to tackle a very high-level feature, we fail, because the behaviors are really complicated and we can't provide all the configuration options. So really what we should do is offer low-level primitives on the web platform that allow you, the developers and folks building websites, to compose your own variants and prototype rapidly. And then maybe once the cow paths are well-trodden, we just kind of pave them. So the gist of the Extensible Web Manifesto is we should focus on low-level primitives.

We should try to explain existing features. So if you see some high-level thing, for example the video tag: a video tag is a baked-in web component. So web components are basically trying to explain some of these form elements and other things that are just shipped by default in a browser. And I think that's generally good; I mean, there are a lot of really good insights in that. In fact, we should prioritize all this work above all else. We should not try to prioritize building high-level features; we should try to explain them. And I drank that Kool-Aid, and I believed it, and I still believe it, but I think there are some consequences that we missed along the way.

Implicit in this is that we're actually leaning on developers to drive adoption.

There's nothing that the browser can really do to force adoption; it really is about activating each site on its own, which means that the existing web cannot just get those features for free.

And oftentimes we lean on users to make the informed choice.

So as someone composing their own website, I need to make an informed choice about which framework I should use, what the attributes are that I like about it, and then somehow navigate through that forest of 1,000 choices to end up with the right outcome. Does it have the right image optimization built in? Does it have the right bundling? Does it enforce all of these other things? It's a very tough matrix to get yourself through. So no wonder people are struggling.

And a very concrete example of this that I was reflecting on recently: back in 2015, there was a proposal on WICG for, "Hey, let's add a standard way to lazy load images.

"And in fact, why don't we put that "into the browser by default.

"Like, why shouldn't the browser "just by default lazy load all images?" On the surface that makes a lot of sense.

That would actually help the entire web, I think, is the actual observation.

The web as it is today.

And (laughs) I wrote a long essay on why I think that's a terrible idea.

I clipped the relevant parts, but effectively it boils down to, this is a very complicated problem, lazy loading is very complicated.

Like, you have different behaviors across different websites, how much you preload, what conditions trigger the preload; this is just a very gnarly thing that we as a browser would probably get wrong, and upset people.

So instead we should focus on low-level primitives, like the Intersection Observer API that tells you how far an element is from the viewport, or whether it's in the viewport. We should have resource hints and preload so you can preload stuff.

If you want to trigger on whether the user is trying to save data, we should add a hint for that, and we should have net info, and it's, like, lots and lots of these components. Good news is, we actually shipped all of these things. They exist.

So you can in fact build very powerful lazy loading capabilities today on the web.
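
As a minimal sketch of what wiring those primitives together looks like, assuming images that carry their real URL in a data-src attribute (the 200-pixel preload margin is an arbitrary illustrative choice, not a recommendation):

    // Lazy load any <img data-src="..."> once it approaches the viewport,
    // and preload less eagerly when the user has asked to save data.
    const saveData = navigator.connection && navigator.connection.saveData;

    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // kicks off the actual fetch
        obs.unobserve(img);        // each image only needs to load once
      }
    }, { rootMargin: saveData ? '0px' : '200px' });

    document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));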

The bad part is, how many websites that you go to actually use lazy loading effectively? Very few.

So something didn't go right there.

Like, this is necessary, but it is not sufficient, I think. So this is probably a very clear statement of denial. (audience laughing) So, (laughs) to defend myself, (audience laughing) I don't think I was wrong in that we needed to have low-level primitives; I think that is actually correct.

What I was wrong about is saying we shouldn't do the high-level thing, because what I focused on was the segment of the market where developers were actually smart enough, motivated enough, and aware enough to build a solution that makes sense for their particular use case.

And the general observation here is, I think we've actually kind of over-indexed on low-level primitives.

I think we need to put the agent back into the user agent, to provide services by default that help most web users and move this thing forward. And we should do that with equal urgency, so in that particular example on the previous slide, we should have run both things in parallel. There is no reason why we should have stopped the default behavior from advancing at the same pace as all of the low-level primitives.

So here's my formulation, I think, of what we need to do to fix the images problem as we move forward.

The only way we're gonna fix it, especially if you wanna fix it in the tail, is we have to fix the problem before it happens. We have to optimize the image before it leaves the device, and get it to the storage system in somewhat optimized form, not optimal form, but somewhat optimized form. What I'm thinking about is, hey, you just uploaded a 20 megapixel image into a thing that needs a 400 x 400 pixel preview, very common on the web.

And that's what we need to fix.

Now, when I shopped this idea around, I got horror in the eyes of many engineers, and I think it's because I actually went to folks that work on images.

So their first reaction is like, "Oh my God, this is such a complicated space. You're gonna tell me that you're gonna do magic? Well, what does that mean? Because I have 15 settings that I need to tweak for my particular configuration to get the optimal performance out of my images, and I don't believe in this magic that you're gonna do." And I think that they have a point, and we need to provide a path for both.

And here's a very good example.

If you're an iOS user, you may have seen this if you tried the developer previews.

So iOS recently shipped a new intent on their platform where, if you go to do a file upload and you pick an image, they actually bring up a selector box that says, "What size do you wanna upload this image in?" This is not a site implementation, this is just a native UI implementation, and they offer you four options: actual size, small, medium, and large.

I think this is actually amazing.

My first impression actually was like, "Well, what do small and medium mean?" It doesn't actually tell you what it's about to do. But of course that's a question I ask as an engineer who thinks about image optimization.

But on the other hand, I think this may actually be the most impactful thing that we may have done, and I'm gonna take credit for the work that iOS did here. This may be the most impactful thing that we as an industry have done so far to fix the problem of unoptimized images.

Now, I think this is a huge step forward, but it's not sufficient either.

I think we can actually do better.

So myself and a few others have been thinking about how we wanna approach this problem and we have a proposal out.

No code has been written, but the observation is, we already have an input of type file, and we already allow you to specify what image types you're willing to accept.

So you can say, "I only accept JPEG images." Let's build on that.
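
For reference, that's the primitive as it exists today:

    <!-- What exists today: the site restricts which file types the picker offers. -->
    <input type="file" accept="image/jpeg">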

That's an actual low-level primitive that exists today, and we could probably extend it. And what would that look like? We could provide a native UI, kind of an intent UI similar to iOS, that allows the user to select some sort of options, but we need to give sites control, because what does small mean? I'm not sure.

It would be nice if the site was able to express some sort of preferences.

So if I'm building a photo storage service, I could have one set of settings that is different from, I'm just looking for a 400 x 400 pixel profile mugshot for my social network.

So hypothetically, what could that look like? Well, we know that one of the use cases is translating into different output formats.

There's, in theory, nothing that stops the browser from accepting a certain file type and then outputting another file type.

That is actually a very common operation that is done the moment you upload an image on the server, because you want to standardize what format you store images in.

That can actually be done by the browser.

So you can specify, perhaps, even a preference order, so if some browser does not support WebP it can put it in as a JPEG.

Hypothetically, that may be interesting.
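
Sites can actually approximate that conversion in script today. Here's a rough sketch using the canvas primitives; the quality value is an arbitrary illustrative choice, and the fallback relies on canvas.toBlob handing back the default PNG type when it can't encode the requested format:

    // Re-encode a user-selected image as WebP before upload, falling back
    // to JPEG in browsers that can't encode WebP.
    async function transcode(file, quality = 0.8) {
      const bitmap = await createImageBitmap(file);
      const canvas = document.createElement('canvas');
      canvas.width = bitmap.width;
      canvas.height = bitmap.height;
      canvas.getContext('2d').drawImage(bitmap, 0, 0);
      const blob = await new Promise(r => canvas.toBlob(r, 'image/webp', quality));
      // If the browser couldn't encode WebP, blob.type will be 'image/png'.
      return blob.type === 'image/webp'
        ? blob
        : new Promise(r => canvas.toBlob(r, 'image/jpeg', quality));
    }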

Another one could be, well, we know that there are requirements around file sizes. We've all had that experience of coming to a website and it says, "Please upload an image here, but make sure it's no more than 800 kilobytes and 400 x 400 pixels," at which point you start to tear your hair out, 'cause, like, what do I do? Install GIMP? (audience laughs) Actually, the next thing you do is you start Googling for "web service to resize image," and you end up on some shady website.

That's actually what happens.

This is a terrible user experience.

So let's try and fix that.

Why can't we actually say, ideally, I want an image that is at most 800 kilobytes? Maybe we allow some very high-level settings for the quality that you're looking for, so in that trade-off the browser is making, should I spend more CPU cycles or fewer, we can allow you some very high-level behaviors. But by default, we would default to auto and just let the user agent figure out the best approach.

What about output dimensions? We could allow you to specify what the actual output is that you're requesting. If it's 400 x 400 pixels, is that what you need? Great, let's give you that.
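
Pulling those knobs together, the declarative sketch might look something like this. To be clear, every attribute below except type and accept is invented here for illustration; the actual proposal could look entirely different:

    <!-- Hypothetical sketch: maxfilesize, quality, outputwidth, and
         outputheight do not exist; they only illustrate the idea. -->
    <input type="file"
           accept="image/webp, image/jpeg"
           maxfilesize="800KB"
           quality="auto"
           outputwidth="400"
           outputheight="400">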

Of course there are some really tricky cases here, like what if you have a different aspect ratio? Say you're looking for a 1:1 aspect ratio. Well, the horror: what if the browser actually provided some native modal UI that allows you to crop things? It's not crazy; other platforms have done it. This is normal; users know how to use these tools.

And we have this magical machine learning thing that I keep hearing about, right? So maybe the user agent could actually just position the crop box on the most salient element and just allow the user to say, "Okay," and proceed. Or maybe you can just automate it entirely. I'm not sure what the right design space for this thing is, but I think that's the direction we need to be thinking in.

It turns out that while I've been approaching this problem from the viewpoint of the tail, the torso and the head of the web actually want this problem solved too. So for example, there was a recent case study from Twitter where they implemented, or rather polyfilled, native downsampling in the browser for image uploads when you use their web client, and it had a double-digit improvement in the rate of successful uploads on their service. Meaning if you don't upload massive images, more images make it to your servers.

That's kinda nice, right? Users are happier because their images are actually making it there.

And if they're experiencing this problem and they have to go out of their way and build this entire pipeline, chances are most other people are not doing it. So this will help that, this will help everyone, but the tail in particular.
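
The case study doesn't include Twitter's code, but a client-side downsample of that shape can be sketched with the same canvas primitives as the earlier example; the 1600-pixel cap and the 0.85 quality are arbitrary stand-ins:

    // Downsample an image client-side so a 20-megapixel photo never leaves
    // the device at full resolution.
    async function downsample(file, maxEdge = 1600) {
      const bitmap = await createImageBitmap(file);
      const scale = Math.min(1, maxEdge / Math.max(bitmap.width, bitmap.height));
      if (scale === 1) return file; // already small enough, upload as-is
      const canvas = document.createElement('canvas');
      canvas.width = Math.round(bitmap.width * scale);
      canvas.height = Math.round(bitmap.height * scale);
      canvas.getContext('2d').drawImage(bitmap, 0, 0, canvas.width, canvas.height);
      return new Promise(r => canvas.toBlob(r, 'image/jpeg', 0.85));
    }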

What about video? That's a really interesting question to think about, because on one hand it's actually very similar to images, just a teensy bit harder; the economics of transcoding video are even more complicated and much more costly. But good news: there are some actual proposals being worked on.

Some folks at Microsoft have been working with a couple of services.

And there's an API being designed for very simple manipulation of video, so things like slicing, concatenating and doing other things.

So if you imagine building a client where you need 10 seconds of video because you're trying to build a story interface, similar to, let's say, Instagram or Snapchat, today there's no way to easily clip 10 seconds of video. You have to upload a two-minute, 4K-resolution video to the server, which then has to cut out those 10 seconds. That's crazy. So here, we should have an API to actually load the video and allow you to slice it. I think a really interesting space to explore is, what could that look like in a declarative form? Not just a JavaScript API, but also extending file inputs to say, "You know what, what I really want is to upload a video, but it needs to be, at most, 10 seconds." So if you've ever used Instagram and tried to create a story, they have a really nice UI flow where you pick a video and it immediately drops you into an editor that already clips the 10-second timeline. And you can just move the timeline.

Why can't the web offer that? That's a very simple operation.
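
In the same declarative spirit as the image sketch above, it might look like this; the maxduration attribute is entirely hypothetical, invented here to illustrate the idea:

    <!-- Hypothetical: "maxduration" does not exist. The idea is that the
         native picker would drop the user into a trim UI capped at 10 seconds. -->
    <input type="file" accept="video/mp4" maxduration="10">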

Coming back to native image lazy loading, this is something that I've railed against before, and in retrospect, I think I was wrong.

So I think it should be a thing.

I'd like to try to make it a thing.

Hopefully some of you agree as well.

It is a thing, actually, in Chrome for Chrome Data Saver users.

So if you enable the save data option in your settings, Chrome will implement this for you and try to lazy load images.
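
A related opt-in primitive, the loading attribute, shipped in Chrome around the time of this talk; it's per-image rather than the default-on behavior being discussed:

    <!-- Opt-in native lazy loading: the browser defers fetching the image
         until it approaches the viewport, no JavaScript required. -->
    <img src="photo.jpg" loading="lazy" width="400" height="300" alt="...">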

But what about just making it on by default? There are definitely some tricky edge cases there, because it will affect some of the telemetry, some of the implementations.

So this wouldn't be an easy thing to pull off, but I think we should try.

So with that, I wanna put this question out to this group, and I'd love to engage with you guys throughout the rest of this conference: what does it mean for us to put the agent back into the user agent? We talked about images, but what are the other problems that we're not thinking about, that perhaps you're fighting with, that the user agent could solve better, both as a low-level primitive and as a high-level primitive that can be applied across the industry? So with that, thank you.

(audience applauding) - That's, I really like the head, tail, torso breakdown. It's really interesting, 'cause I think it's very tempting for us, and I think, scratch that, it's not tempting, I think you're absolutely right.

Like, when we are in a room like this where we're surrounded by people like ourselves, we tend to have this sort of perspective that we're all talking to other developers like us. And Henri mentioned this a little bit; the comment he made this morning was that somebody after a conference was talking about how, with a lot of performance talks, they felt like they were being judged or talked down to. And I think a lot of that is that communication barrier.

Like, that perspective of head versus torso versus tail is helpful for this conversation. And one of the things that struck me, I guess, is, do we need to kinda be okay with good-enough solutions, to some extent, for some of this, right? - Yep.

- Have you started talking to developers already about this? Are you finding that we are having a hard time accepting some of the proposals because of that? - Yep, so as I said, I found it interesting that some of the early reactions to the proposal, so you post a proposal in a developer setting, and the first people that see it are the developers that have been thinking about this problem. 'Cause it's like, "Oh, wow, there's a new image-related thing, I've been thinking about images for 10 years. Let me look at it. What? You're gonna auto-optimize everything? Crazy talk!" That was the first feedback, and it's been interesting to react to it and just kind of absorb it and understand, why is it we have that kind of knee-jerk reaction to it? And it took me, I guess, almost a decade to come to this conclusion.

'Cause I went through all these stages.

It's like, to optimize images, it's lack of knowledge. So I'm gonna go write a 10-page Web Fundamentals guide on image optimization, which, by the way, many people at Google make fun of me about. It's like, "Really, you're gonna talk about byte compression?" Anyway, so, I wrote that and did lots of talks. And I can repeat these talks at every conference, and there are good case studies, and it helps, but it's kind of like, you try to stop the bleeding but you never solve the actual root problem. So you can't win at that game.

So then you move upstream, and you're like, "Okay, really it's about tools.

"What we need is better automation.

"Let's go figure out how to get servers to adopt this, "and CDNs to adopt it." You have some success there.

But then you cross that Rubicon and you're like, "So why haven't we still solved the problem?" You keep looking at the HTTP Archive data and the damn needle doesn't move.

And you're like, "Ugh!" And then you start getting into, "Okay, well let me talk to hosting providers." And then you start to appreciate the differences in business models.

So it took me a long time to go through this journey and appreciate it, and maybe I'm not the only one that actually understands this problem, but I think it's about socializing that, understanding it, and appreciating it.

It's like looking at it from the lens of, okay, this feature is being designed for this group of people.

It'll likely be adopted by this group of people, and maybe that is the shelf life of it.

But what about, how do you solve that same problem in this segment of the market, that will never need to learn about performance? - So the other thing that struck me, particularly on, like when you give the iOS example, like the image upload stuff, iOS has the benefit of, in that particular scenario, owning the entire system, the platform.

They don't have to care about whether Android is going to implement some sort of an image-processing thing.

Make sure that we're matching up our signals and stuff like that.

That doesn't work on the web, right? Like, on the web we now expect some sort of consistency, at least with any sort of standardized thing; we expect some consistency in behavior across browsers. Now, there's always a little bit of difference, maybe, in some of the UI elements to some extent.

But what I'm wondering about is something like this, this image thing.

Like, there's a high level of user testing that has to happen for us to make sure that what is presented by the browser is something that the tail is going to be comfortable with and use. Do you have any plans, like, how is that going to look, balancing being able to test what the actual outcome looks like versus the fact that Safari and Firefox and Chrome might all disagree, and they're all gonna implement at different times?

- Yeah, that's a good question.

First of all, the iOS example that we showed is a native intent.

So it's the same intent that you would get outside of the web browser as well.

So they have that benefit of enforcing that effectively across the entire platform, which I think is actually a neat solution across the board, 'cause they're acknowledging that this is not just a problem on the web; it's also a problem in many apps.

The good news is, that is a problem that browsers like Chrome are solving.

We run on all platforms.

So we can't rely on native infrastructure.

Like Safari relies on their network stack, Chrome ships their own network stack because we need to work across all these different things. So there are browsers that are able to deploy that across all platforms.

To your point of differences in the UIs, that's a really good question.

Good news is that we have smart UX teams that know how to optimize these things and do the appropriate user testing and derive the right solutions.

I think it's actually a good opportunity for different browsers to try and experiment with different things, like what makes the most sense. One could imagine a browser that, crazy talk, provides some set of filters and more than just basic photo manipulation, or, like, resizing, right? - Sure.

- I actually would love to see that.

Like, what would a browser for content creators look like? An integrated media library with some basic editing capabilities.

We need to empower more people to publish on the web. So I think that's an amazing space for us to explore and innovate in, and alongside that, we'll get more primitives baked in that allow us to do the optimization in place. - I think that's, I mean, I'm all for empowering folks, more people, to get on the web and be able to be content creators, 'cause I think that's one of the big challenges that we have right now. Like, if anybody's paid attention to the IndieWeb kind of conversation, everybody, I think, fundamentally is like, "That's great! We love the fact that you own your own content and everybody's empowered to do it," and that may be the right way to do it from certain perspectives.

However, the complexity that's involved is something that we absolutely need to solve if we want to actually make that work at scale.

One of the questions that somebody posed from the audience, when you were talking about incentives and stuff like that, it was inevitable it was probably going to pop up: the Google slow-site signaling, the article that came out recently where they were talking about presenting some way to show the user that this is a slow-loading site.

Just trying to get your perspective on that, you can pretend that you don't work for Google. We are recording, but we just won't let any Googlers watch it, I don't know, we'll ban some IP addresses, but also in which part of the segment do you think that that helps, if it does help? I mean, is this a torso thing, did I say that right, torso? - Mm-hmm. - Okay, it felt like I

misspoke for a second.

- That's a good question; I need to think about the segment. So first of all, if you haven't seen the article: last week we had the Chrome Dev Summit, and there we demoed some very, very, very early UX explorations around what it would mean for a browser to provide some incentives around speed. And a good example here is HTTPS.

We've been on this march over the last five years to migrate from the gray bar by default meaning HTTP to what we want everyone to get to, which is the gray bar meaning HTTPS.

It should be assumed by default that you are secure. And along the way we had this trajectory of, well, in 2017 we're going to start marking pages that have a form as red in the bar, because they're not secure.

So that's a really compelling example of where it actually worked.

People clearly paid attention to those incentives. Those were not the only incentives, but I think browsers, Firefox did the same, Chrome did the same, I think those incentives had a huge impact on moving the industry forward.

And by the way, the industry was kicking and screaming along the way.

It's like they did not want to change.

But I think at the end of the day, we made the right call, we made the right trade-offs, and we're in a better place now.

So what we're doing here is applying that frame and saying, "Could we do the same thing for web performance? What would it mean for the browser to highlight good-performing sites?" And we're very, very early in that exploration. So please don't over-index on those particular mocks that we have there, but the basic idea is, somewhere in the UI, if you're visiting a site that is fast, we want to reward it in a way, like highlight that this is a good experience.

Or maybe if it's taking a very long time, it can give people some context, like, yeah, this site usually loads slow, ha.

- (laughs) Yeah little subtle shade there.

You know, the one thing that stands out as a little bit of a difference between the security signals and the performance signals there is that with the performance signals, as a user, I'm probably aware that it's loading slow right now. Like, in the moment, I probably get it.

Whereas with security, one of the things that I thought was so fascinating about the security movement, that whole TLS-everywhere thing, was that we had to tackle the sites and the developers alongside informing the users, educating them, and building up that knowledge through the information there.

Do we need to worry about educating the users about the performance side of this stuff, or is that less of an issue? Is this more purely about incentivizing the actual site owners to do something? - I think that's an experiment that I'd love to see run.

We do know that users learn behaviors kind of ambiently. So if you interact with a fast site, you interact with it more.

Or actually, to put it differently, we've run experiments where we've intentionally slowed down the site and then removed that slowdown, and we observed that the users who were exposed to it actually interacted less, because over time they had learned to expect that the site's gonna be slow.

So they just do, for example, fewer Google searches. So what would it mean for us to expose more digital cues about speed? Would it change that behavior? I don't know.

But I think that's a really interesting, very interesting thing to explore. - Yes, it's interesting.

- Like, if you have an actual green indicator when you browse a site that tells you, "Yeah, this site is secure, it's fast, it's good," would you use it more? I don't know.

- You can see there's part of me, like the performance nut in me, greedily rubbing my hands together at the idea that this could make those studies even more impactful, like the correlation even stronger, in a scenario like that, when it's overtly signaled like that.

All right, well, thank you Ilya, that was great. - Thank you.