Introduction to day 2

Stories, Sly Jabs, and Setting the Stakes for Performance

Tim warms up the room with playful family anecdotes—trolling his teenage daughters and embracing “old man yells at cloud” moments—to segue into why he’s been rethinking web performance. He uses humor about aging, involuntary groans, and “as the kids are saying nowadays” to tee up a deeper question: what’s really bothering him about today’s web. This opening establishes a personal, reflective tone that leads into the talk’s central theme of defending users. It frames performance not as numbers alone, but as lived experience that we can either respect or erode.

Rediscovering “Stubborn Empathy” and How We Lost the Thread

Tim recalls Nate Koechley's "Professional Frontend Engineering" and the Yahoo era to spotlight a principle that shaped his career: practicing stubborn empathy for users. He explains how availability, defensive design, and advocacy once anchored performance work. Over time, he argues, softer human-centered aims faded behind dashboards and metrics, even if no one set out to create bad experiences. This segment re-centers the talk on user-first judgment and introduces two threats to it: the monoculture and the machine.

The Browser Monoculture and the SEO Carrot

Tim examines how a single-engine focus—building, testing, shipping, and verifying primarily in one browser—quietly shifted the industry’s priorities. He credits Core Web Vitals as a net win, yet shows how the SEO boost nudged teams toward “three green ticks” instead of genuinely better experiences. He names this dynamic an amplified “Localhost Delusion,” where lab wins mask real-world variability. This reframing challenges the audience to see metrics as means, not ends.

Demo: CSS Placement Pitfalls Across Chrome, Firefox, and Safari

Tim demonstrates a simple page with a delayed stylesheet to show how browsers differ: Chrome progressively renders early content, Firefox is slightly more lenient, and Safari blocks the entire page when CSS sits in the body. The example uses a delay tool from Ryan Townsend and Harry Roberts to exaggerate timing and highlight that LCP may be unaffected in some engines while users still suffer in others. He concludes that cross-browser testing is user advocacy, revealing issues single-engine testing would miss. This live comparison anchors his call for broader, defensive testing practices.

Cross-Browser Vitals Arrive and New Tools to Measure Them

Tim shares encouraging progress: INP and LCP support rolling out across Firefox and Safari Tech Preview, bringing the ecosystem closer to cross-browser measurement parity. He cautions against direct apples-to-apples comparisons, yet previews Cloudflare RUM data (25B datapoints) showing P75 INP and LCP gaps between desktop Chrome and Firefox, suggesting room for competitive improvement and shared learning. To enable better diagnostics, he announces Cloudflare’s open-sourced Telescope agent for cross-browser performance testing and invites community contributions. This segment positions pluralistic metrics and tooling as catalysts for healthier competition and better user outcomes.

The Machine: AI Slop, Bigger PRs, and Why Judgment Matters

Tim acknowledges AI's promise as a sidecar for analysis, then warns about "AI slop"—code optimized for generation speed over long-term quality. Citing data, he notes AI-assisted work correlates with a 154% increase in PR size, 91% longer reviews, and 9% more bugs per developer, figures that don't even count security, accessibility, or performance issues. He argues the machine abstracts without accountability; only humans can apply stubborn empathy to protect real users on real devices. Tim urges rigorous review of generated code so we remain the user's last line of defense.

Agentic Web Tensions: Crawlers, Referrals, and Control

Tim critiques early “agentic web” behaviors that feel misaligned with the open web, highlighting a stark crawl-to-referral imbalance: Google at ~9.4:1 versus OpenAI at ~1,600:1 and Anthropic at ~71,000:1. He warns this lopsidedness commoditizes content and exploits the web’s experience gap, echoing prior detours like Instant Articles and AMP. Still, he allows for a future where agents complement, not cannibalize, the web—drawing an analogy to moving beyond text adventures. The call is to protect value by improving native experiences now.

Raising the Bar: Delight, Defensive Design, and the Next Era

Tim outlines a counterattack: close the experience gap by targeting task completion efficiency, delight, and uncompromising quality. He urges teams to vet tools ruthlessly, build defensively, and aim for experiences that feel instantaneous and immersive. By re-centering stubborn empathy, he argues, we can resist commodification and guide the web’s evolution on users’ terms. Tim ends with a charge to put human beings front and center—the web depends on it.

He's been a prolific web performance consultant and just generally good friend of the web for as long as I can remember.

I've seen him at so many events giving incredible talks.

His talks are always superb and I just always enjoy having conversations with him and hearing his expertise. It really is kind of encyclopedic.

You've seen that from the way he guided us through the day yesterday.

I can't think of a better way to kick us off and kind of tee us up for the second wonderful day here at Performance now.

So please, let's start as we mean to go on, a huge welcome, a massive round of applause please for Tim Kadlec.

Thanks, Phil.

There is nowhere to go but down from there. That was a really nice intro.

Check's in the mail. Yeah. So I have five kids. Those of you who know me, those of you who have seen me talk before, who have ever met me even for a moment, already know this. The rest of you have your eyes popping out of your heads, and, whatever, however chaotic you think it is, it's more.

But yeah, I talk about them a lot. I tell a lot of stories.

It's weird. They apparently have a big impact on my life. I don't know. But years ago, when we had our first kid, my wife warned me. She's like, don't be the dad that just won't shut up about his kids. And that's 100% what I've become. I probably should have told some of you before last night's party. It would have saved you a little bit of time.

But then I wouldn't have had a chance to talk about them and I wouldn't have been very happy. So actually one of the first talks I ever did in Amsterdam, I talked about my kids, my two oldest daughters at the time, they were little and I talked about how I would take them on stroller rides and, you know, rambling stroller rides, never in a linear point A to point B fashion. That's not how kids operate.

Now the oldest is 16, almost 17, she's driving. The second oldest, 15, almost 16, she's learning to drive. My third oldest, she's almost as tall as I am, we don't do stroller rides anymore. They don't fit.

So we've had to find different activities for bonding. So nowadays my favorite thing to do is to troll them. It's really nice.

I used a cheesy, G-rated pickup line on my wife the other day in front of them. Got two of my daughters to just stop cold and, like, turn and walk out of the room. And the third one, like, literally, you heard her head hit the table. It was so... it was magical.

It was one of the highlights of my life. One of my favorite things to do is just throw in random stuff. Like, I just say random stupid stuff, made-up stuff or old stuff or whatever. And then at the end of it, I say, as the kids are saying nowadays, as I lean into my daughters.

And they hate that. You know, you can do it with anything. Like, Phil, that intro was great, man. You are crushing it, as the kids are saying nowadays. Six, seven. Has six-seven made it over to...?

Oh, yeah, I know, right? No, no. Here's how you retaliate whenever there's something going on. You're like, seven, eight, as the kids are saying nowadays. And then they're like, yeah. They groan. Eye roll. It's so good.

It's incredibly satisfying. They dish it right back, though. My third oldest in particular, she loves to dish it out.

She's teasing me all the time about my gray hair. I don't have any gray hair. That's what I tell her, and that's what I will tell you. What happens is, especially in some light, there are a few hairs.

The way that they sit, it's a lighter color. It's like a blonde, is what's happening here. So I just wanted to make that very clear.

There are other signs, though. There are other signs.

Occasionally I groan for no apparent reason. Now when I get up out of a seat, I feel like I'm in good shape. I don't understand why. I'm not that stiff. It happens involuntarily. It's like there's some sort of, I don't know, unexpected, like, layout shift in my spine or something.

A couple of you gave me the groan that I was looking for, so that's nice. I also get grumpier at times than I remember.

I've definitely had more than my fair share of old-man-yells-at-cloud moments in the past few years. And it's made me spend a lot of time thinking about why. Like, why? Am I getting old? Is that what it is? That was where you were all supposed to say no. You missed your cue. That's all right.

But seriously, what is my beef? As the kids are saying nowadays.

But it's made me stop and think about performance and what attracted me to it in the first place. As was mentioned yesterday, there was a talk that had a major impact on one of our speakers and kind of pushed her towards sustainable web design. Mine is a bit older, but there was a talk that I watched early on in my career too.

It was by Nate Koechley, called Professional Frontend Engineering.

This thing holds up, by the way. It's very old, as you can probably guess by the very PowerPoint corporate sort of backdrop there.

By the way, he was at Yahoo at the time. Yahoo, like, that tech team. Amazing. The stuff that we got out of that tech team, the foundations they laid for the web, is just absolutely incredible. They don't get enough credit for that.

But this talk was phenomenal, and it was incredibly influential on the way that I viewed the web. There were a lot of ideas that he laid down about availability, about building defensively for users, about advocating for them. But the thing that really stuck with me was when he talked about this idea that our job comes down to supporting users, to making sure that they have that great user experience. I especially love the phrasing that we want to have stubborn empathy for what they need, for what they want, for what they're going through. And what can we do to make all of that better? That is our job.

I've always loved that. I always thought that principle, that concept of stubborn empathy, was so elegant and so important. It's just as important today as it was then, maybe even more so, because we have more variables now. We have more device types, we have more random connectivity types and issues. We are doing more and more on our devices and in our browsers. If anything, this principle is more important today than it was when he first talked about it.

But man, is it easy for that to fade away a little bit, for us to lose focus on it just a little bit. And I don't mean intentionally. It's never intentional. As I've said many times over the years, I've never met anybody who sets out to create a bad experience. It's not anybody's goal.

But it does take a back seat sometimes. It fades away, because particularly in an industry like performance, what we do, what we focus on, what we report on, is defined by what we measure. It's all metric and number based, isn't it? That's what we talk to everybody about. That's what our goal is all the time. And so it's easy for something that's a bit softer, like this stubborn empathy, like defending the user, to fade away a little. So I do feel that there are some factors that have put that principle at risk, and I wanted to highlight two of them today that have been on my mind: the monoculture and the machine.

They go well together. That was completely... I'm not sure that it was accidental.

I liked it, though. So, first off, the monoculture. I think everybody can see where I'm probably going with this, but for a few years now, we've been primarily focusing on a single browser engine. We've been primarily chasing metrics that are supported by a single browser engine. We have been primarily testing in a single browser engine. We have been primarily shipping features and new standards that work in a single browser engine.

And we have been primarily verifying them with things that can only be reported on and collected in that same single browser engine.

I want to be very clear: I'm not throwing shade here. I actually love the Chrome team. It's huge. It's important. What they've done for the web cannot be overstated. I think it's amazing. This is not through a fault of anything like that. This is just what happens when you have innovation happening at different paces. And Core Web Vitals, which I've talked about at length, and Harry mentioned it the other day.

Yesterday. The other day. Wow. I am getting old.

Harry mentioned it yesterday. It's amazing. I think it's been a net win for the web as a whole, absolutely, and I think these metrics are fantastic. They actually come from a very human-centric background. That was the entire reason these metrics existed: we were trying to find metrics that help us break performance down into those discrete components of the user experience.

But if we look back at Core Web Vitals, when did it really start to take off in terms of getting adoption, getting the performance industry moving, getting budgets to open up from companies?

It was because of what. It was because of the SEO carrot.

Right. That became what unlocked all these budgets.

Now, we can say that it's a net positive. Again, like, hey, we get better, we're making the SEO people happy. That's great. It's also good for the users.

That's good. There's like a residual effect here. But it did start to shift again, just very subtly, that framing in our mind.

Right. The goal shifts ever so slightly from "what can we do to optimize that user experience" to "how do we make sure we've got those three green ticks, so that we appease the SEO thing and get our little bump." It's a subtle thing. Again, it's not a nefarious thing, but it's one of those things that kind of creeps in there, and we start to lose focus on what our actual purpose is. And so we start to focus on that set of metrics in the one browser that has supported them for so long.

At an event like this, we're all pretty familiar with a concept I like to call the Localhost Delusion: firing up your own machine, running a performance test, and everything looks amazing. I think Harry mentioned it in his talk, and I've worked with companies before where somebody shares a Lighthouse run from their local machine and goes, oh, we look amazing, our web performance is fantastic. But obviously that's not what happens in the real world. There's much more variability: slower devices, slower connections, all of those kinds of things. Testing in a single browser is like ramping the Localhost Delusion up to 11.

It's an extension of that same problem, and it results in the same very dangerous side effect: overlooking potential issues and gotchas that are impacting real users. There are differences.

I could do a whole list, but I don't have that much time, so we're just going to do one. This is a very simple HTML page. We've got an h1, we've got an image, we've got a stylesheet, and we've got a paragraph. The stylesheet is being loaded through slowfil.es, from Ryan Townsend and Harry Roberts. I don't know if they're here yet or if they were out too late, but they've built this service. It's fantastic, very simple, but it lets you request different resources with a delay applied. In this case I'm applying a three-second delay, just to exaggerate things. Now, if I fire up this page inside of Chrome, you'll see what happens: we get the cat, we get the header.

A little bit later, once the CSS is loaded, we get the paragraph, and then we see that the LCP metric is actually not impacted at all. What's happening here is progressive rendering: Chrome is doing a good job of getting stuff onto the screen as quickly as possible and only blocks the content after that CSS, but it does not impact our metric.

And again, the content is out quick. This is the same page in Firefox. Interestingly, the paragraph is actually after the stylesheet, and yet it comes in anyway. So there's progressive rendering, but there must be a little more leniency or something in place there. Chrome actually holds back on that paragraph until the CSS arrives, but Firefox gets it out too. And again, you'll see that the image got out pretty quick; there was just a slight loading difference. LCP was unimpacted by the three-second stylesheet, which only blocks the content after it on the page. This is the same page in Safari.

Safari does not do the progressive rendering. So that CSS sitting in the body blocks the display of the entire page until the CSS arrives.

I've seen plenty of sites where the CSS is interspersed within the document, and I've actually seen some advocacy for it: oh, progressive rendering is a really cool thing, we should take advantage of it, don't hesitate to spread the CSS out.

But as we can see, this has a very negative impact in Safari.

This approach can have very real consequences. And again, if we're not testing outside of Chrome, or Firefox in this case, we're never going to see it.
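For reference, the demo page described above boils down to something like the following. This is a reconstruction from the talk's description, not Tim's actual markup, and the slowfil.es URL parameters shown are an assumption:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>In-body CSS demo</title>
  </head>
  <body>
    <h1>Hello</h1>
    <img src="cat.jpg" alt="A cat" width="400" height="300">

    <!-- Stylesheet deliberately placed in the body, served with an
         artificial ~3s delay to exaggerate the effect. -->
    <link rel="stylesheet" href="https://slowfil.es/file?type=css&amp;delay=3000">

    <p>Chrome and Firefox paint the heading and image above while this
       stylesheet loads, holding back only this paragraph; Safari keeps
       the whole page blank until the CSS arrives.</p>
  </body>
</html>
```

In Chrome and Firefox the LCP candidate (the image) paints before the stylesheet resolves, which is why the metric looks unaffected even while Safari users stare at a blank page.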

Cross browser testing is an act of user advocacy.

It's a way that we can defend the user, protect the user, make sure that we are meeting them wherever they are, no matter what the scenario is. And that's why, quite honestly, the most exciting performance news of the past year for me has been that we are going to have cross-browser support for some of these metrics that we've come to know and love and rally around. We've had INP, Interaction to Next Paint, in Chrome for a while. Firefox.

Firefox team is here. Kudos to you. This is two straight Performance Nows where there's been a really fun Firefox thing to talk about. I think last year LCP came out, like, the morning of one of the days of the conference.

INP shipped in Firefox 144, which was two weeks before this event. I don't know if this is intentional or not, Mozilla team, but I really appreciate it. Keep it going. It's kind of fun. Like, yeah, that's perf now. It's new, it's cool.

Safari's not that far behind. INP is actually in Safari Tech Preview version 229. So that's coming too.

I know that looks really bad, if anybody's been keen-eyed and watching the INP value. It's early. We don't know if this is a measurement thing or if there are things happening. I wouldn't worry about it.

Okay, Barry's saying he fixed it. I'm not sweating too much about metric differences, and we'll talk more about that in a minute. But it's awesome that it's coming.

LCP has been in Chrome, obviously, for a while. LCP has been in Firefox, as I mentioned, since last Performance Now, and it's coming to Safari Tech Preview. It's in version 230.

You can actually fire it up, get your LCP metrics, and start to measure and see that as well. So after a while we are this close, this close, to having cross-browser First Contentful Paint, Largest Contentful Paint, and INP in all three major browsers. That is fantastic news, and it's frankly the thing that's got me the most excited in the last year. Honestly, it's massive progress.

Yeah, it is worthy of applause. Yes.
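One practical wrinkle of this cross-browser rollout: not every engine exposes every entry type yet, so RUM code should feature-detect before observing. A minimal defensive sketch (`observeIfSupported` is a hypothetical helper name, not an API from the talk):

```javascript
// Register a PerformanceObserver only when the engine actually supports
// the requested entry type; returns false instead of throwing otherwise.
function observeIfSupported(type, callback) {
  if (typeof PerformanceObserver === 'undefined') return false;
  const supported = PerformanceObserver.supportedEntryTypes || [];
  if (!supported.includes(type)) return false;
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) callback(entry);
  });
  observer.observe({ type, buffered: true });
  return true;
}

// LCP: keep the latest candidate as entries stream in.
let lcp = 0;
observeIfSupported('largest-contentful-paint', (e) => { lcp = e.startTime; });

// INP-style tracking: remember the worst interaction duration seen.
// (Real INP uses a high-percentile interaction, not a simple max.)
let worstInteraction = 0;
observeIfSupported('event', (e) => {
  worstInteraction = Math.max(worstInteraction, e.duration);
});
```

The return value tells you whether the metric is being collected at all, which matters when the same beacon code runs in Chrome, Firefox, and Safari Tech Preview at different stages of support.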

Now, I mentioned that the Safari data looked a little bad, and Barry kind of took care of that. But the caution I was going to give anyway is that I'd be careful about comparing too directly across browsers. We saw this with First Contentful Paint.

They're implemented according to the spec, but each browser does slightly different things.

So you're getting slightly different data points. It's not quite apples to apples.

That being said, I couldn't resist. So I fired up some of Cloudflare's RUM data.

We looked through about 25 billion data points, which is just fun, to have that number.

It's big. And what we saw for the P75 for INP, focusing on desktop only, because we wanted to isolate things and Firefox on mobile isn't really a thing: P75 INP on desktop for Chrome is 88 milliseconds, and 120 milliseconds for Firefox. So we do have a bit of a gap there. Interestingly, the flip happens on LCP.

We see 2.26 seconds at the P75 across our users on Chrome and 1.86 for Firefox.
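The P75 figures here are 75th-percentile values across page views. As a rough illustration of how a number like that falls out of raw RUM samples, here is a nearest-rank percentile sketch; the method is an assumption for illustration, not Cloudflare's actual aggregation:

```javascript
// Nearest-rank percentile: sort the samples, then take the value at
// rank ceil(p/100 * n), converted to a zero-based index.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Eight illustrative INP samples in milliseconds.
const inpSamples = [40, 55, 60, 72, 88, 95, 120, 400];
console.log(percentile(inpSamples, 75)); // 95
```

A high percentile like P75 is preferred over the average here because a handful of very slow outliers (like the 400 ms sample) would otherwise dominate the number.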

Now, these could be measurement differences, right? I haven't looked at where we're hooking in inside of these different engines, and that probably contributes in some way or another.

But there's also a significant gap in each of those.

So maybe there's also an opportunity here. We've talked about this before, but this is why we don't like monopolies and monocultures, right? Because competition is what refines everything. That's how we get better products. That's how we get a better web.

And it is entirely possible that what we're going to find out here in the next year, now that these metrics start to ship and we start to get more granular insights, is that, hey, there's something really freaking clever that Firefox is doing, or something really clever that Safari is doing, that we can take and make everything faster as a result. There are going to be opportunities for us to learn this stuff. It's going to require a lot more testing, but I'm very excited to see that happen, because I think there's going to be some really cool innovation as a result.

At Cloudflare, we think this is really important.

We're talking a lot about that this year with the cross-browser stuff. I'm not going to dwell on it, but the other night, if you were at the meetup, one of my colleagues, Mike Kozicki, mentioned we just open-sourced a cross-browser performance testing agent called Telescope.

It's new. I mean, it literally launched this week. There's plenty to do, there's plenty that it already does. The priority focus here is to provide accurate diagnostic data from all the major browsers.

We just thought it was important to have another cross-browser testing tool available, and we think it's important that the community gets to contribute to guide the future of it. So this one is completely open source: personal use, commercial use, does not matter. All we want to do is make it better. We were really excited, actually: we got our first community contribution last night. Kevin, are you here this morning? Yeah. All right.

Kevin fixed a little Linux bug for us, so we merged that in. It was kind of fun. It was nice, it was exciting. But yeah, please, if you think this is as important as we do, just help make it better. I don't care what you do with it. Just make the web better.

The second piece is the machine. This is the second thing that's got me a little worried, and this is where we come back to my old-man-yells-at-cloud thing. Umar gave a great talk yesterday about DevTools, and he talked about how he's using AI to help do some of that analysis and some of those insights. We're going to have another talk later today that gets into more of the MCP side, the logistics and technicalities of how we refine that for performance analysis.

Those are good examples of where I think AI can be very helpful, as Umar said, in that sort of sidecar, complementary role. I'm excited about that. When you reduce it from the large scale and focus on smaller, contextual ideas, I think that's really cool. I do think AI poses a couple of threats, though.

The first is AI slop.

I am so sorry for this visual. This is hot dogs. This is Ike.

This is hot dogs. We have a few hours before lunch; I felt like this was probably safe. So sorry, I was just...

It was very visceral. Generative AI produces a lot of code very quickly and developers are being put under a lot of pressure to use these tools to generate code.

I worry that we're racing a little too quickly down that path without really thinking about what's coming back from it yet. See, the thing is, if you look at the code that gets generated by these tools, it definitely favors generation time over lifetime quality. That is not the focus of this output. As I've put it a few times while playing with these tools: they're pretty good for a weekend project. But if you want to do something that's going to be in production for a month, two months, three months, a year?

Oh my gosh, does it become a nightmare pretty quickly.

It's not great. The data supports this, by the way. There was a great study looking at the impact these tools have had, and there's a 154% increase in average PR size when an AI-driven code editor or tool is used. A 154% increase in average PR size.

That leads to a 91% increase in the time that it takes to review those PRs. It takes more time to review more lines of code. A lot of us are engineers in this room or have been in the past. What happens?

What do we get more of when we get more lines of code? We get more. We get more bugs.

So we're also seeing a 9% increase in bugs per developer when we're using these tools. And I want to be very clear: when they're talking bugs, they're talking about "oh, this thing didn't work."

They're not talking about security vulnerabilities in the study. They're not talking about accessibility issues and they're definitely not talking about performance considerations.

That's on top of this. We get to pile that on.

It's plus 9%.

So, don't get me wrong, the quality will get better. We'll see that.

We've seen iteration, we've seen it get better over the years. But again, we're kind of rushing down this path without thinking about what we're actually using.

And the thing is, the machine fundamentally operates on abstraction without accountability. Its user is us, the person using the tool. It's not the end user who's sitting on the other end of all this code that we're shipping out there. I've talked about how important I think stubborn empathy is, and I've talked about it over the years. As Phil said, this has been a common theme across all these talks.

I'm just driven by that concept of access.

The machine has no concept of that whatsoever. There's no understanding of the person down the street who's battling connectivity issues, the person who's got the three-year-old Android phone. That's not what it's prioritizing. That's not what it's focused on, and it has no understanding of it whatsoever.

Our job becomes more critical than ever in this environment, and we need to apply that stubborn empathy. We need to be very critical. We need to bring that human judgment to every single line of generated code, because we have to remember that we are the user's line of defense. If we're putting this stuff in front of them without careful, critical consideration, we're doing them a massive disservice. I've mentioned, kind of jokingly, kind of not, kind of sadly, that if you're a performance consultant or an accessibility consultant or a security consultant, I actually think the next couple of years are going to be great for you. There's going to be so much stuff for you to clean up.

And so I'm also a little concerned about this concept of the agentic web. Some of this actually excites me, don't get me wrong. I think there are some really cool things.

There are definitely some tasks where, oh man, it'd be great if I never had to manually go through that process again. It'd be awesome.

What I worry about a little here, especially as we're starting to see the agentic web browser kind of thing come out, is that some of the examples haven't exactly been on the up and up, let's say. I haven't seen much yet from this entire area where I feel like what they're doing is anything we would call web friendly. I think it's mostly been against the grain of the web, at least in the browser concept.

Have you ever heard that one company's margin is another's opportunity? I think that's what's happening here a little bit. And the margin they're taking advantage of is the experience. It's the performance. We've seen this before.

It's a very common angle: other things come in when content gets commoditized and controlled, and it usually comes after the margin of experience and performance on the web. We've seen things like Facebook Instant Articles. We've seen things like Google AMP. We've seen that when there's a little door we leave open because the experience isn't as great as it could be, something else will try to come in and wedge its foot in. Yeah, it's not exactly playing nice. There was great data that one of the teams at Cloudflare put together. It's eye-opening and shocking and, frankly, a bit appalling.

The crawl-to-referral ratio. What this shows you is, for example, on the bottom you can see Google at 9.4 to 1. That means for every 9.4 times or so that Google crawls your site, you're at least getting one referral link back to your site. Okay. ByteDance is like 1.4 to 1.
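The metric itself is simple arithmetic: crawls divided by referrals sent back. A tiny sketch, with illustrative counts chosen to reproduce the ratios quoted (the function name and the sample counts are hypothetical):

```javascript
// How many times a bot crawls a site for every one referral visit it
// sends back; Infinity means all take and no give.
function crawlToReferralRatio(crawls, referrals) {
  if (referrals === 0) return Infinity;
  return crawls / referrals;
}

console.log(crawlToReferralRatio(94, 10)); // 9.4 (~Google's ratio)
console.log(crawlToReferralRatio(14, 10)); // 1.4 (~ByteDance's ratio)
```

The higher the number, the more lopsided the exchange: content is being consumed without traffic flowing back.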

Look at the AI stuff at the top. OpenAI: about 1,600 crawls to one referral back. Anthropic: 71,000 crawls of your site to one single referral back. In my opinion, we've seen not much evidence so far that they're particularly interested in pushing the web forward, so much as they're looking at it as purely a consumption thing that they get to commoditize and control. It seems a little lopsided, doesn't it? So I do worry a little. We have to be careful.

Again, I think there's potential, I think there's promise here. Somebody shared a talk, I think it's called The Death of the Browser, by Rachel Nabors. I watched it very quickly this morning. It's great. She has a much more optimistic take on this, but it's interesting: I think we're actually looking at the same thing, it's just that I'm jaded and she's not. I think there is an opportunity for these things to come together and play nicely, and I think we're going to get there. We've seen this game before. Quite literally, we've seen this game before. Remember text-based adventures? That's where I feel like a lot of the LLM stuff is right now. We moved on from this whole, oh, I look east, or whatever it is.

I don't know, I didn't play many of these. They're a lot better now, because we can do better than that. I think there is a future here where the ideas we're seeing from the agentic web, this task completion getting quicker, can work in a complementary role, together with the web instead of against it. So what's our counterattack here?

What's our fight back against the commodification of the web? I think we've got to close that door on the margin. I think we need to reframe a little how we're thinking about performance on the web. We need to be focused on how efficiently and how enjoyably users can accomplish the tasks they set out to complete. We need to set a higher standard.

Tammy, I could not love more that you used delight in your presentation, because I had delight in mine, and I was like, oh, are people gonna be like, "delight?" And no, it was so great. I think we need to have a higher standard for what we're trying to build on the web.

And I think we need to be aiming for uncompromising quality in that pursuit, for delight in those experiences. I think it's okay to aim that high, to go for something that can't quite be measured, but that we know when we feel and experience it. And that's only going to come with that stubborn empathy. The next era of web performance, the next era of the web in general, is going to be defined by our courage to apply that stubborn empathy, to build defensively, to ruthlessly vet the code that we're getting from these tools, to make sure that we're striving for experiences that aren't just quick but instantaneous, immersive, and that really put front and center the actual human beings who need to access the web.

I think this stubborn empathy is the definition of how we get to this next evolution of the web. And we have to do it, because the web depends on it. Thank you.

  • Layout Shift
  • Core Web Vitals
  • Monoculture in Browser Engines
  • Single Browser Engine Testing
  • Localhost Delusion
  • Progressive Rendering
  • Cross Browser Testing
  • Cross Browser Support for Metrics
  • Interaction to Next Paint (INP)
  • Largest Contentful Paint (LCP)
  • First Contentful Paint (FCP)
  • Telescope - Cross Browser Performance Testing Agent
  • AI in Performance Analysis
  • Generative AI Code Quality
  • AI-driven Code Editor Impact
  • Abstraction without Accountability in AI
  • Agentic Web
  • Crawl-to-Referral Ratio
  • Task Completion in Agentic Web
  • Reframing Web Performance