(upbeat music) - So I mean, just to get this out of the way first, it's gonna be really clear you're in the engineering room just from the quality of my slides.
There's no confusion with the design track. Yeah, and so hopefully it won't be too boring but we're gonna spend the next 45 minutes or so talking about metrics.
And hopefully, it'll be not just the raw details of the metrics, but sort of the backstory: which metrics work, which ones don't, why they don't work, and why we still don't have a perfect one. And, spoiler alert, there is no one number that tells you how fast your user experience is. There is no one number that Google can use to, say, rank fast sites, for example.
And depending on whether you're black hat or white hat, you'll also get out of it how to game each one of the metrics to make it look faster, in case your comp happens to be tied to making one of those metrics look faster. So, way back in the day, before Steve Souders' book in 2007, give or take, web performance was largely looked at from Ops as server response time: how quickly can my server shove pages out to the internet, or requests out to the internet? It largely didn't even care about the whole concept of a page.
It was, "This is my server, what's the requests per second it can spit out?" And it's actually still a fairly popular way for Ops teams to look at things.
That's pretty much all they care about, because that's the part of the world that they run: the servers. We need to be able to handle the load, we care about churning out requests per second. Users don't care.
Users care about how quickly my request comes back, not how much you can scale the back end. And so, there's an interesting problem with this, and it's becoming more of a problem these days. Nginx, for example, as a web server is designed to churn through requests per second as quickly as possible, basically to rank really high in the benchmarks.
To do that, they basically can't prioritise HTTP/2 correctly.
And so, Nginx HTTP/2 prioritisation today is completely broken, because of shoving data through. And so, this is one of those cases where, depending on what you're optimising for, you're kinda gonna shoot yourself in the foot in other cases.
And spoiler alert, the next talk, you're gonna be hearing all about HTTP/2 and why it's broken.
So I'm not gonna have to go too much into that. Steve Souders back in 2007, he's sort of the godfather of front end web performance these days.
He was at Yahoo at the time, created YSlow, but one of their big revelations was that the server response time we're talking about, the one the Ops team is sort of myopically focused on, is only 10% of the actual user experience.
90% of the time is spent loading all of the stuff that the HTML references.
It's a little scary, but 12 years later, that number has roughly held.
We go back every year to look at the HTTP Archive, it scans, currently, it's 2 1/2 or 4 1/2 million pages on the web. And it's still the case that only 10% of the time to load the content is spent loading the base HTML. And so, that was sort of the beginning of the shift into looking at the front end of the performance, the browser's view of the performance and how long it takes to load.
And so that's when we came into whoo, the age of the front end.
Time to First Byte is still important.
This is the time when the server responds with the HTTP headers, and maybe some of the HTML of the content.
And this was one of Steve's critical optimizations: if your HTML or your server takes 10 seconds to do all of the search and the backend queries and whatever else it needs, give the browser a head start. You know the static JS that all of your pages are gonna use, you know what the title is going to be.
So send that HTML first, give the browser a head start on fetching the JS and other things while you're still finishing the backend.
For Google, this is the backend for the search results. And if you look at a Google waterfall, it'll look very much like this.
There's a chunk of data at the beginning of the HTML response, and that's the static stuff that doesn't change, and then the actual search results, the backend calculations and everything else come in the second chunk.
But in the meantime, the browser can start downloading the CSS, the external scripts, your third parties and everything else that don't change.
And so, this was the early flush of the content where send the static stuff out quickly, get your First Byte Time back to the browser as quickly as possible and then worry about getting the rest out.
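A rough sketch of that early flush pattern, with `renderPage` and `getResults` as hypothetical names (the response object just needs `write()`/`end()`, like Node's `http.ServerResponse`):

```javascript
// Early flush: send the static head immediately so the browser can start
// fetching CSS/JS while the backend is still working. `getResults` stands
// in for the slow backend query (search results, etc.).
async function renderPage(res, getResults) {
  // Static shell: doctype, <head> with references that never change.
  res.write('<!doctype html><html><head>' +
            '<link rel="stylesheet" href="/app.css">' +
            '<script src="/app.js" defer></script>' +
            '<title>Search</title></head><body>');
  // The slow part happens AFTER the first chunk is already on the wire.
  const results = await getResults();
  res.write('<main>' + results + '</main>');
  res.end('</body></html>');
}
```

The static chunk buys the browser a head start on the CSS and scripts while the second chunk is still being computed.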
If you have a slow Time to First Byte it is guaranteed your web performance will be slow. The browser can't do anything until it has HTML to work off of.
And that's sort of, a fast Time to First Byte won't guarantee a fast experience, but a slow one will absolutely guarantee a slow experience. And so, it's still one of the core critical metrics that we look at.
And in the whole "Google search will optimise based on your web performance" discussion, one of the definite correlations you can easily see is in the First Byte time. I don't know if the new Search Console does it, the old Search Console definitely had it.
As the crawler crawls your pages, the faster your pages respond, the more content it can crawl.
And so, you'll see the number of pages crawled per day is higher when you have fast response times just for the HTML.
That's sort of the genesis of performance impacts your rankings and your SEO and search and all of that kinda stuff.
It's based on the HTML and the First Byte.
Page Load Time, this was an easy one for browsers. This is the onload event.
Before single page apps became a thing, this was largely when the content was complete. It's a good endpoint, and it's still used fairly often; when people say the Page Load Time took X, they're usually still referencing the onload event. It's just that the usefulness of the metric has somewhat declined since it was originally created.
Single page apps, Gmail, for example: the "we're loading your mail" spinner, or now the M for the mailbox animation, that's the onload event.
So the page load event, depending on what your content is, may or may not mean anything, it may be way faster than the actual user experience. And there are cases, retargeting pixels, ad trackers, that kinda stuff.
If all of those are at the end of your HTML, for example, the browser is gonna have to fire them all; even though the user is seeing your content and interacting with your content, all of the invisible stuff is still firing in the background.
In that case, the Page Load Time is way slower than the user experience.
When Google announced that speed was going to impact your ranking, that was the sort of golden age of gaming metrics and my favourite one was an accelerator CDN to remain unnamed for now.
And so, that's one of those cases where they gamed the metric, and it sucked for the user experience, because injecting the HTML after it was parsed and loaded removed the browser's ability to incrementally parse the page. But they could game the metric against their competition and say, "We can give you the fastest Page Load Time." And since Google never really announced what it cared about for performance, they could also say, "Hey, it's also gonna help your ranking."

DOM Content Loaded. This is about the browser's parser, the thing that goes through the HTML, the main instruction sheet for the content. As the parser works through the HTML, it executes any of the inline scripts and the external blocking scripts.
When it reaches the end, it will fire DOMContentLoaded. The DOM of the document is complete at that point, you can generally, the structure of your app is complete. The async images may not have loaded, you may not have any visible content yet, but structurally, your document is complete. This is still one of the better technical metrics as far as the user experience goes.
It's not perfect, because like I said, there might be nothing on the screen, there definitely won't be all of the images on the screen. But generally, the static application code has finished running and executing.
It's easy to game as well, though, similar to the Page Load Time, so you have to be careful. And it's pretty much the same in all of these cases: test your own sites, see where the metrics line up, see what the user experience looks like at that point, and determine if it's a useful metric.
Don't just sort of pick whatever metrics people are espousing at the time.
Every site's gonna have different ones that matter to them.

Page is Interactive. This has nothing to do with the current Time to Interactive metrics or any of that stuff. Back in the day, it was effectively the exact same as DOM Content Loaded.
One's before async scripts run, one is after async scripts run, but it's effectively the same thing.
So when monitoring companies, for example, tell you they're monitoring the time the page becomes interactive, generally they're measuring DOM Content Loaded. That's something you have to watch out for, and we'll get to what Time to Interactive and some of the newer interactive metrics are.
If they call those out specifically, then those are some of the more modern metrics, but generally, most of the monitoring companies are doing the old school, simple version: the browser fired DOM Content Loaded, or readyState became "interactive", so we're gonna go ahead and measure that.
It's really easy to measure, because it's well defined in the HTML spec, the W3C spec, but it doesn't necessarily mean anything for the user.
Which sort of brings us to User Timing.
So we were trying to figure out, how can we measure pages without necessarily knowing anything about the page? How can we determine across all pages if a page is fast or slow? User Timing was sort of the admission that we don't necessarily know what's important to applications.
But it gives you a unified way across sites to measure important keystones in the lifetime of the application.
And so, when you're building out your more interactive apps, your single page apps and things like that, if you start marking the points in the user experience, "I have hydrated my content at this point in time," for example, now all of a sudden you can have application specific metrics that actually mean something for your users and your application.
One of the big things that came out around this time, and Navigation Timing and User Timing all happened at the same time, was that the window.performance object was added to the DOM.
And prior to this, there was no start time; you had no idea when the request actually started, because your code only runs once the HTML has started to be delivered. And so, there were all sorts of crazy hacks to try and figure out, well, what was the zero time when the user actually started to navigate? If you were in a multi-page navigation flow, the previous page's unload handler would record a cookie that said, "Hey, this is the wall time when I was unloaded," so you knew when the next page navigation actually started. One of the big changes with Navigation Timing is that it gave us a start time, a zero time for the actual user navigation. And so, all of the User Timing marks and all of the window.performance timing measurements have realistic start times that you can measure from now.
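A minimal sketch of what those User Timing marks look like in practice; `app:hydrated` and the other names here are hypothetical milestone names, not part of any spec (this also runs in Node, which exposes the same `performance` API):

```javascript
// User Timing: drop named marks at application-specific milestones.
// Everything is measured relative to performance.timeOrigin, the zero
// time that Navigation Timing gave us.
performance.mark('app:start');

// ... fetch data, render, hydrate ...

performance.mark('app:hydrated');

// A measure is a named duration between two marks; RUM libraries can
// pick these entries up and beacon them back under consistent names.
performance.measure('app:hydration', 'app:start', 'app:hydrated');

const [hydration] = performance.getEntriesByName('app:hydration');
console.log(hydration.duration); // milliseconds between the two marks
```

Because every mark shares the navigation's zero time, the measurements line up with the browser's own timing entries.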
And so, this is one of the main uses for WebPageTest. If you're not familiar with webpagetest.org, it's a performance testing tool.
You put in a URL, some connectivity parameters, and you say run a test.
One of the main things it does is it records video of the page loading as well as the waterfall, and sort of the main thread activity and DevTools and things like that.
But you can compare the visual experience to the network experience to the browser events and see what was actually on the screen at the time that this event happened.
And so in this case, I'm assuming the Daily Telegraph is sort of like the Daily Mail in the UK, more tabloidy, but anyway, it worked as a great example. At DOM Content Loaded, all of the content was actually already on the screen, all of the above the fold content, for example. In this case, DOM Content Loaded was nine seconds.
Yeah, not great, but not the worst I've seen. Page load time was 30 seconds.
Visibly, the only difference is in the top right corner. There's a little bit, I don't know if it's a sign-in or something along those lines.
But it's not necessarily reflective of the user experience. And if you were to optimise for the Page Load Time, you might do things that actually delay getting the images and everything the user cares about loading. And so that's why you have to be careful, sort of what metrics you use as your goal metrics, that the behaviours you incentivize people to optimise for actually deliver the user experience gains you're trying to get.
Which sort of brings us into the renaissance of performance metrics.
It's when we started caring about, okay, those are interesting browser technical metrics, but they have absolutely nothing to do with the user experience. So we started devoting a whole lot of time and focus to the user experience, the filmstrip, the content and rendering: when does the content render for the user? And so, Start Render. This is the WebPageTest version of when things first displayed.
It's measured by recording the video: the screen is not white anymore.
The user first saw some piece of your content. It's a great marker for when the user experience started to transition.
It doesn't tell you anything about what the user saw, or how meaningful it is.
Unfortunately, it's one of those that can mislead you.
If you use a grey background colour on your page instead of a white background colour, Start Render might look artificially fast, and the page may still look completely blank when you're looking at it.
And so you wanna make sure you're calibrating these metrics and checking to make sure they're actually measuring something useful for you.
First Paint, and there's an "ish" on that line.
So First Paint is the browser's view of when the content first started to paint on the screen. The browser's rendering pipeline is kind of complicated, and becomes more complicated with GPUs and off-screen textures and things like that. Most of the browser code only knows when it laid things out and when it sent them to the GPU to do something with, to eventually get onto the screen. And so, the first paint that the browser records is usually really close to something showing up on the screen, but it's not unusual for the screen to be blank at the time First Paint happened, and then 100 milliseconds later the content actually shows up.

Probably also worth mentioning at this point: when you're looking at web performance metrics, millisecond-level granularity is never going to happen.
Maybe in aggregate, when you're looking at millions of people and you squint just right, you might be able to see it, but screens display 60 frames per second.
So at a minimum, you've got 16 milliseconds of jitter in your metrics just from when the screen refreshes, depending on when things started happening. In practice, things tend to be a lot more jittery than even that. So set your expectations: if you're trying to measure a 100 millisecond improvement, you may still have to squint pretty hard.
The noise level in most web performance metrics tends to be a lot higher than that.
RUM, Real User Measurement, is where you're measuring users' experience of loading the web in the wild, not in a lab. The noise levels there are significantly higher, because you're also a slave to user behaviour: who's loading the page at what point in time, the networks they're on, their WiFi signal, all sorts of crazy stuff.
You need a lot of users, millions, before you can start seeing that stuff even out. That's why real user and lab measurements go kinda hand in hand.
The lab metrics tend to be a lot more stable. But even in the stable case, you're still talking maybe 100 millisecond granularity.
So if your goal is "I'm gonna improve my performance by 10%," and you're already at 1 1/2 second Page Load Times, you're not gonna be able to measure a 10% improvement. Something to watch out for.
Speed Index. This was my metric, the one I created at Google, and it's unfortunately very, very difficult to explain.
And it doesn't always work really well.
But what it tried to do was incentivize people to get as much of the content on the screen as quickly as possible.
And so, we saw with Start Render, if it's like a grey background, or just like a little pixel somewhere, wa-ha Start Render happened and you get full credit for it, no matter what was displayed.
What Speed Index tries to do is look at how much of the screen is painted as early as possible. If you can render all of your content except for the Sign In button early, you'll get 99% of your credit for the early rendering content, and you'll just get dinged a little bit for the Sign In button showing up late.

The way it does that is it calculates the visual progress of the page loading across time. For each frame, at 10 frames per second or whatever frame rate it's capturing at, it goes, "Okay, this frame is 5% complete, 90% complete, 100% complete." And then it takes the area above that curve. Someone who knows math way better than me explained it as the average time that a pixel reaches its final state. So if you look at the fully rendered version of the page, and for each pixel you average the time it reached its displayed state, that's what Speed Index gives you.
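The calculation itself is simple once you have visual-progress samples. A sketch of the area-above-the-curve math (not WebPageTest's actual code; frames are `[timestampMs, progress]` pairs, with progress held constant between samples):

```javascript
// Speed Index: the area above the visual-progress curve, i.e. the sum
// over time of (1 - visual completeness) in each interval.
function speedIndex(frames, endTimeMs) {
  let si = 0;
  for (let i = 0; i < frames.length; i++) {
    const [t, progress] = frames[i];
    const next = i + 1 < frames.length ? frames[i + 1][0] : endTimeMs;
    si += (1 - progress) * (next - t); // incomplete area for this interval
  }
  return si;
}
```

A page that stays blank and pops everything in at 5 seconds scores 5000, while one that reaches 90% complete at 1 second and finishes at 5 seconds scores roughly 1400: rendering more pixels sooner is rewarded directly.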
And so, with more of the pixels sooner, you get a much better Speed Index than if you have a blank screen all the way until the end and then pop in all of your content; that will give you a really slow Speed Index. The main problem with this, and with a lot of these, is that I hand-waved around how you know how visually complete a page is at any point in time. Speed Index, Visually Complete, which you'll see in a second, and a lot of the metrics that depend on knowing how complete you are at a point in time take a snapshot of the screen at the end of the page loading, assume that is the final state, and then compare everything along the way to that final state.
That works great if you've got a static page. If you've got a video, or an image gallery where the hero image rotates every 10 seconds and says, "Take a look at this, you need to buy this," you end up with randomised end states, where maybe it's the second image in the carousel that gets captured at the end of the test, and you lose your credit for displaying the first image in the carousel throughout the loading process.
And so, dynamic pages that don't sort of load to a static end state are a really weak point for Speed Index in particular.
The other one that tends to toss it up quite often is interstitials.
The "you haven't clicked OK on the damn cookie button" big banner that shows up in the middle of the page, usually 30 seconds after you started reading the content. If the test ends at the point the cookie banner comes up, it's gonna say, "This is the content you were trying to show the user," and you'll lose all of the credit for actually showing the content up until that point.
There are two flavours of Speed Index.
SSIM, I think, is the one; I don't know that they're named differently. But Lighthouse, which is Google's web performance auditing tool, uses a version of Speed Index that doesn't give you credit if the pixels move on the screen.
So if you have all of your content loaded and then an ad gets inserted at the top and everything shifts down 10 pixels, you'll lose credit for all of the content that was displayed up until that point in time, because the pixels aren't in the same place. So it's sensitive to position on the screen. The version of Speed Index that WebPageTest uses doesn't care about where things are on the screen. You can move things around, adjust when you get an ad banner, or whatever, and you'll still get full credit for the content that was displayed up until that point in time. It depends on how you feel about content moving under the user while they're trying to read it; they're trying to click a button, and all of a sudden they clicked on the ad. Business loves it, users not so much.
One of those might be more useful for you than the other. They're sort of two extremes, though; not giving you any credit at all seems kind of painful.

Visually Complete is very similar, in that it uses the same progress calculations that Speed Index does, but it basically goes, "The first point in time that we reach 100% complete, go ahead and call that Visually Complete." There are also variations on that, 90% Visually Complete or 95% Visually Complete, if you wanna use a threshold that says, "You know, I don't care about the last button, the last pixel being rendered; I want to know when most of my content is displayed." And it gives you a point in time metric; one of the difficulties with Speed Index is that it's an average.
So if you get a Speed Index of, like, 2,500, which is 2 1/2 seconds, and you look at the filmstrip at 2 1/2 seconds, you may well see nothing, because it's not a point in time metric; you can't point to it and say this happened at this point in time.
Whereas with Visually Complete, you can say it was 90% complete at four seconds; look at the filmstrip, and there you've got your 90% complete image.
And so you've got a point in time that you can reference. The reason the first 100% matters for Visually Complete is that it's not unusual for sites, like the gallery sites, or sites that display something and then it goes away, to reach 100%, drop down to 40%, and go back up to 100%.
And so, the first time you reach all of the content being available is what we generally refer to as Visually Complete. And like I mentioned, by default WebPageTest will pull out 99, 95, 90, and I think 75% Visually Complete if you want to use them as metrics.
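A sketch of that threshold lookup, reusing the same visual-progress samples Speed Index works from (`visuallyComplete` is a hypothetical helper, not WebPageTest's code):

```javascript
// Visually Complete at a threshold: the FIRST sample time where visual
// progress reaches the threshold. Taking the first time matters because
// carousels can hit 100%, drop back down, and climb to 100% again.
function visuallyComplete(frames, threshold = 1.0) {
  for (const [t, progress] of frames) {
    if (progress >= threshold) return t;
  }
  return null; // never reached the threshold during the test
}
```

A page that hits 100% at 3 seconds, drops to 40% when the hero rotates, and recovers at 8 seconds still reports Visually Complete at 3 seconds.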
But you're also welcome to sort of use whatever you think makes sense for yourself and for your site in particular.
And again, this is not something that's going to be universal.
You can't just say, "Okay, me and my competitors, I'm gonna measure us both to 95% Visually Complete." You largely need to test your competitors and make sure 95% Visually Complete, or whatever metric you're looking at, actually means something for the site you're comparing to. And this was one of the big problems.
So when we came out with a lot of these metrics, I don't know if any of you remember PageSpeed Service, but it was Google's attempt at a hosted accelerating proxy that ran in front of your site and would make it 10% faster or 50% faster. We were trying to quantify, without knowing anything about the sites, how much faster a site is when it's run through the PageSpeed proxy. That's where the Speed Index and Visually Complete metrics all came from. Unfortunately, they have their weak points. And so it was like, well, we'd have to look at each site anyway, because do we know if it was faster because there was an interstitial or not, or did it not get faster because there was something going on that was a weakness in the metric?

And so, this is the rotating content page, for example. Chloe, the website, it's real.
The mobile version of the website, it's basically a big giant image behind a little bit of text.
And that big giant image changed.
And so, the final state had the second version of the image loaded.
Even though all of the content was loaded at 18 seconds, which is the first box, it only got 47% credit, and it's even that high only because some of the colours happened to match the colours in the 22 second version of the image. This is another reason it helps to look really carefully at the filmstrips, and not just at point in time metrics. As far as the user experience goes, you'll notice that between 19 and 22 seconds, it kinda looks like crap.
And that's because the gallery rotates images regardless of the other images actually having loaded yet. It's just on a timer, and so it changes what image is being displayed.
And so, you get this case where the image that was completely loaded is getting slid out of the way and the image that was replacing it hasn't actually loaded yet.
So you have kind of a torn image, a white that you're looking at, and it's kind of a crappy user experience.
I don't think I have yet seen a gallery that waits for the background images to have loaded before it swaps them in.
So if you feel like creating one, I think everyone, especially in the WordPress community, would love you for it. 'Cause they're the ones that pick up all the plugins and just use them, and aren't necessarily aware of what the experience looks like.
More modern versions of some of the rendering metrics: I don't know if you've heard FCP, First Contentful Paint, bandied about; the Chrome DevRel team has definitely been pushing it fairly hard. Instead of first paint, which could be a random background, First Contentful Paint is when either text or an actual user image was loaded and displayed on the screen. The background colour of the page or of elements on the page, the rough layout of the page, won't matter and won't get credit.
This is when the first piece of content that maybe the user actually cared about was drawn to the screen.
There's been a lot of push behind it, mostly because it's a relatively useful metric that's really easy to measure, and it's really easy to guarantee that it means something similar across all sites.
So the Chrome User Experience Report, if you haven't looked at it yet, Chrome beacons back performance data for all of the sites that people visit, if they've opted into sharing data with Google. And then the Chrome User Experience Report basically aggregates all of that data by domain. And so, you can see your page and your competitor's page and look at the distribution, the histograms of some of the performance metrics between both of your sites, so you can get real user performance data for you and your competitors.
And First Contentful Paint is one of the more useful metrics that it includes in the Chrome User Experience Report. And if you're doing any performance measurements, I definitely recommend either capturing it if you're doing RUM directly, it's in window.performance as a browser visible metric, as well as in synthetic testing tools.
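In a RUM setup, FCP is typically picked up with a PerformanceObserver. A sketch (`onFCP` is a hypothetical helper name) that returns false where the paint entry type isn't available, such as older browsers or Node, so callers can fall back to other metrics:

```javascript
// Observe First Contentful Paint and hand its timestamp to a callback.
// Returns true if observation was set up, false if unsupported here.
function onFCP(callback) {
  if (typeof PerformanceObserver === 'undefined' ||
      !(PerformanceObserver.supportedEntryTypes || []).includes('paint')) {
    return false; // no paint timing in this environment
  }
  new PerformanceObserver((list, observer) => {
    for (const entry of list.getEntries()) {
      if (entry.name === 'first-contentful-paint') {
        callback(entry.startTime); // ms since navigation start
        observer.disconnect();     // FCP only happens once
      }
    }
  }).observe({ type: 'paint', buffered: true }); // buffered catches early paints
  return true;
}
```

The `buffered` flag matters: FCP usually fires before your analytics script has even loaded, so you ask the browser to replay the entry.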
First Meaningful Paint.
I know there's been a lot of talk and push for it. It's supposed to be: after the biggest layout of the page, when was the next paint that happened? In theory, it's supposed to capture when the main user content displayed on the screen. It was a great theoretical metric, and I say was, because even though it was implemented three or four times in Chrome, no one's been able to figure out how to get it to measure reliably.
The biggest layout may not actually be a meaningful layout; it may just be juggling around all sorts of stuff that was already on the screen.
Or even after it's done the layout, it may not actually be displaying any user content in the next paint event.
And so, it's a great sort of North Star Metric that they sort of refer to in the measuring the user experience talks and things like that. But the technical implementation, there hasn't been one yet that actually works reliably.
So you can pretty much ignore anything First Meaningful Paint related.
It's not a useful metric as things stand right now. It's had a few years of trying, I think they've moved on to newer metrics, and off of that one.
And so, during this sort of user experience renaissance, we were very, very heavily focused on rendering metrics, getting as much content to the screen as possible. Single page apps, frameworks were all really big as these were coming out.
I still think it's good user experience behaviour to get more of the content to them sooner.
But the downside was, we delivered the HTML, we showed the content to users, and then they would furiously tap on the content or click on it and not be able to do anything, because the main thread of the browser was completely hung trying to do all of the hydration.
And so, my favourite example of this, if you ever look at it, and this is actually a fairly good one, relatively speaking, is The Daily Telegraph.
So eight seconds, roughly, which is like here.
We sort of incentivized delivering the content visibly to the user quickly, but the content's not actually usable.
And so, in the past couple of years, there's been a lot more focus on, okay, when is the content visible and useful to the user? When can they expect that when they tap and click on something your app is actually going to respond and be useful for them.
And so, that's where Time to Interactive came from. It sort of evolved from what they called long tasks: any time the browser's main thread is doing something for more than 50 milliseconds, it's considered a long task.
Basically, it's not responsive to the user tapping or trying to do something immediately.
The browser calculates all of those and provides them as a list of long tasks that are available afterwards, and it starts counting after the page has actually rendered something. And so, there were a bunch of interactivity metrics that came out of the long tasks.
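Most of those metrics boil down to scanning that long-task list. A simplified sketch of the "find a quiet window" idea behind Time to Interactive (real TTI has more rules, network quiet among them; this assumes tasks sorted by start time, given as `[startMs, durationMs]`):

```javascript
// Find the first point after firstPaintMs that begins a quiet window of
// at least quietWindowMs with no long tasks in it.
function timeToInteractive(longTasks, firstPaintMs, quietWindowMs = 5000) {
  let candidate = firstPaintMs;
  for (const [start, duration] of longTasks) {
    const end = start + duration;
    if (end <= candidate) continue;                // task finished before candidate
    if (start - candidate >= quietWindowMs) break; // quiet window found first
    candidate = end;                               // push candidate past this task
  }
  return candidate;
}
```

With long tasks at 1s and 2s after a 500 ms first paint, the candidate gets pushed to the end of the last task (2.3s); a lone task way out at 8s leaves the earlier quiet window intact.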
And so in this case, down below the main thread activity WebPageTest will show you the interactivity measurement of the page.
When it's red, it's not interactive, when it's green, it is interactive.
And so, it hopefully makes it easy to eyeball why Time to Interactive is as late as it is. The main problem, and this is one of my main peeves with Time to Consistently Interactive, or TTI, is that you can have a single 100 millisecond slow task in your onload handler.
The odds of a user actually tapping at that exact time and hitting that 50 millisecond window are close to zero. And so, it's one of the really weak points in the Time to Interactive metric: if you happen to have one slow task way late, it's gonna make your overall time look way later and not represent the user experience at all. This is another one of those cases where, hopefully, when you know what it all means and you look at the waterfalls, you can see the red bar at the bottom.
If you see a lot of red and a lot of main thread activity, you really have a problem.
If you have a lot of green down there and there's one red thing all the way at the end, I wouldn't worry about it; no one's gonna ding you for it. Alex may complain, but your search ranking's not gonna get hurt, and your users aren't gonna notice.

First CPU Idle is very similar to Time to Interactive. I think in WebPageTest I called it First Interactive for the longest time. It's basically the first point in time where we had a five second window where the main thread was interactive.
And so, this is the case where if you added a six second sleep late, or whatever, you'll still have a fast First CPU Idle, and I think it represents the user experience a little better.

Estimated Input Latency is unfortunately something I can't show on a graph. But it's a calculation of, in the time after the render or DOM Content Loaded, what's the probability that when a user taps on something they're going to have a delay, and how long is that delay likely to be? The Estimated Input Latency when you have mostly green is gonna be effectively zero. And so this is a much better take on a metric than Time to Interactive.
First Input Delay. This is a field metric; I believe it's in the Chrome User Experience Report. This is how long the user actually had to wait when they clicked to interact with your page. It's definitely a very useful one: it's the actual measured delay.
It doesn't tell you at what point in time that happened; it's just, when the user tapped on something, how long did they wait for that tap to register. And so, we still have all of these metrics, they still have all sorts of crazy rough-edge weaknesses, and ways to game them if you feel so inclined. Where we're moving towards: SpeedCurve, which is a monitoring company, created Hero Timings, which will look for the main hero image on your page, or the H1 element, and measure the point in time when those got displayed.
Largest Contentful Paint is Chrome's latest iteration on First Meaningful Paint.
And it's basically: when did the largest paint of content on the screen happen? They're just launching it now, so we'll have to see how useful it is.
There's still a good chance that it'll be like a big giant ad banner going down the side of a desktop page or something which will have absolutely no usefulness, but it's another metric in the toolbox.
Total Blocking Time, as far as interactivity goes, is potentially more useful than either of the other Time to Interactive metrics.
It basically takes all of the red bars, mashes them together, and goes, "Okay, you've got 200 milliseconds of main thread blocking after a nine second Start Render."
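The arithmetic behind that mashing-together is simple; a sketch, where only the portion of each long task beyond the 50 millisecond threshold counts as blocking:

```javascript
// Total Blocking Time: sum, over all long tasks, of the part of each
// task's duration that exceeds the 50 ms responsiveness threshold.
function totalBlockingTime(longTaskDurationsMs, thresholdMs = 50) {
  return longTaskDurationsMs
    .map(d => Math.max(0, d - thresholdMs))
    .reduce((sum, blocking) => sum + blocking, 0);
}
```

So one 250 ms task contributes 200 ms of blocking time, while four 51 ms tasks contribute only 4 ms in total, which matches the intuition that one big hang hurts more than many barely-long tasks.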
The two main key takeaways: you really need to pay attention to how you're measuring your metrics. Synthetic, or in the field? What metrics are you measuring? But more importantly, are they driving the right outcomes, the right incentives? As we've seen, we thought we were doing really well when we were pushing people to look at visual metrics, the visual user experience rendering; then server side rendering became a thing, and all of a sudden it was bad for the user experience, right? Same goes with the Page Load Time and gaming the Page Load Time.
If you're paying someone, and you're holding their annual goals to making this number faster, does it actually improve the user experience? That's the core takeaway you need to watch out for on your metrics, because otherwise, if someone's income is tied to it, they're going to game it if they can't actually make it faster.
Thank you, I'll be around if you want to chat. (audience applauds) (upbeat music)