Performance is About People, Not Metrics

(lively music) - I ate a lot of carbs.

I forgot to drink coffee afterwards.

If you're wondering, let me actually go back for a second. This talk is about people, so it might seem like there should be people on the cover slide.

Why didn't I put photos of people on my cover slide? Because when you look for images on the internet or stock photo libraries of people who are frustrated by using the internet, you get stuff like this.

If you want any of these images, these are the agencies where you can get them from. I chose not to.

Because I've been doing these kinds of talks for a lot of years, I've seen a lot of these kinds of images.

Probably a better image would have been my face when I got this at my hotel, whenever I tried to log on.

One megabit per second.

Like, I thought it was a typo but it's true and I've been experiencing this reality for the past couple of days.

You people are brave people.

You're in Australia.

A little bit about me.

For the last almost 10 years, I've been working at the intersection of user experience research and web performance.

Basically, the speed and availability of web pages. And business metrics, because at the end of the day, everybody wants to know: okay, great, I'll make my pages faster and my users will be happier, but who cares if it doesn't help my business? I've been working on that for many years.

Kinda going back a little bit further, my undergrad degree is in English Literature and I have a Masters in Publishing.

Of course that leads to here obviously.

There's a really direct path there.

Actually, when I was studying literature, I studied literary theory and literary criticism. How many people here have studied literary theory or literary criticism in any way, even as amateurs, as lovers of literature? Okay good, nobody's gonna ask me how much I actually remember.

There are a lot of different types of literary theory out there.

There's like structuralism, post-structuralism, modernism, post-modernism, formalism, Russian formalism.

The type of literary theory that really resonated with me back in my student days was one called reader response theory. It was very, very political.

You could get into a yelling argument with people who care about these things, and I know because I have, over the claim that it's the only type of literary theory that actually focuses on how people, readers, engage with the text and seeks to understand that.

A lot of academics hate reader response theory because it's really nebulous and hard to measure. I think that my early fondness for that particular, very difficult to measure way of studying literature has sort of led me to where I am today.

I really wanna understand how people use the web. I find the web fascinating.

I was an extremely early adopter back when the internet was grey and had one font and no pictures.

This idea that we can at least attempt to study the internet, and how people use it, is endlessly fascinating to me. My early days as a user experience researcher involved sitting in a lab that I built at the first web consulting company I worked at, 20 years ago, where we just had a few computers, some Macs that would make you laugh if you could see pictures of them today.

They were state of the art once too, I should point out, but still.

We would invite people into the lab, give them sets of tasks, and watch them perform those tasks. It was so fascinating just watching people. You could design a site and think, okay, this is how I think people are gonna move through it. But watching how people actually do the things you ask them to do, as anybody in here who's done user experience research and usability testing will tell you, is a really humbling experience. There's how you design the system, and then there's how people use the system, and they're very different things.

Over the past 20 years, my work has involved larger and larger data sets. In the early days, we'd be lucky if we got 20 people to come into the lab.

We thought that was a really great test.

Now, I do research that involves looking at billions and sometimes even tens of billions of user sessions, gathering all the data and slicing and dicing it to try to understand how web performance, the speed and availability of pages, affects people. I'm Chief Experience Officer at SpeedCurve. We're a performance monitoring company; we do synthetic and real user monitoring.

We've worked for companies like Forbes and Expedia, so a lot of retail and media sites amongst others. Actually, a quick show of hands.

How many people here have done synthetic monitoring or use synthetic monitoring tools? So a few, okay.

How many people here have done real user monitoring or RUM? Okay, it's about the same.

A lot of the studies I'm gonna be talking about involve synthetic testing and real user monitoring. I also maintain a site called wpostats.com, which is a collection of industry case studies, not just ones that I've worked on, showing correlations between web performance and conversions, revenue, bounce rate, or any number of other user engagement and business metrics. Yeah, I wrote a book for O'Reilly.

I'm putting it here just to show it's real, it's an actual book.

You don't have to buy it, although if you want to, that's dynamite.

When I talk about myself as a performance... I hate the word evangelist, I hate the word guru, and I hate the word expert as well, so if somebody's got a word that just means an enthusiast, somebody who focuses on this topic, that's me. I'm not a developer, I'm not a designer, I'm not an engineer. I kind of consider myself a performance care bear.

I just walk around, I research, and then I sort of shoot user experience and performance caring out into the world, and I hope that some of it lands on people and sticks. Why care? Why should we care about how fast pages render? Why isn't four, five, six, seven, eight seconds just fast enough? Why shouldn't people just suck it up and get over it? The web is slow, let's go do something else. I won't read this out, but I really love this quote. It was a reader comment on a New York Times article that you can look up; the title's at the bottom. I don't know if the comments are still there. I grabbed this quite a while ago.

When the idea that web performance is meaningful and something people should care about hit the mainstream several years ago, the New York Times picked it up, and everybody who worked in my industry was like, oh my god, the New York Times is writing about web performance, it's amazing. Their writing about it made it real, a real thing that people should care about. The comments, though. I don't know who here reads comments; I do, sometimes to my chagrin.

The comments were kind of the gamut of like, yeah, I hate slow web pages too.

They really suck, they make me angry.

To people who kinda say, people need to chill out. Where does this sense of entitlement come from, that you expect the internet to be fast? It's not all about you; you don't get what you want when you want it. This is my pick of that particular collection of quotes.

I use this to say that performance isn't just about a sense of entitlement. There are actually hardwired reasons, at a neurological level, to do with how we perceive pages loading. This is not something we have control over; our brains are hardwired to stutter whenever things we're engaged in get delayed.

Human beings, we've been around for a long time. Hundreds of thousands of years.

We've done things like hunting, I'm not an expert on what we used to hunt.

I don't know, mastodons? Let's say mastodons.

Hunting mastodons is one.

Cooking the mastodons over an open fire.

Gathering berries and shoots so that we could eat them. Drawing on the insides of caves.

The thing that all these activities have in common is that they are sequential flow activities.

You do them without any interruption.

And computers came along, 40 or 50 years ago, as we kind of know them today.

That's the first time in our entire massive history that we've been given devices that we can't interact with in this seamless, sequential flow state, and it's really, really hard. Here's an interesting thing.

The average web user, according to a survey that was done a few years ago, believes that they wait a total of two days a year for pages to load. Is that actually true? Probably not.

Two days a year works out to roughly eight minutes a day, and I don't think I wait eight minutes a day for pages to load, and I use the internet a lot. The fact is, this is a feeling that people have. It's a perception that they carry around with them, something they take into every interaction they have online.

They perceive your site as slow almost before they've even experienced it.

They expect the internet to be slow.

Are they happy about it? No.

Web stress is a term that was coined a few years ago. I didn't coin it, I wish I had.

This is a study that was done by CA Technologies in 2011. They hooked EEG headsets up to people and gave them tasks to do on desktop computers. They didn't tell them what the study was measuring, but gave them these tasks and then artificially throttled the experience people were having.

Some people just got a slower experience overall, not by a huge margin, maybe a second or two per page.

What they found is that concentration spiked whenever pages slowed down; people had to work 50% harder to stay on task. They had to concentrate 50% harder.

I don't know if there are any neurologists in the room or neuroscientists in the room or people who are just neuroscience enthusiasts in the room, but every time you need to increase your level of concentration or make more choices about do I stay on this page, do I not stay on this page, your brain uses something called glucose to fuel itself.

And you're using that at an increasingly high rate whenever you are frustrated, whenever you're exposed to slow web pages.

I was working for a company called Radware a few years ago, working with them on another neuroscientific study where we hooked EEG headsets up to people.

This was inspired by the CA Technologies study. We wanted to see whether we could replicate the results that CA found, this time for mobile users. The reason we did this was that we wanted to put to rest the idea that people should expect mobile sites to be slower, that people just have a different set of expectations on a different device.

What we found, oops, sorry, wait a second.

My slides are out of order.

I'll get to that in a second.

First let's talk about this.

Rage clicks.

Something that you should know about: a rage click is a series of rapid clicks in the same place.

People are just pounding your pages.

This is something you can actually measure and I invite you to check out this blog post that talked about this in more detail.
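To make that concrete, here's a minimal sketch of what rage-click detection can look like, assuming "rage" means three or more clicks on the same element within a second. The thresholds and the idea of beaconing to an analytics backend are illustrative, not taken from that blog post.

```js
// Minimal rage-click detection sketch. Thresholds are illustrative.
const CLICK_WINDOW_MS = 1000; // how close together clicks must be
const RAGE_THRESHOLD = 3;     // how many clicks count as "rage"
let recentClicks = [];

document.addEventListener('click', (event) => {
  const now = performance.now();
  // Keep only recent clicks on the same element.
  recentClicks = recentClicks.filter(
    (c) => now - c.time < CLICK_WINDOW_MS && c.target === event.target
  );
  recentClicks.push({ time: now, target: event.target });

  if (recentClicks.length >= RAGE_THRESHOLD) {
    // In a real RUM setup you'd beacon this to your analytics backend.
    console.warn('Rage click detected on', event.target);
    recentClicks = [];
  }
});
```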

Phone rage. I did coin this phrase a few years ago, when I was looking for some stats on how people react to slow sites.

What's really interesting is that 62% of people said they behave more or less normally whenever they encounter a slow site. But all the rest admitted to things that might be considered unseemly behaviours, ranging from throwing their phones to cursing at their phones to screaming at their phones.

All of these things, people admitted to doing when they're confronted with slow sites on mobile. I just like this stock image a lot. It's not just the guy in the foreground throwing his phone, even though he's in a bullpen office space and clearly doesn't care if he nails somebody's whiteboard or takes out a window. It's the woman in the background who just does not care. She's like, oh, there goes Barry again throwing his phone, it must be Wednesday.

Slow websites and apps, they suck.

They suck for your visitors and they suck for your business.

This is a chart that some of you may have encountered before if you've been looking at performance as it correlates to user engagement business metrics.

This is a histogram that shows all of a site's traffic broken out into these blue bars or cohorts of different page render times.

You can ignore the orange and green bars and just focus on the pink one.

As you can see, most people are getting pages that load between about one and six seconds, and what you can see is that bounce rate goes up as pages get slower. There are a lot of different versions of this chart available out there from different monitoring tools.

What you need to take away from this is that I could grab user data from pretty much any site out there, maybe your site, and replicate this kind of chart fairly easily. It's incredibly consistent.

In fact, I can't even think of the last time I saw a chart where bounce rate flat lined or went down as pages got slower.

Consistently, as pages get slower, bounce rate increases. If you have the right data, you can generate these kinds of charts for conversion rate, revenue, and all kinds of other metrics.
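To make the mechanics concrete, here's a rough sketch of how you might build this kind of chart from your own RUM data, assuming each session records a render time and whether the user bounced; the session shape here is hypothetical, not any particular tool's API.

```js
// Bucket sessions into render-time cohorts and compute bounce rate per
// cohort. Sessions are assumed to look like { renderTimeMs, bounced }.
function bounceRateByCohort(sessions, bucketSizeMs = 1000) {
  const cohorts = new Map();
  for (const { renderTimeMs, bounced } of sessions) {
    const bucket = Math.floor(renderTimeMs / bucketSizeMs); // 0 = 0-1s, 1 = 1-2s...
    const c = cohorts.get(bucket) ?? { total: 0, bounces: 0 };
    c.total += 1;
    if (bounced) c.bounces += 1;
    cohorts.set(bucket, c);
  }
  return [...cohorts.entries()]
    .sort(([a], [b]) => a - b)
    .map(([bucket, { total, bounces }]) => ({
      cohort: `${bucket}-${bucket + 1}s`,
      sessions: total,
      bounceRate: bounces / total,
    }));
}
```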

These are some case studies.

I won't read them out to you.

These are big companies, but I've worked with smaller companies that, again, when they monitor their real users and do these correlations, find that making pages faster or slower by a second, sometimes less than a second, half a second, 100 milliseconds, can significantly affect how much people use their site, how much people buy, and overall revenue. If you're interested, you can look up these stats and a lot more case studies at wpostats.com. If you have a great case study that you'd like to recommend that you don't see there, there's a link to submit it through GitHub.

Just some other areas of impact.

I'm not sure how well you can see this at the back. This is a study that I was involved with several years ago where we measured the impact of delays over time. We did something similar to the neuroscience studies; we didn't strap EEG headsets onto people, but we did artificially throttle the experience they were having.

We measured their engagement with one particular site over the course of 18 weeks.

For 12 weeks, we broke the users out into different cohorts: people who had an optimised experience versus people who experienced 500 millisecond and 1,000 millisecond delays.

We ended the throttling at 12 weeks, but we continued the experiment for six more weeks, because we wanted to see how all these people returned to the site. What were their return rates? What was really interesting was that the groups that had received the throttled experience came back at a much lower rate; basically, their retention rate dropped.

Even though the site sped up again and they were all getting the same optimised experience once more, they did not return at the same rate.

A more interesting study would have been to keep monitoring this in perpetuity and see if those lines ever did re-converge, but we ended it at 18 weeks.

Here we go, this is the EEG study.

Just a little bit more about this.

We had people participate in a few different tasks on eCommerce sites.

One was a shopping site, and one was a travel site where they could book travel and make purchases.

They thought they were just participating in a simple usability study.

They didn't know that half of them were getting throttled experiences and half of them weren't.

As they were leaving the study, we did standard exit interviews with them and asked: what were your impressions of the site? We took all of the adjectives they gave us in the interviews and dumped them into two buckets, one for the slow group, one for the faster group, and we generated word clouds with the words that popped up most often.

A few interesting things to notice here.

First thing that jumps out at me is the slow group had a lot more to say.

They wanted to talk more about the site, and again, they didn't realise that they were in the slow group. They were getting the same site and doing the same transactions as the fast group. "Slow" for them was only a couple of seconds slower per page, if that.

They noticed that things were slow and they had more to say about it.

But what was really interesting was that they didn't just notice that the site was slow. They also thought it was boring, hard to navigate, complicated, inelegant, frustrating, confusing, clunky.

These aren't performance words.

These are brand words.

These are words that actually speak more to design and navigation and content. Somehow, even though all other things were equal, and the only difference between their experience and the fast group's experience was speed, their negative perception of the site as slower affected how they felt about the brand, the content, the design, et cetera. This was a usability test.

Everybody's got negative things to say.

The fast group had some negative things to say as well, but not nearly to the same extent.

Yes, slow websites and apps, they suck.

We know this, we know it intuitively, and the research tells us this. But we get into this big hairy problem where we're tasked with trying to define "slow", or how fast is fast enough.

And define suck.

It's really easy to say, the site's too slow, it sucks. But what does that actually mean? Is it measurable? Can you quantify it? Can you quantify it at scale, and can you do it in a way where you can extract something meaningful for your business, and for yourself in your day-to-day job, if your job is to optimise your pages?

How do you know what you're optimising for and what your goals are? I love this quote from Steve Souders, and not just because I work with Steve now at SpeedCurve.

I've been using this quote in talks for quite awhile and I'm gonna read it out loud 'cause I just love it.

"The real thing we're after is to create a user experience "that people love and they feel is fast.

"And so we might be front-end engineers, "we might be developer, we might be ops, "but what we really are is perception brokers." I really like that.

A few of the other talks yesterday and today that really resonated with me were about humanising technology: realising that it's not just a bunch of pixels on a page, or a lot of junk behind the browser. At the end of the day, we're people building systems that have to be used by other people. There's this feedback loop, and the machines are just an intermediary.

The software is just an intermediary.

Ultimately, it's about people, and how we as people try to understand how other people perceive our sites.

It's a massive empathy exercise.

Empathy is really, really hard.

Empathy, one-on-one, is really hard.

Trying to scale empathy is almost impossible. How do we measure perception at scale? Maybe a little show of hands.

How many people here have, and it's okay to put your hand up, it's not a shaming exercise or anything like that, I promise.

How many people here have used one or more of these performance metrics as a stand-in for trying to understand user experience, how people engage with your site? Yeah, me too.

There's a lot of different metrics out there and finding the right one is really, really hard and that's what I'm gonna talk about for the next little bit.

I'm gonna kinda focus on the ones that are in yellow. Not that that matters at this point.

Let's go back for a second.

There are a lot of different metrics here.

Looking at all of these, there's like 20 of them, which one comes closest? Sorry, actually, we will focus on these yellow ones for a second: speed index, visually complete, page load, and start render.

Those are the four that I probably encounter most often being used as stand-ins for user experience.

Here's a chart that shows those metrics. Sorry, three out of four of them, plus backend time. Backend time is a terrible measure of user experience; it's really important for other things.

I should say, all those metrics on the previous slide are good metrics.

They're just not good metrics for user experience. I'm not telling you not to measure those things; I'm just saying don't conflate them with user perception. This particular slide is a filmstrip view of a page rendering that I generated using SpeedCurve.

What's interesting here is you can see that sometime between 11 seconds and 12 seconds, the page renders.

Looking at all these different metrics, I just focused on backend, start render, page load, and speed index.

For those who aren't familiar with these, I'll be talking about them more in a minute and explaining what they are.

None of those actually lands between 11 and 12 seconds.

There's something broken in the way that many of us are currently using metrics to correlate to user experience.

Let's talk about those metrics for a second. Load time.

Load time is, just to give a quick history lesson, the first metric that people used to try to understand how people experienced sites, and it's been around for quite a long time. When I first got involved in the performance space almost 10 years ago, load time was all anybody talked about, because it was there; it was easily measurable in the browser.
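Just to show what "measurable in the browser" means, here's a minimal sketch using today's Navigation Timing API; tools of that era read the same number from the older performance.timing object, but the idea is identical.

```js
// Read the page's load time once the load event has fired.
window.addEventListener('load', () => {
  // One PerformanceNavigationTiming entry describes the current page.
  const [nav] = performance.getEntriesByType('navigation');
  console.log(`Load time: ${Math.round(nav.loadEventStart)}ms after navigation start`);
});
```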

Synthetic tools that existed at that time, like WebPageTest, which I'll be talking more about in a moment, could extract it and show you your load time really easily, so it was very accessible, which is great. The problem is that over time, pages got more and more stuff on them. They became longer, with more below-the-fold content, and more behind-the-scenes content in the form of scripts for things that aren't even visible to users. You could have a page that renders in three or four seconds and has a load time of 30 seconds.

Amazon is actually a classic example of this. It's not a criticism.

They do a really great job of making pages that appear fast to users but actually have a lot of stuff on them.

Load time, not a great metric anymore.

When pages were simpler, it was closer, but not anymore.

Start render.

Start render sounds like a great metric.

It's okay-ish.

Again, it's measurable using tools like WebPageTest. If you're not already using WebPageTest, it's a free online tool.

It's supported by Google.

It was actually developed independently, and then Google essentially bought it. They hired the guy who made it, Pat Meenan, and now he works for them, just making WebPageTest better and better.

It's a free tool.

It's a really great entry point to synthetic monitoring if you're not doing it already.

SpeedCurve is built on WebPageTest.

We love it a lot.

It's pretty widely touted in the industry.

Start render sounds really great, because things starting to render in the browser, that's great, right? Doesn't that sound like something that should correlate to what users see? It doesn't, though.

Just because something has started to render doesn't mean that people can actually see anything, and it doesn't mean it's the most important thing that people care about.

Start render is great to let you know that, okay, something's showing up on the page.

That's awesome, you need to know that.

But it values every pixel equally, so it doesn't differentiate between a pixel of something your user cares about and a pixel that's just part of an ad, something your advertisers care about but maybe not your users.

Speed index is a metric that was actually developed within WebPageTest; WebPageTest is the only place to get it.

I like speed index.

A lot of people like speed index.

It's the first metric that really made an attempt to measure what people actually see on the page: when visual content renders within the viewport.

It's a good metric.

It's not all the way there yet because again it doesn't differentiate between pixels.

It's sort of the same problem.

But definitely a step up, a good evolutionary step. Then visually complete.

Visually complete, which a lot of people have used as a stand-in for user experience, is when visual progress reaches 100%.

So basically, the visual assets on your page have rendered, they're there, and you think, oh okay, that's great, because if my visual content has rendered, that means people can use my site.

So it's interactive.

The problem with visually complete, and I know this from having looked at a lot of filmstrips, is that it's actually too slow.

Whereas some of the other metrics show you stuff prematurely and give you a false sense of how quickly users can engage with your page, visually complete is at the opposite end of the spectrum.

People can interact with the page before all the visual assets have rendered. There are some emerging metrics that some people here might be familiar with: time to first meaningful paint, or time to interactive. It's interesting, and I love the fact that people are out there exploring and building new metrics like this.

We're still exploring at SpeedCurve.

We're doing a lot of research to actually see if these are meaningful metrics, if these are things that we should be adding to what we measure.

For me, it's a little bit too early to say that they are going to be helpful or not.

Definitely some more research needs to happen around these ones.

In case it's not clear yet from what I've been saying: we really need better metrics for measuring user experience.

These, to me, are the... a trifecta is three, what's four? A fourfecta? These are four things that a good user experience metric should do for you. It should correlate to what users see in the browser. It should allow you to customise what you measure on specific pages.

You know your site better than anybody else, and the way visual assets appear on your page isn't the same as on another site. Every page is different, every site is different. You know your site better than anybody, so you should be able to customise what your important metrics are.

It should recognise that not all pixels and page elements are equal; some are definitely more equal than others. And a great metric should be as easy to use as possible: accessible, easy to manage and maintain, and easy to communicate to other people on your team. These all sound really logical and straightforward when you hear them, but it's actually really, really hard to do. There's a reason why we're still striving for this perfect metric.

Call it a unicorn metric.

This is what we're looking for.

I was asked a few years ago: what's the one metric I should care about? Like, what's the unicorn metric? The fact is, there is no unicorn metric.

As an aside, if you Google "most beautiful unicorn in the world", this is the picture that comes up.

Behold, the most beautiful unicorn in the world. I wish there were a unicorn metric for user experience, although maybe not, 'cause then I wouldn't have a job anymore. This is kind of what we've got.

Unicorns don't exist.

Donkeys exist and plungers exist though, so we can make our own unicorn, and it's adorable. That was my way of introducing custom metrics. Another show of hands.

I hope everybody is like limbering up.

Who here has used or is familiar with custom metrics based on the, okay great, based on the user timing spec? I included a link to the user timing spec.

It's from the W3C group, so it's really well documented.

There's a blog post if you wanna read more about user timing, how it actually works in the real world, and what some good use cases are for it.

What custom metrics are, basically, is a set of marks and measures that you make in the code on your pages, around specific elements that you care about measuring. For example, if the question you're asking is, how long does it take to display the main product image on my product page, you create custom marks and measures around that element. Then, because the user timing spec is supported by real user monitoring tools and synthetic tools, you'll be able to actually measure that and get results in the tools you already use.
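Here's a minimal sketch of those marks and measures for the product image example; the mark and measure names are hypothetical, you'd pick names that make sense for your own pages.

```js
// In the HTML, mark the moment the main product image finishes loading:
//   <img src="product.jpg" onload="performance.mark('product_image_loaded')">

// Then measure from navigation start to that mark. 'navigationStart' is
// a built-in attribute name the User Timing spec lets you reference.
window.addEventListener('load', () => {
  performance.measure('product_image_render', 'navigationStart', 'product_image_loaded');

  // RUM and synthetic tools that support user timing pick these entries
  // up automatically; you can also read them yourself:
  for (const entry of performance.getEntriesByType('measure')) {
    console.log(entry.name, Math.round(entry.duration), 'ms');
  }
});
```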

It's a really great thing.

The great thing about custom timers is that they're highly customisable. You can measure exactly what matters to you. You know your site, as I said, better than anyone else, and you know your pages better than anyone else. Maybe you care about when that first ad renders, because ad revenue matters to you and you run a media site. Or when your product images render, because you know that how quickly a product image renders really shapes whether people perceive the product page as fast, helps them stay on the page, and makes them more likely to buy.

You can measure that.

So, ticking off the boxes. Correlates to what users see in the browser? Yes. Recognises that not all pixels are created equal? Yes. Allows you to customise what you measure on your pages? Yes. Easy to use and accessible? Ahh, no.

In an ideal world, I could just teach you this and be done. I go around and tell people, I preach the mantra of: use custom timers.

I go around and tell people, I preach the mantra of use custom timers.

Custom timers are the best.

Please use custom timers.

We wish everybody would use custom timers.

Actually, only 15% of sites use custom timers, according to the HTTP Archive.

85% obviously don't.

While I would love to change this behaviour, I'm also a realist and realise that we need to create other stand-ins for custom timers.

Like I said, it's really hard.

We're just taking the donkey, flipping it in another direction, and trying to do this for people.

Again, the best metric is a metric you create yourself. The second best metric is the metric that tries to do that for you.

That's where hero times come in, also known as hero rendering times.

Hero rendering times are a set of metrics based on elements that we call hero elements on your page.

They are your H1, your largest image, and your largest background image. There's another piece called hero element timing that I'm not gonna dwell on too much today, but the three big ones are H1, largest image, and largest background image. As I said, it's kind of a stand-in for custom metrics, because what most people tend to measure when they do custom timers, they start with feature images, hero images, product images, and those tend to be the largest image, or sometimes the largest background image, depending on the site. Again, every site is different. H1 is also a big one for media sites.

When that headline pops up on the page, it usually has an H1 tag.

Like I said, it's not a perfect metric, but we're kind of playing with "usually" here.

Out of the box, we can create metrics that correlate with three things that we know most sites are doing at least one of.

You can go to this blog post to read more about how they work.
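If you want to approximate something like this in the browser yourself, one option, and this is just a sketch rather than how any monitoring product computes hero times internally, is the Element Timing API, which is Chromium-only at the time of writing. The elementtiming name below is one you make up.

```js
// In the HTML, annotate the element you consider a hero element:
//   <img src="hero.jpg" elementtiming="hero-image">

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.identifier === 'hero-image') {
      // renderTime is when the element was painted, relative to
      // navigation start.
      console.log('Hero image rendered at', Math.round(entry.renderTime), 'ms');
    }
  }
});
// buffered: true replays entries recorded before we subscribed.
observer.observe({ type: 'element', buffered: true });
```

Browser support for element-level render timing varies, which is part of why having a monitoring tool compute these for you out of the box is handy.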

But just to kind of walk you through this, here's some more filmstrips.

This is the H1 render for, which site is this? I believe this is Etsy.

As you can see, we're looking at a lot of different metrics here, and the one that correlates with content meaningfully rendering is H1 render. The nice thing about hero rendering times is that you can figure out which ones actually correlate to user experience on your pages and focus on those. In this case, it's H1 render.

For this site, which is Home Depot, largest image render, at least for this group of pages, is the one that correlates to what people actually see on the page.

Then for this site which is Amazon? Yes.

It's the largest background image render.

Again, knowing that you have this family of metrics to choose from, you can cherry-pick the one that actually works for you. It's not custom timers, but it's so far the next best thing.

As I said, you can read more about it here on this blog post.

Again, I wanna ask: why should you care? If you were at Chris Messina's keynote yesterday, which was awesome, he talked for a bit about children, about how kids are the future. Kids use the web in ways we can only begin to understand, and they're growing up with a comfort level with the web that is very, very different from our experience. In a way, we're designing the web for them, and we don't even really know what they want. Flipping it around, this is not a child, this is a senior. I wanna invite people to remember, and I hope I'm not the first person to tell you this: someday you're gonna get old.

I know.

As we get older, the way we use the internet and the way we perceive the internet is going to change, and we have no control over this.

It's like the neuroscientific research that I was talking about earlier.

You can tell yourself, no, no, that won't be me, but the fact is that we are hardwired in a lot of ways; our brains just change as we get older.

65% of seniors use the internet.

People aged 65 and older are 43% slower at using the web than people aged 21 to 55. I'd just like to remind you, maybe if you're under 30, 65 seems really far off to you.

For me, it isn't. I can kinda see 65.

It's like it's sort of on the horizon.

I can definitely see it.

These aren't people who are unbelievably decrepit, complete dinosaurs when it comes to using the web. These are people who've been using the web for 10, 20 years.

They're still slower.

The reason they're slower is that between the ages of 25 and 60, our ability to use the web declines by almost 1% a year. How many people in this room are under the age of 25? All right.

Your turn is coming.

For the rest of us, and I definitely feel this, this decline is already happening.

It's happened.

It's ongoing.

You can do Sudoku all you want; it maybe helps a little bit. It's nice, it's just a fun thing to do.

You should just do Sudoku.

This is inevitable.

This is going to happen.

I just wanna put out there that we make up for this decline in cognitive function in other ways. Just the breadth of experience, and the life hacks and memory hacks we develop over time, so that we can use the gifts we've accumulated to make up for the deficits we're also accumulating. But this is a reality.

We're not just designing the web for kids to grow up and use.

We're designing the web for ourselves to use. Maybe you don't like kids and it's hard for you to empathise with their way of using the web, that's fine. But you can't change the fact that if you're lucky, you're gonna get old.

You're designing the web for yourself, and creating a culture of caring about how the web works for everyone, all the way up to the very, very old. Yeah, so, argh.

Some takeaways here.

Performance, web performance, is user experience. I tend to think of it as one of the pillars of the web. I hope that everybody's goal is to be one part of a group effort, a massive effort, to build the best possible web.

It's about making the web fast.

It's about making the web available.

It's about making the web secure.

It's about making the web accessible.

And performance is a big part of that.

It is user experience.

There is no unicorn metric, we've been over that. You can't understand what you don't measure. That's really obvious.

You wouldn't build a house without knowing what size cuts to make; at least I wouldn't, and I hope you don't.

At the same time, you can't measure what you don't understand. You can't just lock yourself in a room with some monitoring tools, look at your data without looking at your site, and think that you understand how your site performs. If you actually wanna measure your site in a meaningful way, you have to really know your site. The people I know who are doing the best, most interesting, most impactful work helping their companies build better sites don't just go away and work with their monitoring tools.

They try to understand the site itself, the pages themselves, and the users, so they know what to measure and what to target. They're not just picking some random metric, or thinking, well, I know I can move the needle easily on load time, so I'm just gonna focus on that, even though it doesn't actually change anything for the business or for end users.

So you really do need to have that understanding both ways. Kind of leading to the next point.

Performance is a group effort.

It's a team effort.

If you're a performance engineer who works just with other performance engineers, without talking to people in marketing and sales and product design and other people throughout the organisation who also work on the site, it's like the old parable of the blind men all touching an elephant. Everybody's got hold of a different part of it; one person thinks they're touching a snake, another thinks they're touching something else. At the end of the day, they're all touching the same thing, but they don't compare their stories and realise that they're all actually working together on the same thing.

Going back to the case studies that I talked about at the beginning of this talk, even really small changes can make a big difference. I know a developer, she's a freelancer and she knows a lot about performance.

When she starts a new contract, the first thing she does is go and pluck the low-hanging fruit.

Usually that's image optimisation, looking at third parties, and render-blocking JavaScript. Just really easy fixes: if you know what to look for, you can go in, clean those things up on your pages, and make the pages noticeably faster.
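As one concrete example of that kind of fix, and this is my illustration rather than anything she specifically did, here's how you might stop a third-party script from blocking render by loading it after the page has painted; the URL is hypothetical.

```js
// Inject a third-party script asynchronously instead of letting a
// blocking <script> tag hold up parsing and rendering.
function loadScriptAsync(src) {
  const s = document.createElement('script');
  s.src = src;
  s.async = true; // execute whenever it arrives, without blocking
  document.head.appendChild(s);
}

// Wait until the page has loaded before pulling in the widget.
window.addEventListener('load', () => {
  loadScriptAsync('https://third-party.example/widget.js'); // hypothetical URL
});
```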

In her case, she told me: I do these things right away, as soon as they give me permission to start touching things, I fix them. Instant rockstar status.

Then I could do whatever I want.

It's instant credibility.

Those small changes can make a really big difference. I shared a case study from Fanatics, which in the US is a sports clothing retailer. All they did was get rid of some images that were blocking the page from rendering and clean up a bit of code; they really didn't do very much.

They cut their mobile load times roughly in half, and they increased conversions by quite a significant margin.

That was amazing and that was just from doing two things. You can do more than two things.

Thank you very much for having me.

If you have any questions, feel free to reach me on my Twitter.

You can email me.

My slides are online.

I uploaded them before this talk so you can go find them there if you're interested. I think we've got five minutes for questions. (lively music)