(audience applauding) - Thank you so much.
Yeah, I was at this event last year and, genuinely, it's one of the best events, probably the best event, in terms of sheer quality of content, I have ever been to.
So it's great to be back here for a second year. Now this talk I'm giving this morning is slightly different to my usual tack.
Normally, what I would do is give a very, sort of, objective, very practical kind of talk.
Usually fairly technical.
But today I'm gonna be looking at the business of web performance.
It's gonna be looking at, kind of, a fairly candid, frank and very honest look at the business of Webperf. How do clients translate milliseconds of improvement into millions of Euro in increased revenue? And how does that work for me as a self-employed consultant? How do we go about pricing projects, defining projects, achieving goals, maintaining them and keeping things on track? So I'm Harry, I am a self-employed consultant performance engineer, which is something I basically started calling myself a couple of years ago.
It's amazing how that works.
Pick a job title and people start hiring you for that thing. But I am self-employed so, for me, it's quite an interesting thing working out not just how to deliver the work, but how to actually spin that out into a full engagement, how to get the client onboard, how to convince them of the value of performance and kind of hold their hand through the entire process. I tend to work for large clients.
I'm not a diva, I'll work with any company who wants to work together and do good stuff but generally, it's large companies who will feel the benefit of web performance more than smaller ones.
Obviously, web performance benefits absolutely everyone from business to consumer, from client to customer, but mainly it's companies like this who will hire me to help them have more robust and reliable websites, ultimately delivering better experiences for their users, customers, or in the case of, perhaps, the NHS, even patients, right? That's not always been the case.
My background is actually in CSS architecture and sort of design systems.
But a couple of years ago, very deliberately, I decided to switch towards webperf.
Lots of different reasons why I did that and I won't go into any of those during this talk. But a few years ago, I very deliberately decided that web performance was going to be my thing, what I want to focus on.
And it's taught me a lot about a different side of the industry. And I said this is gonna be quite a frank and very honest talk; the biggest thing it's taught me is there is just way more value in web performance, right? My clients understand that web performance is way more closely linked to their revenues, to their bottom line. That's not to say that design systems and CSS architecture aren't valuable endeavors, they absolutely are, but the kind of client who would commission a design system project is gonna be quite a visionary client.
Somebody who's looking for two to five years of cost savings perhaps.
Whereas webperf is much more immediate returns and because of this, I end up getting this, kind of, not bizarre but very different situation where I end up in front of C-level people all of a sudden. I have to have meetings with CFOs, CTOs, sometimes even CEOs and it's a completely different world for me. So I wanna talk about how that looks and how it came about, and how as a self-employed person, I kind of navigate these waters.
The key thing for me, the most important thing, where it all starts is asking the right questions. I used to really struggle with this.
I was very, very naive in the beginning of my career. I would assume that, as the consultant, I'm meant to have all the answers, and it would be very debilitating, right? It would be almost paralyzing.
I would be in a client meeting, scared to ask them questions because I'm thinking, "Well, I can't... If I ask them a question, they will know that I don't know everything and that's gonna sort of, I don't know, out me. It's gonna make me look like an imposter." I've actually got a very prized GitHub repository full of all the best questions I've ever asked because I find that, now as a consultant, what I can do is ask very simple questions, or on the face of it very simple questions, that are very introspective, that make clients really, really think about why they hired me, why they exist and what their business wants to achieve.
I absolutely do not have all of the answers and it was a really, like I say, really paralyzing mindset that I used to have where I believed that I was meant to have all the answers.
I've got a lot of the technical answers but there are some things that I just could not possibly know without asking, so I had to get very comfortable with almost appearing, not vulnerable, that's kind of the wrong word to use, but, being comfortable with the fact that I need to lean on the client as much as they need to lean on me. I'm not gonna show you my GitHub repository because it is very prized, very secret, but some of the key questions that pertain to this talk; "How do you know the website is slow?" A very, very simple question that actually can have really, really revealing answers. If they just tell me, "Well, it just feels slow," it's a completely valid answer but it's a very emotional response to the problem. What this tells me is perhaps they haven't gauged the actual impact of a slow website.
They haven't gauged the value of making it faster. Conversely, if they tell me, "Well, our customers keep complaining. Support (mumbles) all the time tell us that the website is slow," they probably do have a way more business-facing understanding of the problem. The next one, "What key areas of the site should I look at?" Again, on the face of it, a very simple question but this time it's actually a very simple answer. Just, what should I look at? Tell me what to do. A mistake I used to make all the time is I would sit down and look at a site with potentially tens of thousands of pages, dozens of different templates, and try and just look at every one of them to work out what the problem is.
Now I just get the client to tell me, "Look, what are some key pages? What are some key user journeys? Which key geographic locales do we need to look at?" I get the client to deliver that to me on a plate so I can start working.
"What will being faster mean for the business?" Because, kind of, we're gonna look at it in the next section but none of my clients actually, truly hire me to make their site faster.
Nobody wants a faster website.
What do you really want? Do you want more engagement? Do you want to increase revenues? And finally, how are you gonna measure that? Do you have quarterly revenue goals for E-com? Do you have to increase conversions by X percent to hit some targets? How do we know how to measure this? Which basically tells me when we've won.
When do we stop, right? When do we stop the project? 'Cause we need to know that.
We need an end in sight.
So yeah, controversial thing to tell a room full of performance engineers, I guess, but nobody really wants a faster website.
If they genuinely wanted the fastest website, that's dead easy.
My job is to actually fit in to a much broader picture. Performance is just one small slice of a successful business.
They need a product that people want to buy, right? That's a key thing, right? You can't make a terrible product, the website fast, and suddenly make more money.
The site needs to be well designed, it needs to be effective and it needs to be fast. What my clients really need is the most effective website and there can be many motives behind this effectiveness. Generally speaking, web performance is a for-profit industry so we're gonna be looking at things like revenue, conversions.
But there can be many different motives for this. See a few people taking pictures of the slides, I'll give you a second before I switch over. Do they wanna increase revenue? I've got no problem with my client telling me, simply, "We want to make more money." That's fine, I'll help you do that.
It could be improved engagement.
I work with, or I have worked with, video streaming clients who, they're not necessarily E-com per se but what they want is engagement.
They want people to watch more minutes of content, to stay engaged with the site.
I work a lot in publishing, so time on-site and bounce rate are important metrics to improve. Good case study of this from this year, actually, from about maybe six months ago.
A client emailed me, they're a newspaper in North America. Basically, this is heavily paraphrased, the newspaper basically said this, "We want to make the site faster but we want to run the same number of ads." They were very frank about the fact that they make a lot of money off of advertising, but they were worried they're hitting a kind of point of diminishing returns.
What they were basically trying to get me to find out is what this inflection point is.
Obviously, the more ads you put on a website, the more money the website will make, but at some point you're gonna push it too far, where the amount of ads will affect engagement, and engagement will begin to, kind of, decrease. They hired me to find this sweet spot on the bell curve. What is the best, most optimal amount of ads, delivered at the most optimal speed, to make the optimum amount of money? They were basically asking me, "How much can we get away with? How far can we push this? What can we get away with?" And, you know, ads are a necessity on the web, right? We need them because we've failed to monetize the web in other ways, advertising is a necessary evil, so this is a completely valid project for me to work on and finding that inflection point was fascinating, and indeed it was a very successful project. They told me what to do, right? Having asked the right questions, having gauged what they actually wanted, I was able to not focus on ad providers at all. Didn't remove any of their ad revenue streams. They told me what to focus on.
We made the site about 6.2 to 6.4 seconds faster on average on mobile, right? Without touching a single source of revenue. Obviously that's gonna translate into a huge uptick in profits for them.
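That inflection point is easy to picture with a toy model. To be clear, this is purely illustrative: the baseline pageviews, the decay rate, and the RPM below are all invented numbers, and a real engagement curve would have to come from your own analytics.

```python
# Toy model of the ads "sweet spot": each extra ad earns more per pageview,
# but also slows the page down, which erodes engagement (pageviews served).
# Every coefficient here is invented purely for illustration.

def pageviews(n_ads, base=1_000_000, decay=0.07):
    """Engagement falls off as ad weight pushes load times up."""
    return base * (1 - decay) ** n_ads

def revenue(n_ads, rpm=2.0):
    """Ads per page x rate per 1,000 pageviews x pageviews served."""
    return n_ads * rpm * pageviews(n_ads) / 1000

# The curve rises, peaks, then falls: that peak is the inflection point
# the client was paying to find.
best = max(range(1, 31), key=revenue)
print(f"sweet spot: {best} ads, revenue ≈ {revenue(best):,.0f}")
```

With these made-up numbers the revenue curve peaks partway along: adding ads past that point actively loses money, which is exactly the "how much can we get away with" question in code form.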
Next thing, you can't fix what you can't measure and it's kind of redundant, again, me telling a room full of performance engineers that you need to measure things.
The majority of my job and probably most people in here's job is actually measuring stuff, gathering data, comparing and contrasting, assessing and analyzing numbers. Gather data, spend a long time doing it.
Gather as much data as you possibly can and when you think you've got enough, go and get a bit more.
The only regrets I've ever had in my performance work is not capturing quite enough data.
I've never regretted having too much.
I've always missed out on, "Oh, if we'd have just captured that before we started," or, "If only we'd had access to this information."
Gathering data can take a number of different guises but the usual thing that I will defer to, the simplest, lowest common denominator is imperfect, but it's Google Analytics.
Nearly every client I've ever worked with has a Google Analytics account.
Google Analytics is not perfect though, huge word of warning.
A lot of the data it gives you needs to be taken with a pinch of salt.
In fact, (murmurs) Simon's got a great article about how Analytics is kind of crude and not very accurate, but it's a good start.
But what I wanna do is warn you about some of the pitfalls with using Google Analytics for performance data. It gives you average values, and by average it means the mathematical mean. And the mean is useless in web performance. We want percentiles, 50th, 75th, 95th.
It only samples one percent of your traffic, that's not great.
If you've got millions and millions of visitors, one percent may be adequate, but for a lot of medium sized websites, one percent of traffic...
Sorry, one percent of your traffic's performance data is probably not enough to work with.
It's very coarse.
It aggregates every device from every country, every browser in the world on every page on your site and then averages those out.
You can drill down further but, by default, its view of the data is very coarse.
And finally, it focuses on load times which are considered a legacy metric.
It's not terrible but we wanna be able to (mumbles) things that are way more pertinent to the user.
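That first pitfall, means versus percentiles, is easy to demonstrate. Here's a small sketch with invented load times showing why a handful of slow visits makes the mean lie while the percentiles stay honest (the nearest-rank percentile function is just one simple way to compute them):

```python
import statistics

# Hypothetical per-visit load times in seconds: most visits are quick,
# but a long tail of slow ones drags the mean upward.
load_times = [1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0, 2.2, 9.0, 30.0]

mean = statistics.mean(load_times)  # skewed badly by the outliers

def percentile(data, p):
    """Nearest-rank percentile: the value p% of visits are at or below."""
    ranked = sorted(data)
    k = max(0, int(round(p / 100 * len(ranked))) - 1)
    return ranked[k]

p50 = percentile(load_times, 50)
p75 = percentile(load_times, 75)
p95 = percentile(load_times, 95)

print(f"mean={mean:.1f}s  p50={p50}s  p75={p75}s  p95={p95}s")
```

The mean here is 5.2 seconds, a number no actual visitor experienced: the median visit was 1.6 seconds, while the 95th percentile shows the 30-second experience the mean quietly buries.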
But that said, any data is better than none, and my favorite ever case study around Google Analytics happened to me last year. I was hired by a company based in Tallinn, Estonia. Tallinn is beautiful, it's a very technologically capable city.
Well, the entire country of Estonia, very technologically capable.
They've done everything online for the past 15 years by default, it's a very, very fascinating country.
And a client out there...
I can't tell you who the client is, right? I can't tell you what they do but what I can tell you is they do deal in Bitcoin, which sounds dodgy as hell, I realize, but they're basically, they deal in Bitcoin.
They hired me because their website was slow, so I went out to Tallinn...
But one of the first things I did, sorry, before I got there, was I looked at their Google Analytics data.
"Can I look at your analytics?" And I found a really interesting thing going on in Venezuela.
In fact, let's go to Venezuela.
So what I did is I sat down with their CEO and I simply asked him, "Do you know about your Venezuela problem?" And I, on purpose, asked this vague question because what I wanted was his, kind of, like, "What?" I was like, "Exactly! Look at this!" So when we started working together, when I asked the client, "Which geographic locale shall I focus on?" they told me, "Southeast Asia." Now, if you look at Southeast Asia here, it's very dark blue.
Load times are bad in Southeast Asia.
That's why they wanted me to focus on that. But interestingly, load times are terrible everywhere. Before I started working with this client, the global average load time was 16 point something seconds. So they told me, "Focus on Southeast Asia. It's an important market to us and the website is slow there." But I kind of went off-piste. I went all the way west because I saw this view, this is a really fun, interesting view in Analytics. I went west, very far west, because Venezuela caught my interest.
Why is Venezuela so dark? And why not Colombia? Why Venezuela but not Costa Rica? Why Venezuela but not Mexico? Why is Venezuela so dark? So I did some kind of investigative journalism kinda thing and what I found out was absolutely fascinating. The first thing I found out, I already knew, but Venezuela's economy is currently tanked. It's unfortunate but that's the case.
In 2018, hyperinflation hit a million percent, right? So they're not doing too well.
As a result, they wanted to get their hands on any not Venezuelan currency.
By default, that would be US dollars but Bitcoin counts as a not Venezuelan currency. Second thing, in a bid to try and recover the economy, the Venezuelan government minted their own cryptocurrency called Petro.
Terrible idea, it's not gonna work, but still you've got an entire country that knows what crypto is.
You go to the average person in the street in the UK, they don't know what Bitcoin is.
Never heard of it.
But all of a sudden, the entire country of Venezuela knows what cryptocurrency is.
The third and most fascinating piece of the puzzle; electricity is free in Venezuela.
They don't pay for electric.
They leave their machinery on overnight, they wake up, they've made eight pence.
They're printing money.
They know what crypto is and they don't pay for electric, this is a country that can literally generate free money because their electricity is government-subsidized to a point that it's effectively free.
Coupled with hyperinflation, it's actually free. Venezuelans don't pay for electricity.
No wonder their economy's tanked.
I used Tim's World Wide Web to find out what the internet looks and feels like in Venezuela, and the situation isn't great. Compared to Tallinn, a very digitally capable city where all the engineers were based, Venezuela was a very difficult place to be visiting the internet from.
So I went around every engineer's machine and I made their machine run as if it was from Venezuela. I made everyone's machine run as if it was a Venezuelan connection.
I effectively annoyed the engineers into doing a better job. Now this is that Analytics data on a sliding three month scale surrounding our engagement, and the first thing I wanna tell you is that that massive spike in load times was the first day I started working for them, but before you judge me too much, it wasn't my fault.
Do you want me to answer that for you, Patrick? This was the first day I had started working with them, and global load times went up to 100 seconds on that first day, but it wasn't my fault.
It's like, "Hey look, this is why you hired me, because currently, right now, global load times are hitting 100 seconds because you've got a synchronous third-party provider suffering an outage." But that aside, this spike in the graph becomes really useful for me because when I look at their analytics, I can basically see the exact day I started working with them.
On the left, load times are spiking but average out to about 16 to 18 seconds.
Few days after I started working with them, global load time is down to six seconds.
This was a huge win for the business.
This is globally.
Looking at Venezuela alone, that spike is actually now 400 seconds.
To the left of the spike, load times are around 150 to 200 seconds, what you'll find is that to the right hand side of the spike, after our engagement, load times are so quick, they don't even register on this graph anymore because of the scale of the graph, you can't even see, they don't even register.
That means there's traffic coming, right? It's not like there was no traffic, it's just that load times are so fast, you can't even spot them when compared to that 400 second spike.
What did this mean for the business? This was outrageous! They didn't even realize Venezuela was a problem for them. They didn't realize Venezuela was an area of interest. This company made so much money from Venezuela, they had to hire a full-time member of staff just to take care of it all.
They hired a Latin America Client Success Manager just to keep an eye on this Venezuela thing. And I haven't told you this story just to show off, a little bit, but mainly, the point of the story is if you give the right people the right access to the right data, the business impact for that can be tremendous. It could be absolutely enormous.
So I fight very hard to ensure that all the engineers that I'm working with have access to the right tools. Because it wasn't, you know, it didn't take a genius to work this out, it just took the right person with the right access to the right data to completely transform the trajectory for this business.
This client adores me, by the way.
They really, really love me because of this. It's really interesting to work in these kind of environments where you can have a measurable, meaningful impact in that short amount of time. It's good if you're like me and you like instant gratification.
Next bit of advice is follow the numbers.
I get these situations all the time where numbers kind of go against what I would believe to be correct. And kind of, naively, I used to just think, "Well, I'm correct 'cause I know how this works, so the numbers must be wrong, something must be wrong here." What I've found is that, in the majority of cases, the numbers are right.
Numbers very rarely lie.
So what ends up happening is if I ever get to a point of contention where something that I think should've worked, or something that should've had a certain output, looks incorrect, if the numbers suggest it's wrong, my first port of call is to question myself. I always follow the numbers.
I've had it even just a couple of weeks ago, I was building a graph for a client and I was like, "This can't be right, surely the graph is wrong," and after a bit more digging I learned that, yeah, no, I was wrong. So always follow the numbers.
A really good, but very brief case study of this. Last year I was working with Squarespace.
A fairly, kind of, lengthy engagement.
We had a lot of things to fix on the Squarespace website, but one of them was just their head tags.
Anybody who's worked with me or was at my workshop a couple of days ago knows that I am obsessed with head tags. Head tags are fascinating and they are vital for performance.
And Squarespace's head tags were just a mess, all right? Legacy site, been around for years and years and years, dozens of people contributing, putting things in anywhere, putting things at the top, at the bottom, in the middle, no one really cared.
One of the tasks I gave them was, "(murmurs) just tidy up and reorder your head tags," right? And there is, this isn't it, but there is a theoretical perfect order for your head tags, a theoretical perfect order for performance, so I set them a task to do that.
We deployed it and the site got slower.
And I looked like a bit of an idiot 'cause I was like, "Ah, but this is meant to, you know, this is meant to work." It was like, "Yeah, but it hasn't." "Yeah, but it's meant to work. I mean, theoretically this is--" "Yeah, but it hasn't worked. We need to reverse it." So in any case, always trust the numbers over yourself or, at least, doubt yourself more than the numbers. Talking about things getting slower can feel counterintuitive but another thing that I've found really valuable when working on performance projects is that it's really important to move slowly.
Sounds really weird to tell a bunch of performance engineers to do fast things slowly, but it's very, very important.
The first reason is there probably is no rush. If your site is slow, it's probably been slow for weeks or months by now.
There's no need to get it fixed tomorrow, right? There's probably not as much rush as you think there is. Also, the kind of size of clients I work with and my availability as an independent developer, it might take months for me to begin working with them anyway.
From the moment they email me to the moment I'm on site could take up to three months anyway, so use that time.
Use that time to measure things.
Use that time as kind of sort of asynchronous time to just passively gather more data.
I said before how I've never once thought I had enough data. I've always regretted not having enough.
I always wished I had more.
What I can do now is I can spend that three months, or however long it is, I can spend as much time as I want, saying, "Well, I don't wanna arrive and just look at load times from Google Analytics. What I would like to know is, what's your first input delay? I would like to know how quickly your (murmurs) images render. I would like to know what your start render time is. Let's measure that just passively in the run-up to our engagement, therefore I'm way better poised to make the effective changes that are needed." Often what happens, or has happened in the past, is I've been so excited to get started that I'll tell the client, "Hey, your site's faster!" And they'll say, "Well, how and in what areas?" "I've got a WebPageTest run from six months ago, we could compare it against that." Get a lot of 'Before' data, it's really, really important. The second thing, and this usually comes down to my clients, my clients usually have two temptations.
One is that they get really excited and they're like, "Oh my goodness, we've got CI, we can release several times. Release everything all the time, release quickly. We fix it, release it, fix it, release it." Well, the problem with that is you can't actually see, in any reasonable dashboard, what the actual effect of each change was.
Or the other thing they will do is they will just save up all of the changes we make, and then just release them at the end of the project. You know, they've got 72 performance-related commits that go live in one go, all of a sudden the site gets faster, but we've got no idea why, right? Which part of those changes made it faster? Which part of the project was the most effective? So I try and limit my clients to one thematic release per day, or whatever their release cycle is. This means that every single day, or week, or whatever it is, we can see in graphs, in whatever sort of (mumbles) tooling, basically SpeedCurve, that you're using.
We can see the effect of every change.
This gives me two really valuable insights. One, it tells me where to look next time or rather, it tells the client where to look next time. After I've stopped working with them, they need to know that, "The last time we ran slowly, it turned out that optimizing third parties was the biggest improvement. That's what we'll look at next time." Or conversely, and sometimes more importantly, it tells them what to avoid next time. What they might do is say, "Well, actually, you know, we lazy-loaded all the images and what ended up happening was engagement got worse because the images took too long to arrive." So that means that the next time they have performance issues, they know what not to focus on.
And this last point brings me onto my next section. One of my favorite soundbites.
Anybody who's worked with me or been in any of my workshops will have heard me saying this all the time: Maximize the work not done.
Work out what not to do.
With any kind of strategy at all, it is just as important to know what not to do.
It's basically, effectively, it's an exercise in efficiency, an exercise in working lazily, being kind of smart with your time, conservative with your time.
I'm not getting paid to do lots of things.
I'm not getting paid to just fill time.
I'm not getting paid to just appear busy.
I'm being paid to be effective.
I shouldn't be around for too long.
What I need to therefore do is drill down to the problem very quickly and maximize the work not done. Quick case study now, a very quick case study. I can't name this client, unfortunately, but looking at this Google Analytics data, we can see that 73% of customers had a sub-three second load time.
22% of customers had a sub-one second load time. I get terrified when I see dashboards like this 'cause this site's already pretty fast, therefore, what am I meant to do? So looking at this, you might think, "Well, that big 51.45%, we need to nudge that big blue bar into the one second bracket. We need to get all those customers into the top bracket so we're getting more and more users in this one second boundary. This one second bucket, sorry." This is difficult work; moving a 1.5 second load to a one second load, or a three second load to a one second load, is actually difficult work.
What you're doing here is really picking, sort of, the high hanging fruit.
Corroborating this data against something like SpeedCurve, though, we found that most conversions happen at two seconds.
So realistically, what we're looking to do here is solve the long tail, and Tim's written about this extensively. The long tail of web performance is where you stand to make the biggest gains.
So I don't wanna nudge these three second customers into a one second bucket.
I need to try and get these 10 second customers towards five and five toward three.
The good news for me is this is actually easier work, right? Finding out how to get the gains at that long tail could be really, really easy work.
Trying to forensically nudge a 1.5 second load into the 0.9 second bucket is gonna be really difficult by comparison.
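Some back-of-the-envelope arithmetic makes the case for the tail. The bucket counts below are invented, but they show how a small number of very slow visits can outweigh a much larger number of mildly slow ones:

```python
# Hypothetical visit counts per load-time bucket (all numbers invented).
visits_at = {1.0: 5000, 1.5: 12000, 3.0: 8000, 5.0: 3000, 10.0: 1500}

# Option A: the "high-hanging fruit" — forensically nudge every
# 1.5-second visit down to 0.9 seconds. Hard work for a 0.6s delta.
saved_a = visits_at[1.5] * (1.5 - 0.9)

# Option B: the long tail — bring 10-second visits to 5 seconds and
# 5-second visits to 3 seconds. Often much easier work.
saved_b = visits_at[10.0] * (10.0 - 5.0) + visits_at[5.0] * (5.0 - 3.0)

print(f"Option A saves {saved_a:,.0f} visitor-seconds")
print(f"Option B saves {saved_b:,.0f} visitor-seconds")
```

Even though the tail buckets hold a fraction of the traffic, the tail option saves more total visitor-seconds here, and the fixes it needs tend to be the cheap ones.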
Another case study, Vitamix.
Vitamix is probably one of my favorite performance clients just because the amount of work we got done in a short amount of time was really, really fun. We got a load of really important things live in just a matter of weeks.
First thing I do when I start a project is I do some competitor benchmarking.
The main reason I do this is actually just to appeal to the client's, kind of, primitive side, their competitive nature of like, "We need to be faster than them." So I do a load of WebPageTest runs, competitor benchmarking, and what we found here is that my client is the first, well, Vitamix is the fastest load time on mobile, right? They're by f... Well, not by far, but they're the fastest mobile load time, but they're in third place for start render.
So in the spirit of maximizing the work not done, I know that I don't need to focus on load times here, right? There's no prize beyond first place.
They're already the fastest load time, I'm not gonna chase anything better than that. We should leave it as it is.
So in the spirit of maximizing work not done, I will ignore load, which is good for me because it's a legacy metric anyway. Don't care about load times.
What we do need to focus on, however, is that start render because every person who ever visited a website was there for content.
Not load times, they were there for content. Whether they're using a screen reader or whether they're using a full feature browser, whatever they're using, they were there for content. So using this, you can proxy some very rough information. I can define the entire project now just by knowing that they're fast to load but slow to render.
The actual technical side of this project becomes very simple.
If you wanna improve load times, you're effectively looking to improve sub-resources, right? Font, images, CSS, etc.
Effectively, and very broadly speaking, if you want to improve load times, you're looking to improve the delivery of sub-resources. If you're looking to improve start render, you're basically looking at the head tags.
Told you, right, obsessed with head tags.
Your head tags are the single biggest render-blocking resource on your page. You can't populate the body with any content 'til your head tags are solved, therefore solve your head tags. If you wanna solve start render, solve the head tags. This defined the entire project for us.
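As a very rough sketch of what such a head-tag audit looks for, here's a minimal scanner using only Python's standard library that flags synchronous scripts and plain stylesheets inside `<head>`. The page snippet and its URLs are entirely made up, and a real audit would check far more (ordering, preloads, inline CSS, third-party origins):

```python
from html.parser import HTMLParser

class HeadAuditor(HTMLParser):
    """Rough audit: list potentially render-blocking tags inside <head>."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        if tag == "head":
            self.in_head = True
            return
        if not self.in_head:
            return
        a = dict(attrs)
        # A synchronous <script> halts the parser; async/defer ones don't.
        if tag == "script" and "async" not in a and "defer" not in a:
            self.blocking.append(("script", a.get("src", "inline")))
        # A plain stylesheet <link> blocks rendering until it has arrived.
        elif tag == "link" and a.get("rel") == "stylesheet":
            self.blocking.append(("stylesheet", a.get("href")))

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

page = """<head>
  <script src="/analytics.js"></script>
  <link rel="stylesheet" href="/app.css">
  <script src="/widget.js" defer></script>
</head>"""

auditor = HeadAuditor()
auditor.feed(page)
print(auditor.blocking)  # the synchronous script and the stylesheet
```

The deferred script is correctly ignored; the synchronous one and the stylesheet are the tags standing between the parser and start render.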
And I flew out to Cleveland, Ohio, to their office. We had a hackathon for a week.
We basically spent a week focusing on the only part of the website a customer never sees.
We spent a week in their head tags and it was absolutely the right thing to do because now, not only are they still first place for load times, they are first place by a long margin for start render. Out of all their competitors, they're the only... theirs is the only website on mobile with a sub-four second start render.
In fact, second place is a 1.5 second delta. Even the person in second place is 1.5 seconds slower than us to start render and, incidentally, load times got better as well. Little tech firm, may have heard of them.
They aren't a client of mine unfortunately but if they're listening, which they probably are, you can hire me if you want, Apple.
For this talk, I did a bit of a demo.
Again in the spirit of maximizing the work not done, I used to have a very inefficient workflow when auditing a client website.
I would just bum around a site, thinking, "Oh, this page feels slow," or, "Maybe this is fast," and I would actually do a lot of tedious manual work; even just finding my footing, even just finding what I should be looking at, took me a long time.
And then I came up with this idea that, why don't I just look at it all through graphs? This is a very ugly, very colorful, very noisy graph. What I'm gonna do is step through everything with you. So I'll do a thing like this, I'll ask the client, "What key pages shall I look at?" And on the Apple website I chose homepage, a category page, which in this case was iPhone, a product page, which was iPhone 11 Pro, the PDP, product details page, was the purchase page for the 11 Pro, and finally, the search results page.
Now, I don't need to click around at the site. I don't need to look at any dev tools.
Don't need to look at a single line of source code. I don't even need a browser to do the next part of my job. First thing I know is, jeez, the PDP needs some attention. Substantially worse than every other page, so pen and paper, actual notepad, I was making a note: "Check out what's going on on the PDP," right? Next thing, time to first byte is nice and consistent. This is good news, right? This means that, potentially, all of these pages are built with the same CMS or the same tech stack.
All the pages are getting delivered in a very consistent time, therefore one page probably doesn't have any more erratic or long-running database queries than another. Potentially, any backend optimizations we make on page A will be felt on pages B, C, D and E as well. Next, I'm seeing a pretty huge gap between first paint and first contentful paint.
This is a telltale sign of fonts, right? First contentful paint is the first image or font data that is rendered to screen.
First paint is just the first changed pixel. Knowing there's a delta between these two tells me, and I'll literally write it down in my notepad, "Go and investigate web fonts." If first paint is at this time but first contentful paint is quite a way behind it, we're probably missing web fonts, right? So I'll note that down: look at the web fonts strategy. It could be images or it could be web fonts; it turns out it was web fonts in this instance. Big gap between first contentful paint and last painted hero.
This tells me that we've got some image problems. First contentful paint is at X time but last painted hero is quite a while after; I need to work out how to optimize the delivery and the size of the hero images used on these pages. Don't need to look at a single line of source code to work this out.
Next, we've got a pretty big gap between first contentful paint and speed index, and what this is gonna tell me is that there is some kind of above-the-fold issue, right? Content above the fold is not getting painted to the screen in a timely fashion.
I need to work out what is going on above the fold to make sure that the user is getting constant and meaningful updates.
Next thing is load times are pretty erratic but when your load times are pretty erratic and your time to first byte is fairly constant, what this is telling me is that the html is being delivered in roughly the same amount of time for each of these pages but the big and varying delta tells me that different pages suffer with different levels of sub-resources.
The PDP, for example, is much, much slower to load than the other ones.
However, the PDP has fewer images.
The actual product page has the most (murmurs), kind of dripping, high res imagery.
And finally, and this is what I find really insightful, a very variable start render has a really interesting telltale sign for me.
The first three pages have a very similar start render. The next two pages are much higher.
As we just discussed, your head tags are key to starting rendering, therefore what I'm gonna guess is that the latter two pages have somehow different head tags. If they're taking longer to begin rendering, there are more blocking resources.
This telltale sign, I can roughly proxy that the PDP and the search results page are probably built on different tech stacks.
If you've got different head tags, they're probably from different CMSs or template languages or whatever it is, and it turns out, I did verify it, PDP and search results page are built on different tech stacks.
They've got different head tags, they have different amounts of render blocking stuff, so what I need to know now is how can we share resources across these pages? How can we (mumbles) across, sort of, product caching? How can we unify the head tags better? That you can work out a backend tech stack from start render times is something I find quite fascinating. Basically, this now defines the bulk of my project. Looking at this graph, I can work out, at least for the first few days, what I need to be focusing on.
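The triage heuristic described here, reading deltas between paint metrics rather than looking at source code, could be sketched roughly like this. This is not the speaker's actual tooling; the metric names, thresholds, and example numbers are all assumptions for illustration.

```python
# Rough sketch of metric-delta triage: given synthetic test results for a
# page (all times in milliseconds), large gaps between paint metrics point
# at likely problem areas. Thresholds here are illustrative guesses.

def triage(page):
    """page: dict of metric name -> milliseconds."""
    notes = []
    if page["first_contentful_paint"] - page["first_paint"] > 500:
        notes.append("Investigate web fonts (FP vs FCP gap)")
    if page["last_painted_hero"] - page["first_contentful_paint"] > 1000:
        notes.append("Optimize hero image delivery (FCP vs LPH gap)")
    if page["speed_index"] - page["first_contentful_paint"] > 1500:
        notes.append("Above-the-fold content painting late (FCP vs SI gap)")
    return notes

# Hypothetical numbers for a slow product details page:
pdp = {"first_paint": 1200, "first_contentful_paint": 2100,
       "last_painted_hero": 3500, "speed_index": 4000}

for note in triage(pdp):
    print("PDP:", note)
```

The point is the same as in the talk: the deltas alone generate a worklist before you've opened a browser or read a line of the site's source.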
All right, next bit.
Who's gonna pay for it? Put your hand up if you've ever struggled to get a client or a manager to spend money on web performance. Yeah.
Me too! How weird is that? They email me, asking, "Hey, can you make the website faster?" I'll be like, "Yeah, sure." And they'll be like, "Why?" "What do you mean why? "You emailed me.
"What do you mean why?" I get it all the time.
I honestly struggle to convince clients to pay me for web performance, even though they got in touch with me. And it's something that I've struggled to reconcile, and the first thing I need to stress is I am not a businessperson.
I am not very good at doing business.
But the best bit of business advice I was given was from a friend of mine, Oliver Reichenstein. He told me years ago, we're talking maybe a decade ago, "Don't do it for the money but never do it for no money." Don't be greedy, right? 'Cause as a consultant, your client will see through that. Don't chase the money.
It will erode trust and trust is probably a consultant's biggest currency, right? It's very important.
However, the second bit is don't do it for no money, and I've got a really bad habit.
Clients email me saying, "Hey, we've got an E-commerce site. "We do this much revenue a year, blah blah blah blah blah." And I'll immediately just pop over their webpage (murmurs), run some tests, and in my first email back to them, "Oh, no wonder it's slow.
"You need to do this, you need to change this, "you need to (mumbles)." And in the first email, I'll give them all the answers. And they're like, "Oh sweet, actually, "we don't need you anymore.
"Cheers buddy." I've honestly done that more than I would care to admit. Don't start emailing me with fake proposals. "Hey Harry." Performance directly affects the bottom line and my clients know this.
That is why they emailed me.
They might not know how much, they might not know how much they stand to make, but my clients all understand there is a vested financial interest in hiring me.
There's a vested financial interest in doing webperf stuff. That means it's not a good time for me to be shy, and most people, but especially British people, are terrible at talking about money.
It's an awkward thing for us.
But you need to be less shy, you need to be less squeamish.
Quick case study here, very quick case study. Client of mine, Trainline, based in the UK, by their own working out, their own data told them if they could make their website just 300 milliseconds faster, customers would spend an extra eight million pounds a year. That's tremendous.
Look at those numbers.
300 milliseconds is worth eight mil a year. This is not a time to get squeamish about money. It's all about cash.
The only reason I've been emailed is 'cause someone's gonna get rich off this.
What I have attempted to do and what I'm doing more of is I'm going towards a more value-based pricing approach, and this is scary...
Any of you do value-based pricing? Couple of people.
It's scary, right, it's a difficult concept to try and get onboard with.
But basically, value-based pricing is defined by Alan Weiss as, "My fee represents my contribution to the project "with a dramatic return on investment for you "and equitable compensation for me." Basically, you pay me what is fair based on how much I'm gonna earn you, right? Don't pay me a day rate.
Fun fact, earlier this year, in half a day of my time, I saved a client over $100,000 a year.
It took me half a day to identify a problem that ultimately saved them over $100,000 a year, and it's annualized, right? So it's not just once; every year they're gonna save 100 grand. Is it fair they pay me just my half-day rate? I don't think so.
So value-based pricing becomes a more fair and more, kind of...
And I built this kind of mutual trust where everybody's invested.
I've gotta deliver.
They needed results that we've quantified.
It puts everyone on this more trusting and, sort of, equitable basis.
So a lot of clients will ask me, "How much are we likely to make from this project?" And what I find really interesting here is that I used to try and answer this question.
How naive is that? Again, I used to be...
I used to do a lot of very naive things.
Probably still do.
I used to try and answer this question.
"How much are we likely to make from this?" I was like, "How am I meant to know that? "I don't know how much you make currently.
"I don't know what your targets are, "what your projections." I mean imagine, right, if a client emailed me, "Yeah, we wanna make the site faster but how much money do "you think we'll make?" And I'll say, "Well, I'm not sure but Trainline made "an extra eight million pounds a year." And the client's like, "Wicked! "We only make three mil at the moment "so we'd love an extra eight." I've tried to now invert that paradigm.
If I get asked this question or, rather, even if I don't get asked this question, one thing I ask the client is, "How have you calculated the value of this project? "You've emailed me for a reason.
"You know that making your site faster is gonna make you "more money.
"How did you work that out? "How do you know it? "Because I'm not your business analyst, "I can't give you this data.
"Like I said, I've tried before to give vague estimations "but I don't really know.
"I don't know if performance is your biggest bottleneck. "I don't know how much you're projected to make "next year anyway." If I have to tell them, "Oh you're gonna make an extra 1.5 mil," and they're like, "Well, we're projected to make "an extra three anyway so you're gonna lose us money." I'm like, "No, that's not what I meant." I can't have all the answers to this.
But I can be for a little bit, right? If you pay me, I'll be your business analyst. So what I've started doing with clients, if they don't know how much the project is worth then it, kind of, leaves everyone a bit in the dark because we don't know what we're aiming toward. So what I'll do with clients now is I'll suggest and recommend and try and insist we have what I call Phase Zero.
Phase Zero happens fairly asynchronously and it's usually part of that gathering data bit that I mentioned.
It's a period of pre-engagement.
It's a non-obligation kind of phase where we work together to work out how much the project is worth.
And if at the end of it, they work out the project is worth enough money to them, they know how much they're likely to make but, more importantly for me, I know how much I'm likely to make out of it myself.
I know how much to charge them for that project. In a value-based world, what I need to know is how much you're gonna make.
"Okay, you're gonna make that much? "That seems nice.
"I think it'd be fair if I take this slice of it "and what do you think to that?" So SpeedCurve.
I adore SpeedCurve.
It teaches me so much about how my clients' sites work and stuff that they didn't even know about. So the pre-engagement, what I'll normally do is set them up with a free SpeedCurve account because I find it's easier to show rather than tell. If I can give the client an existing dashboard, it's very much more likely that they'll be (mumbles), "There's the company card, we wanna keep this thing." Using customer data in A/B tests, we can work out some fascinating things.
I think most of this data is gonna be anonymized unfortunately, but a recent client...
We worked out that if we can start rendering at under three seconds, we make three times more conversions than if we start rendering after six seconds. Now we can define the project.
We need to get as many sub-three second start renders as possible, that tells me what I need to do and the client can work out, "Well, if our average order value is X, "and we stand to 3X that number," they can then work out the value of this project. They can put numbers against it.
They know how much they're gonna make if I manage to get these three second start renders. This was a food blog.
One of the key customer metrics we captured was how soon does the key recipe image render, how quickly does that recipe render.
What we found is if we can render that image between .4 and .5 seconds, we get the best possible user retention.
Bounce rate is at its lowest if we can render the images in .4 to .5 seconds, which is very ambitious, especially considering the current start render for the image is at one second.
But at least now I know what my task is.
It's to somehow speed up the delivery of that image, but also my client now knows how much it'll be worth to them.
If they can put a number on their average ad revenue versus time on site or length of session, they can work out what their yearly ad revenue return would be.
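That "yearly ad revenue return" calculation could look something like the sketch below. Every number here is made up for illustration; the real inputs would come from the client's own analytics, exactly as the talk argues.

```python
# Hypothetical back-of-envelope: if the faster recipe-image render lifts
# time on site, and ad revenue scales with time on site, estimate the
# yearly return. All figures below are invented for illustration.
sessions_per_year = 10_000_000
revenue_per_minute = 0.002       # ad revenue per minute on site
avg_minutes_now = 2.0            # current average session length
avg_minutes_if_faster = 2.3      # projected lift from the .4-.5 s render

uplift = sessions_per_year * revenue_per_minute * (
    avg_minutes_if_faster - avg_minutes_now
)
print(f"Projected extra ad revenue: {uplift:,.0f}/year")
```

The model is crude on purpose: the client, not the consultant, owns the inputs, and the output is only there to put a defensible number against the project.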
This is a really interesting one.
I've got a bit of a, not a love-hate, but maybe just a hate-hate relationship with Cloud.typography.
Working with a client, we realized that Cloud.typography was really slowing them down, like tremendously slowing them down.
So I reached out to Cloud.typography and I did it all very amicably and it was like a responsible disclosure, and I said to them, "Hey look, "I've noticed a few things going on "and I think you're slowing a lot people's websites down." Their official line was, "We don't care." So I was like, "Okay, maybe I can gather you some data." Ran an A/B test with this particular client and a certain percentage of users were given a page with Cloud.typography missing, and then the rest were given the normal experience. Cloud.typography had a 25% impact on bounce rate. That is absolutely outstanding and the tragedy here is my client is spending money on Cloud.typography but is actually losing revenue because of it. Redress the balance, save some cash on your font and claw some revenue back.
This is the worst way (murmurs) to possibly have it. We can now put numbers and figures against this. We can now start to cost these projects up. So now, in theory, my proposal email has become as simple as this.
"If we can achieve a start render of .9 seconds, "we stand to increase conversions by eight percent. "Across one year, this equates to increased revenues of one "to 1.2 mil, I'll take X percent of that." That's fair, right? It means that we know what we're aiming for, I know how much they're gonna make.
They know they're gonna make an extra mil a year. They can't fob me off with like a couple of hundred Euro, right? It's not gonna work.
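The arithmetic behind that proposal email is simple enough to sketch. The uplift figures come from the talk's hypothetical proposal; the fee percentage is an assumption, since the talk deliberately leaves "X percent" open.

```python
# Back-of-envelope value-based pricing, using the hypothetical numbers
# from the proposal above. The fee rate is an assumption; the talk only
# says "X percent of that".
annual_uplift_low = 1_000_000    # projected extra revenue per year (low)
annual_uplift_high = 1_200_000   # projected extra revenue per year (high)
fee_rate = 0.10                  # example: take 10% of projected uplift

fee_low = annual_uplift_low * fee_rate
fee_high = annual_uplift_high * fee_rate
print(f"Proposed fee: {fee_low:,.0f} to {fee_high:,.0f}")
```

Because the fee is derived from the client's own projection rather than a day rate, both sides are staring at the same number, which is the whole point of Phase Zero.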
We're now both equally invested in this project, we both have reasons to gun for its success. Skipping out the entire implementation phase, we've done it.
How do we keep on top of things? One thing I really pride myself on, and it's simple and it's maybe a bit egotistical, bit big headed, but I really pride myself on being pragmatic.
It's my job to just deliver results, right? Just get things done.
One of the things I really like and one of the things my clients really appreciate is setting performance budgets pragmatically. Who has asked or been asked this question before: "How do we set our performance budgets?" It's a struggle, right? It's a difficult one to define, like, what should we do? I've got a very, very simple, very off-the-shelf, one-size-fits-all answer. You should set your performance budget wherever your worst moment was in the last two weeks. Every two weeks, or whatever time scale makes most sense for you, your budget should be updated.
Your budget should just equate to whatever your worst point in the last two weeks was.
What this means is you need to keep bringing your budget down but you can never raise it again. Most of the clients I work with don't want challenges, they need safety nets.
They don't want me to set targets, right? It's not called a performance target, it's called a performance budget.
So what they need me to do is set it pragmatically. So looking at this, we had a budget of three seconds but for the last two weeks, we came in at 2.2 second start render.
We bring the red line down to touch the curve. Our next two weeks is now defined by the highest point in the last two weeks.
This budget, we were topping out three times over the two-week period.
We can leave this one as it is, right? We need to just maintain this one.
This one, however, we were going over a lot. The budget is overshot daily.
What we need to do here is double down on this page, on this metric, and make sure we solve it.
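The ratchet described here, budget equals your worst point in the last two weeks, and it only ever moves down, is simple enough to express as a one-liner. A minimal sketch, not production monitoring code; the sample numbers are invented.

```python
# The pragmatic budget ratchet: every review period the budget becomes
# the worst (highest) value seen in the last two weeks, but it can only
# tighten, never loosen again.

def next_budget(current_budget, last_two_weeks):
    """last_two_weeks: list of measured start-render times in ms."""
    worst = max(last_two_weeks)
    # Only ratchet downward; never raise the budget back up.
    return min(current_budget, worst)

budget = 3000                        # current budget, ms
samples = [2100, 2200, 1900, 2050]   # worst point in the period: 2200 ms
budget = next_budget(budget, samples)
print(budget)  # the red line comes down to touch the curve: 2200
```

Note that if the period went badly and the worst sample exceeds the current budget, the budget stays where it is: it's a safety net, not a target that drifts upward.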
Next, I need to normalize performance.
I need to make sure performance is discussed like it's always been there, make sure people are nonchalant about it. Make sure it's mentioned in passing.
Make sure it's discussed in retrospectives, sprint planning, feature scoping, (mumbles) to make it so that performance is such a normal thing to talk about. Have dashboards for individual teams, make sure the marketing team has their own dashboard, the business insights team has their own dashboard. Ultimately, I wanna blur the lines.
I wanna make performance just, kind of, commonplace. What I want to do is make sure people aren't discussing performance in terms of digits.
The CEO doesn't wanna know about time to first byte. The marketing team doesn't wanna know about start render. Everyone in the organization has a vested interest in sustainability, whether that is revenues, profits, growth, whatever. Performance is actually just a proxy for what we're really discussing.
Performance is a kind of a loose way of talking about what we're actually aiming for, which is more engagement.
It's more cash, it's more signups, it's more donations. Performance is basically a proxy.
I wanna eliminate performance numerically from the discussion.
I want the engineering team to have dashboards showing them revenue, right? The marketing teams have dashboards showing them engagement. I don't want them to discuss things like time to first byte or start render.
And finally, kind of aptly, because I'm about to go over time, know when to stop.
Good enough is exactly that, right? It's good enough.
Performance never truly stops, right? It's an ongoing thing, it's a cultural effort, it's a cultural business change, but at some point, you're gonna have to stop. Performance will switch from being an active pursuit to a passive pursuit.
Good enough is exactly that.
It's good enough.
What ends up happening if you get too blinkered and blindsided is you miss out on the bigger picture. You'll make huge gains really early on, make massive savings, then businesses get obsessed.
You're gonna be spending weeks and weeks and weeks chasing milliseconds.
What they fail to realize, if they zoomed out, is that, "All right, actually if we just added Apple Pay "to the checkout process, "we'd make 17% more conversions on mobile anyway." You'll end up optimizing to a local maximum if you don't stop chasing performance, right? At some point, it switches from active to passive and it goes into maintenance mode.
With only 44 seconds left, I should say thank you very much. I'm gonna try and summarize the talk in four bullet points. Understand the situation fully before you begin. Gather as much data as you can and then gather a little bit more.
Maximize the work not done.
Don't waste time optimizing the wrong things. Find out exactly where your problems are and chase those. Calculate the value of the project.
Understand, as either a client or a provider, how much this product is worth in monetary terms and finally, know when to stop.
When have you achieved your goals? When can you take your foot off the gas and switch from active to passive? With that said, if you want any of the good stuff, you know where to find me.
Thank you very much for listening.
(audience applauding) - [Announcer] We have time for Q&A.
- Okay, cool.
Oh, I get to sit here.
- That one? - Yeah, that way I can see offstage. - Okay.
(crosstalk) - That was awesome! - Thank you. - So many great takeaways.
I was trying to, like, pay attention and tweet out things that you were saying at the same time and I was falling behind so I think I saw a lot of other people doing the same thing. - The slides, I'll tweet them shortly.
I noticed people trying to take pictures but I'll tweet a link to the slides when I get offstage. - Awesome.
Yes, there's lots of great tips and takeaways and I think, it's interesting to me how so much of the conversation around performance has kind of shifted away from just, like, "These are the tools and this is how they work" to actually, like, "Okay, how does it work "in the real world?" And trying to get people onboard with performance. So we had quite a few questions come in so I'm not gonna be able to probably get to all of them, but one question was, "Have you ever faced a situation "where you had a client approach you "and they wanted to change something on the performance side "of things, but you saw that actually "there were worse problems, like UX problems "or just the whole business model is broken?" It's a bit awkward. - Yeah, I have.
Obviously, not gonna name them, but I worked with a client this year who... Like, I had access to a lot of their, sort of, monitoring and they're, sort of, (murmurs) analytics and their conversion rate was terrible generally. And I was like, "Do you know what? "I don't think performance is why you're not making money. "I think it's probably because the entire web, sort of, "E-com part of the business is just vastly underserved." And they corroborated that.
They were like, "Yeah, it kind of, that's how it works here "but we've worked out, using some Google marketing metrics, "whatever, that we're losing 1.5 mil a year" or whatever it was.
So I was like, "Okay cool, we can work on that." And it was a good project but sometimes, it is just, yeah. Maybe you just need to overhaul your mobile checkout experience or maybe you just... Maybe no one wants to buy your product.
It's an awkward conversation to have so that's part of the initial scoping is like, "How do you know the site is slow and why is it a problem?" - Yeah, and is it the biggest problem? - Yeah, exactly. - So in that situation,
it sounds like you were able to kinda go in and still help them with ROI and as much as you could. - Yeah, yeah, the data suggested that...
"It suggested," it told us that improving performance did help but I do think that there would've been other things that could've been done beforehand.
I'm very candid about the fact that what I do is a very small part of a business's day-to-day and yeah, maybe an overhaul for their mobile checkout would've been more effective. Maybe. - You need to wear more hats.
- One man complete overhauls.
So we had a few questions coming in about metrics, which I think is interesting 'cause this conversation is, sort of, never ending. Like, you mentioned you focus on start render and not load time, and we should look at medians and 95th percentiles and things like that.
Like, one question that came in from somebody in the audience, and I realize everybody's at kind of a different stage in terms of familiarity with the metrics.
It's like, "Why not load time?" - Okay, so load time is...
It's not terrible, right? And it's a good proxy.
If you've got 75 second load times, you probably don't have a fast website.
But it's a private event to the browser, right? Users don't care about load time.
I get clients, all the time, where third parties will push their load time back to 30 seconds on mobile, (murmurs) usable after four.
So they get really disheartened by these 30 second load times.
It's like, "Well, ditch all your third parties "and the site will feel exactly the same "but you've just got a better load time." Your customers aren't gonna go, "Oh, I'll buy it now "the load time's better." - I think you referred to it as a legacy metric, which I think is good because at one point it used to be a meaningful metric, when pages were simpler and load time was closer to what people actually experienced in the browser. It's just kind of now we have...
In the early days, we had no third parties, for example. - Exactly, right?
Back in the good old days.
I mean it's a good proxy and, like I said, Analytics gives me that data and nothing else so it's...
Seems like I have to start at that point.
But yeah, usually, load time is, like, the sort of widest-angle, widest view you could get, and we're gonna drill down eventually, when we get custom monitoring set up, to more pertinent metrics.
- So we're outta time.
We had a lot more questions so, you're around? - I'm around. Grab me for questions. - Easy to find.
- Great, thank you so much. - Thank you so much.
- That was awesome. - That was great, thank you!