And we'll open with Tim Kadlec.
Now Tim has come all the way from right up near the border of Canada in the middle of the United States in a place called, well, actually, I think it's what, is it Eagle? - [Tim] Eagle River. - Eagle River.
That sounds pretty awesome.
Near a place called Rhinelander, right? Which probably you can barely find on the map. So he has a long journey by multiple means of transportation to get here. He lives up there, but travels around the world. You've probably heard his name spoken at many conferences, and I've had the good fortune to share a stage with him in Berlin and in Wisla.
So I'd hear him speak and see what he's got to offer, which is amazing.
Recently, Akamai recognized that as well and now he works with Akamai on helping people make much faster web stuff.
And also kind of helping inform standards process and making the web a faster place.
So you're probably gonna see a theme of performance emerge over the next couple of days, but that's, I think, one of those critical things in all our lives in front-end engineering, that we, you know, make it faster is really important. And I think you'll go away over the next few days with a lot of great information around that. So to kick off, and to tell us all about how to make our users at least think their websites are working faster, would you please welcome Tim Kadlec? (audience clapping) - Thanks, John.
The human brain is a pretty remarkable thing isn't it? Consider what's happening right now.
You're sitting, you're settling in, you're getting ready for two days of content, of listening to people like me up here.
And this is gonna be something that's fairly, I mean, fairly easy.
It's certainly not anything we'd consider to be exhausting or strenuous, at least, we hope not.
You know, the conference is young.
We could see how this goes.
But while you're doing this, while you're just sitting there, passively, in your chair, your brain is working in overdrive.
You see, what's happening is your brain is receiving all sorts of sensory input.
It's receiving information about what you're hearing, and what you're seeing, and what you're touching, and what the temperature is.
And it's processing all of this sensory input. It's filtering through it and lining it up to figure out and create a picture of reality as you're experiencing it right now.
And this is a really complex task that takes a lot of work. See, all of these different types of sensory input, they're processed at different speeds.
So for example, touch is processed much faster than sight or sound, and if I were to do this, (claps) my brain combines the feeling and the sound and the sight all together to create one, unified event: a clap.
Not only does it have to align all these things up in order to do that, but it has to filter through, because think about all the things that you're hearing and seeing and smelling that don't relate to what you're trying to focus on right now, right? Your brain is pushing all of that aside and saying, "That is not primary.
"That is not part of what I want to be paying attention to." And once it's filtered through this, it then has to apply some sort of pattern recognition. Pattern recognition based on your past experiences and beliefs to infer some sort of meaning from these signals.
And these pattern recognitions do vary based on your past experiences, which is why some people look at a piece of toast, they see God, and some people just see a piece of toast. It's also why we are able to do something like this. We can get lines and dots and shapes, and if we put them up in just the right order, it looks like a face, even though it's just lines and dots. And this is what people mean when they say that perception is reality.
But I don't think that's, it's not entirely true. I've always had a little bit of a beef with that statement, I think, because there are some things that are real no matter what, right? So the edge of this stage is a very real thing, and if I were to fall off of this right now, there's gonna be some real consequences, right? No matter what I perceive about the edge of this stage. So I think a more accurate way of putting it is perception frames reality.
It's the lens through which we view it.
Every individual moment of our day is comprised of our perception of that experience. So the whole process is incredibly powerful, incredibly complex, and actually really easy to manipulate. So does anybody know who this is? This is the President of the United States, President Obama. And it seems okay, right? But if we flip this around, suddenly we're going to have a little bit of nightmares and need some therapy here because this is messed up. The eyes are messed up, the mouth is messed up. I didn't change anything about the photo.
It's just the way that it's been arranged.
So what happens here is your brain looks at the first version, and it says, you know, something's off here.
I mean, it's upside down, it's not quite right, but I see eyes, I see a nose, I see a mouth. I know what a face is supposed to look like, so that's what I see as the face.
And it's only when you face it, face it, oh gosh.
Face it straight on that you realize just how messed up it really is.
So here's another example.
These are two shapes.
One of these is a kiki, and one of these is a bouba. It's a lot of fun to say, by the way.
I practiced this slide more than any other slide in the deck just so that I could repeat those words. Kiki and bouba, it's fun.
How many people think that shape A is the kiki? How many people think that shape B is the kiki? All right, so much less hands.
All right, everybody's wrong because it's actually two random, abstract shapes with no name attached to them. (audience laughing) And yet, when this study is done over and over again with one random shape that has sharp angles, and one random shape that has sort of this blobby thing, over 90% of people respond that shape A, the one with the sharp angles, is the kiki. And again, your brain is scrambling to infer some sort of meaning from this, and all it has to work on is this shape, and how it looks as well as the name.
So kiki is a very kih-kih.
That's a very sharp sound, right? And bouba is actually very, kind of round.
In fact, your mouth does like this thing where it rounds out and fills out the sound. And so your brain says, okay, well, I've got this sharp angle thing, and I've got this sharp sound, kih-kih.
That must be the kiki.
And I've got this sort of round sound and this blobby thing, and that must be the bouba.
And so it infers meaning, it creates a connection where there is none. So our brains are very easy to manipulate, and as it turns out, time is one of the easiest ways to mess our brains up.
How many people think 300 milliseconds is a really long time? Yeah, there's always a few hands that go up. Those are the people who do performance work inside of their organizations right now 'cause most normal people know 300 milliseconds, it's not, I wanna show you what 300 milliseconds is. I wanna show you how long it is, and I wanna show it to you in the most exciting, dramatic way possible.
So what I'm gonna do is I'm gonna move this box to the other side of the screen in 300 milliseconds. It's gonna be exciting, everybody ready? There, did you see that? Wanna do it one more time.
One, two, three.
There, that's it.
That's 300 milliseconds.
Now, I don't know for sure, it's possible, but I'm gonna go out on a limb and say that most of you out there right now are not sitting there thinking, (sighs) "Oh my God, that is 300 milliseconds "of my life I am never getting back.
"What the heck is going on?" (audience laughing) Because it's not a lot of time, right? And yet, this is a video from an experiment using Oculus Rift, and what happened was people, (audience laughing) people walked around with this thing on their face, and it fed them what they were seeing, but with a 300 millisecond delay.
That's it, just 300 milliseconds.
(audience laughing) And it turned them from functional human beings into a bunch of complete oafs.
It doesn't take very much time to throw us off. In fact, when it comes to digital interfaces, we're susceptible to even smaller delays.
There have been numerous studies, but most notably those done by Walmart and Amazon, that have shown that even something as small as 100 milliseconds, shaving 100 milliseconds off the load time, had a corresponding 1% increase in revenue. And again, nobody's sitting here being like, "This site is 100 milliseconds slow.
"I'm not gonna order this yodeling pickle." That's not the way this works.
What happens is it throws your brain off just enough that you alter your behavior in just some minor, little way that causes you to maybe click one last link, or maybe push one last button.
Now, 100 milliseconds, as it turns out, is a really important threshold when it comes to perception of performance. Studies from as early as the late 1960s found that 100 milliseconds is how long we have to respond to user interaction for it to feel instantaneous, for there to have a direct connection between what the user does, and then what the machine does in return.
Those same studies found that 1,000 milliseconds is how long we have to remain in a state of flow, to remain focused on the task.
Any delay longer than this and we start to get distracted and think about other things.
And this presents, let's say a challenge, because we like to be optimistic at the beginning of the conference, at least. A challenge on the web because there's a lot of work that goes into getting the web page to display on your screen.
So let's say I wanna load up the Web Directions' Code page on my mobile device, all right.
Here's what has to happen: I have to type in that URL, and the first thing that happens is the browser has to figure out where that URL actually points to, right? What is the IP address, what is the address of the server to get that content from? So it does this DNS resolution.
The DNS lookup is, you know, hopefully, ideally it's pretty quick.
Your browser's trying to cache very aggressively, your operating system's caching, but sometimes you do have to go out.
You can actually, if you have Chrome and you open it up and go to chrome://dns, they actually have histograms that'll show you how long it takes for the resolution of DNS, you know, as you've been browsing.
And when I loaded this up on Canary, 80% of my DNS resolutions were taking somewhere between 20 and 100 milliseconds.
That's how long they were taking.
So it's not free.
After the DNS lookup, now it knows where that server is. Now it needs to open up a connection, and it does this in TCP land using a three-way handshake. You know, the client sends a packet to the server, the server acknowledges, and then the client acknowledges, and they move forward.
I've got a coworker who calls this the yo, yo what's up, sup handshake.
He doesn't get out very much, but.
(audience laughing) Once you've got that connection open up to the server, now increasingly our sites are on HTTPS because security is important but also because we're not getting to use any toys if we're not on HTTPS. There's no service records, no H2, geolocation, things like that.
So increasingly there's another step in the process here where we have to go through this SSL negotiation, which can take as many as two round trips.
And then finally, we get to actually make that first request, for that first asset, just the HTML. And if we're lucky, if they've done a good job with the site and kept it small, it fits under maybe 14 kilobytes, and it comes back in a single trip, otherwise we might have to go back and forth between the server for a while until that file has downloaded all the way. So it's very chatty.
Now, let's put some numbers to this.
Let's say that it takes about 100 milliseconds roundtrip. This is not like amazing speed, but this is also not obnoxious.
If you load this up on a mobile network, you're often going to get results that are much, much slower than this.
In fact, you have this whole other step in front of that where you're doing negotiation with the radio towers and stuff that it slows it down even more.
So when I say that we're spending 400 to 500 milliseconds of our performance perception budget on that first request, that's not a best, like that's not a worst-case scenario, right? That's kind of middling, that's kind of somewhere in the middle.
And this is why every performance advocate ever has gone up on a stage at one point or another in their career and said that the fastest request is no request at all.
And I do mean every performance advocate.
It's actually like we have a club, and to get in the club you have to agree to say this at least twice a year.
You're also not allowed to use any frameworks, and you have to master the secret three-way handshake. That's a terrible joke.
I'm kidding, but this is an important thing to remember. There's a reason why this has caught on, and it's because everything that you add to your site, everything that you add to your page has to have value because everything definitely has a cost. There is nothing free.
And so it's important that we sit there and we ask, do we need this other script? Do we need this other image? Those are important questions to have.
Now, the challenge we face is that increasingly the answer is yes because people are expecting their sites to perform faster and faster and faster, but they're also expecting their sites to be richer and richer and more immersive.
They're expecting sort of the best of both worlds here. And so the question we increasingly have to ask is how to balance that, how to balance that fidelity, that richness with the speed. And this is where initially, usually we jump right in to the technical stuff from here, right? Like this is, and in fact, to be fair, I'm going to give you a bunch of technical stuff in this presentation.
But there's a risk in jumping right to the technical side of things, right? 'Cause I think that that's why for so long our industry has sort of struggled with this idea that performance is the developer's responsibility, right? I mean, no surprise when every time there's a performance problem, we solve it with another technical solution. We immediately jump to that without stopping to think. There's a big risk here, though.
So there's this story about a multi-story office building in New York, and the office manager was getting a ton of complaints from all the tenants about how slow the elevators were to go up and down, especially during peak, I mean, we've all been there when we've been in the elevator and it gets off on every single floor and like 30 minutes later you're at the lobby.
They were getting so many complaints about this that he felt like he had to do something, so he asked some engineers to conduct a study. And so the engineers took a look at the elevators, they took a look at the mechanics, and figured out what was going on.
And they came back and they said, "You know what, we actually can't make it any faster. "With the equipment, it's as fast as it's gonna go "and still be safe." Now, this doesn't solve the problem, right? So the office manager still has this issue to deal with. So the manager did what a manager often does, he called a meeting.
And in the meeting, as legend has it, one staff member was a recent psych major, and that staff member approached the problem from a different perspective.
So everybody had been so caught up in the elevator itself, and the mechanics of the elevator and the technical limitations.
He approached it from the perspective of what about the people in the elevator? Like what is it about this delay that makes them complain? I mean, we wait for things all the time.
What is it about being in the elevator that makes them complain so angrily? And so he thought about this, you know, okay, we've got a big metal box, and we shove people inside of this with other people that they may or may not know, and then we expect them to sort of wait while they're anxious to get to wherever they're trying, whether it's back to their room or to the office, or they're heading out to go get some lunch, they wanna get out of there.
Maybe they're bored.
Now, today what we would do is we would, I think it's drop a lure or some incense or something so that the Pokemon, I don't know exactly how that's, we lure some Pokemon with some incense inside of the elevator, and then people would be distracted, right? But they didn't have that sort of technology back then, so what he did was mirrors.
He put mirrors in the elevators.
And the idea here was that well, if there's a mirror there, then people can kind of distract themselves, right? If there's a mirror in the elevator, I can look at the mirror and I can sort of, you know, check myself out I guess.
I can actually technically people watch.
This is something that they pointed out when they decided to put the mirrors in there. And apparently, that's less awkward or creepy than doing this to the person next to you.
I don't know, I think it could still creep somebody out if you were, yeah.
But they put these mirrors in the elevators, and it worked. People stopped complaining about how slow it was. People were distracted enough, they had something to occupy them.
Now the lesson here isn't that we are vain creatures who love to stare at ourselves or creepy creatures who like to stare at everybody else. Not all of us.
The lesson here is that performance is about the user. It was only when they shifted their focus to the user and what the user was going through in that elevator that they were able to solve the problem.
And it was not a technical thing.
And this is so key because when you shift your frame of view, it shifts this from being a technical problem, one of technical limitations, to one of creativity.
It's creative limitations.
It's only by focusing on the user and focusing on what's happening, that you realize what's really going on here, and what really matters.
You see, when they get into the elevator, there's the starting point of this whole entire experience, and then eventually they get out of the elevator. And every little moment in between that is time spent waiting.
Now this waiting can be broken down into two basic categories.
You have passive waiting versus active waiting. Passive waiting is where I'm sitting there with nothing to do.
There's nothing I can do to make this go faster. There's nothing I can do to really distract me. I have no choice.
I'm just kind of sitting there.
Active is the opposite, right? We have something we can do, whether it's a physical activity, whether it's visually processing something. There's something to keep us moving and feeling like we're moving forward at least, and getting some sort of progress.
And the passive waiting, that's the killer. Because there have been studies around queuing and things like that, that show that when people have to wait passively, they overestimate the time that they spent waiting by about 36%.
This is the stuff that bogs us down.
So when we're looking at this from a technical perspective, when we're looking at our websites and applications, we need to try and shrink that passive waiting as much as possible.
So a great example of this comes from iOS.
For years, they've been doing this animation inside Safari when you go to open a new tab. So what happens here is I click the link, and you see this little animation where it kind of tilts everything back and slams it all forward.
Now, this is far from just visual candy, right? What's going on here is a couple things.
First off, it's giving us a little bit of a mental model. It's letting us know that these pages are not being deleted, these other pages that we're not viewing anymore, they sit behind here in sort of a stack, right? So there's information portrayed that way.
But what it also does is that it actually gives Safari a head start because what Safari can do is as soon as you tap on that Open a New Page, it's starting to load that page.
It's already going out and doing whatever DNS resolution, establishing the connection, those things that we talked about, all the while, while it's animating.
And this is only like a 300 millisecond animation, but as we've seen, that can be a meaningful head start. Now, on the web, this used to be a pretty challenging thing for us to do, like to sort of load things up ahead of time and get a head start on things, but that's starting to change.
So resource hints I think is a big key player in this. In 2011 basically, WebKit wrote this post about the speculative parser, look-ahead parser, pre-parser, whatever browser vendor you talk to, they might call it something slightly different. The idea being that instead of just parsing, requesting stuff as it comes across it, and then moving along, it would try to scan the HTML and look ahead and find things that it can preload or pull in ahead of time, so that it gets a head start on these connections.
And it's a really powerful thing.
I think they reported like 20-plus percent improvement. Every major browser has this in some form now. The only problem with it is as developers, we've never been able to talk to it before. We've never been able to communicate to it. And that's where resource hints come in.
They let us, knowing what we know about our sites and applications, tell the browser to do some of this stuff ahead of time.
So, I wanna walk through this really quick. So, with resource hints, you can include it in one of three ways on your page.
You can include a link header in the HTTP response. You can have a link element, just like you would if you were including CSS or something like that. Or you can inject that link element dynamically with JavaScript. The first of these hints is dns-prefetch. With dns-prefetch, you give the browser a domain name, and it goes out and does that DNS resolution ahead of time. Now, how much time does that actually save you?
The DNS could already be cached, and you might not see improvement at all.
Or it could be anywhere that 20 to 100 milliseconds. It just depends on the scenario.
But this is a tougher one to gauge.
It's quite possible that you're gonna throw a DNS prefetch out there and actually not be able to see anything super meaningful on your metrics.
There's just a lot of variables involved.
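As a sketch, a dns-prefetch hint can be declared either as a response header or as a link element. The domain here is just a placeholder:

```html
<!-- As an HTTP response header: -->
Link: <https://fonts.example.com>; rel=dns-prefetch

<!-- Or as a link element in the head of the page: -->
<link rel="dns-prefetch" href="https://fonts.example.com">
```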
Preconnect is DNS prefetch on steroids.
Preconnect says the exact same thing.
Give me a domain name, but now I want you to go through the whole process. I want you to open up the connection.
So not just do the DNS resolution, but open up a TCP connection, go through the SSL negotiation process.
Now we're starting to get a little bit more of a head start here, a little bit of a more serious way to move the needle.
It's not supported as well as DNS prefetch, but thankfully it's pretty easy to use these in sort of a progressive enhancement approach. You can include DNS prefetch to the domain, and include preconnect just below it.
Now, browsers that support DNS prefetch only are gonna just do the DNS resolution.
And browsers that support preconnect will go a few steps further, and you're not gonna get any weird thing where it's trying to open a ton of different connections to the same domain. It's smart enough to collapse those down.
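Stacked together, the progressive-enhancement version looks something like this (the CDN domain is a placeholder):

```html
<!-- Browsers that only support dns-prefetch do the DNS lookup.
     Browsers that support preconnect also open the TCP connection
     and do the TLS negotiation, and they collapse the two hints
     rather than opening duplicate connections. -->
<link rel="dns-prefetch" href="https://cdn.example.com">
<link rel="preconnect" href="https://cdn.example.com">
```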
Preload, preload is for individual assets.
So preload says that I know that I'm going to need this stylesheet or this script or this image and it's going to be on this page, so why don't you go ahead and do everything? Make the request, if you have to, do the DNS resolution. Do the whole thing, and download this asset ahead of time because I know you're gonna need it.
So this lets the browser do this without having to do the preparsing step.
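A sketch of what that looks like; the file paths are made up, and the as attribute tells the browser what kind of asset it's fetching so it can prioritize correctly:

```html
<!-- Fetch these now, because this page will definitely need them. -->
<link rel="preload" href="/css/styles.css" as="style">
<link rel="preload" href="/js/app.js" as="script">
```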
It can get this clue from the developer directly. Now, prefetch is basically preload, but now we're talking about the next page, the next page that you're gonna go to.
So a preload is for that specific page that they've opened. Prefetch says, I think that I'm going to need this stylesheet for subsequent page loads.
So maybe you've loaded, maybe you've got your styles chunked out, and so you've got a small, compact stylesheet that comes in right away for this page.
And then you've got the full CSS that you wanna load up so that it's there for everybody else.
You could potentially request that with a prefetch, And it does the same thing, it's gonna go ahead and grab it all and put it in cache so that as you move forward on the pages, there it is for ya.
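That could look something like this (hypothetical path):

```html
<!-- Low-priority hint: fetch the full stylesheet for subsequent pages
     and put it in cache, once this page has finished its own work. -->
<link rel="prefetch" href="/css/full.css">
```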
And then prerender, prerender's sort of the, it's the most impressive one, I think, but it's also the one that you could potentially shoot yourselves in the foot with.
It's a little bit riskier.
Prerender says, it's, again, it's about the next page, but prerender says, they're gonna go to this URL.
I'm pretty sure they're gonna go to this URL next. So why don't you basically open up an invisible tab, download all the assets, apply the CSS, build the Document Object Model, go through the entire process, and then if they move to that page, it just swaps the tab right over.
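As markup, it's just one more link element (placeholder URL):

```html
<!-- Riskiest hint: fetch the next page's HTML and assets, build the DOM,
     apply the CSS -- everything short of showing it -- in a hidden tab. -->
<link rel="prerender" href="https://www.example.com/next-article/">
```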
Now, if you've ever done this or seen this working, you know. If you've ever looked at an AMP document, 70% of their performance sugar is all aggressive preloading and prerendering sort of stuff, done in their own little way, but it's still the same concept.
It's impressive when it works.
It is instantaneous and you're like, ah, this is how the web should be.
The risk here is that if you go through this entire process with prerender and have it do all this stuff, it's not trivial, right? The browser is working to do all the parsing, and to do all the layout, and all of these things. Plus, you're downloading potentially all this data, and if people do not go to this next page, then you've sort of, kind of, you know, wasted data for no reason.
Now, it used to be when this first came out, you could actually mess with some browsers with this too. Like I had one organization right after prerender was first supported in one of the IE10, IE11s, somewhere there, and they were like, this is great, we're gonna prerender everything, and they put like 20-plus links and the browser would crash. So don't do that, I don't think you can kill a browser anymore, I haven't tried for a while, but it would be kind of fun.
So if anybody feels like maybe experimenting with that, that would be a cool thing to report, like how many links it takes to kill it, if it kills it.
But I would assume they're smarter by now, but you still wanna be careful because it's still potentially costing the user data even when they don't need it.
So again, DNS prefetch, preconnect, preload. Those are all focused on the current page.
And prefetch and prerender is focused on the next navigation.
So these are really powerful things if you use them right because they let us get a little bit of a head start. They let us start to be a little bit more proactive with performance, but we can go even further. So if we go back to this Web Directions site, this is, we talked a little bit about what the network, what's involved in establishing a connection. Let's talk about what happens once the browser gets that HTML back, what happens.
It needs to create the Document Object Model, a representation of all the elements in a page, and the CSS Object Model, a representation of all the CSS within a page that might apply to those elements.
Now, JavaScript complicates things, because it can alter the DOM or the CSS Object Model. So you could do things like document.write, you know, "let's go grab a cuppa and get some brekkie." And then you could change the style or the width or something of an element on the page.
The first one changes the Document Object Model, the second one changes the CSS Object Model. And so the browser doesn't know ahead of time, so the browser, it has to assume the worst. It has to assume you're gonna mess with something, and that's why it blocks.
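As a tiny, hypothetical illustration of those two kinds of changes:

```html
<script>
  // Changes the Document Object Model:
  document.write("Let's go grab a cuppa and get some brekkie.");

  // Changes the CSS Object Model, via an element's inline style:
  document.querySelector("h1").style.width = "50%";
</script>
```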
Once it has this, once it has the DOM and the CSS Object Model, it can create a render tree, so now we're talking about a representation of the elements that are actually going to be displayed on a page, and the CSS that applies to them. So things like comments, which would be in your Document Object Model, are not gonna be in your render tree.
And then layout, I like to think of this, whether fully accurate or not, it's close enough. I like to think of this as sort of taking Lego blocks and figuring out where everything's gonna sit, and what dimensions they're gonna have.
And then finally, painting this out onto the screen. This process, getting from the network to this initial render, this initial paint, this is what we're talking about when we say the critical path, optimizing for the critical path, if anybody's heard about this.
And anything that stands in the way of that initial render is a critical resource.
So let's break this down.
Here's a little bit of a, just a little chunk of HTML. Even on this simplistic site, there's still three critical assets here.
The first is the HTML.
The browser needs the HTML to do everything. There's nothing we can do around that.
The second is this stylesheet, and the third is our script.
So let's zero in on the script first.
Let's zero in on jquereactangular.
(audience laughing) It's probably in production somewhere.
And let's talk about why this is blocking, and how to get this out of the way.
So what happens normally is that a browser is parsing the HTML, it comes across this script element, and it has to pause things so that it can parse the script, and it can execute the script, and then it can carry on its way and continue on. It blocks the entire process.
It's not too hard now to shift it out of the way. So we have the async attribute.
So if we add the async attribute to our script elements, what happens now is the browser comes across it, and it's gonna download it in parallel, and then pause only to execute it before it continues on its way.
So now we've sort of taken most, we've taken all of the downloading time out of that blocking area. There's also defer, which on paper sounds amazing.
Defer is like the, it's async but with more juice, a little bit more steroids involved.
Defer says, when you come across a script, delay, do the parsing, but then delay the execution until everything else has been done.
Like we know it's not gonna manipulate anything, so push it way to the end.
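Side by side, the three behaviors look like this (the script name is made up):

```html
<!-- Blocking: parsing stops while this is fetched and executed. -->
<script src="app.js"></script>

<!-- async: fetched in parallel, executed as soon as it arrives
     (order between async scripts is not guaranteed). -->
<script src="app.js" async></script>

<!-- defer: fetched in parallel, executed after parsing finishes,
     in document order. -->
<script src="app.js" defer></script>
```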
Now that sounds great, but the problem with defer is that there are some pretty serious bugs in IE9 and IE10 in terms of the order of execution for these scripts, so if you are at all concerned about supporting those browsers, you probably shouldn't be using defer, unfortunately. So we'll slap an async attribute on, and move that out of the critical rendering path. Now the stylesheets, that's a little bit more intense. There's a little bit more work involved here. So if we open this up again on the Moto X, there's this initial content that I see right away. And then there's all of this junk down here. Oh, it's not junk, it's really good information. There's all this really good information down here that I do not see on the initial page load. That's harsh.
That I do not see initially, right? So when I'm loading this page, what I'm looking at is this. How quickly does this initial viewport come into display? I actually have no idea how long it takes this stuff to load.
And so from a CSS perspective, I can figure out what selectors apply to, you know, these elements up here.
And that's the CSS I need, that's the critical CSS, the CSS I need to be able to see things right away. So if I know this, what I can do is I can replace the link to the full styles with an inline chunk of the CSS, a subset of the CSS that applies to what is inside of that initial viewport.
They do good work.
So in this case, what happens is the critical CSS is now inline, so when I make that request for the HTML, boom, it's there.
It's in the initial request, the browser has everything it needs.
And then the full thing can be loaded asynchronously after the fact. I can set a cookie so that on subsequent page loads, I don't have to do that, I don't to go through the asynchronous loading. But now I get the best of both worlds.
On that initial page load, everything is there. On subsequent pages, the full CSS is in cache. Now, by doing this, we've now reduced our number of critical resources to one.
It is the HTML.
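Putting the pieces together, the head of the page might look something like this — a sketch with a placeholder file path, using the preload-and-swap pattern (popularized by Filament Group's loadCSS) for the asynchronous load:

```html
<head>
  <!-- Critical CSS, inlined: arrives with the very first HTML response -->
  <style>
    /* …only the rules needed for the initial viewport… */
  </style>

  <!-- Full stylesheet, loaded asynchronously: fetched as a preload,
       then swapped to a real stylesheet once it arrives -->
  <link rel="preload" href="/css/full.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/full.css"></noscript>
</head>
```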
Now, there's a few things I often hear about the critical CSS technique in particular. One is that if you are like me, and you kind of were raised up in sort of a firm web standards sort of approach to doing things, inline CSS can feel a little icky.
You know, you may, the first time you do it, or the first two times you do it, feel a little like Jim Carrey in Ace Ventura when he has to put the plunger to his mouth, and then take a shower, crying, and stuff like that. You get over it.
I mean, it's a good performance thing and it's not that bad. In this case, it is a little bit of a hack because we're pushing stuff inline, but it's getting the job done, and it's making the experience much better. Now, the other thing that I hear is that it's a very difficult process, and if you do this manually, that's absolutely right. It is a very difficult process, and it will not last.
Trying to manually go through and figure out what's in that initial viewport does not work. This has to be automated, whether it's a service that you're paying for, whether it's a build process that you have where it's analyzing these things, but if you do that with these key templates, then it sort of can maintain itself along with your build process.
It's not too bad.
And it does make a pretty significant improvement in performance. Filament Group, when they were talking about this stuff, took a Wired article and they loaded it up on 3G, and then they changed the Wired article. They didn't change anything about the weight of the page, or the requests, or anything like that.
They just did what we talked about.
They had some inline critical CSS.
You can see the load time is about the same, but I've got content coming in way earlier. So now I can scan headlines and scan body.
I have stuff to look at and stuff to process. I've eliminated most of that passive waiting time. So this is great for doing that, but what happens if we can't do that? What happens if you're at a, I don't know, credit card company, and you have all these different services that you have to connect to and go through for security purposes. You know, some of those things are pretty old, and really, really slow, and really, really hard to move forward in terms of optimizing for performance.
So what happens if we can't eliminate the wait? I'll tell you what we do right now, right now we slap a little spinner on there, right? I really hate the progress bar, progress spinner. I'm not a fan.
To me, it's a very uninspired solution to the problem.
See progress bars, in my opinion, are the hold music of the internet.
What we're basically doing when we put these things on, is we say, "Okay, hold on, hold on, hold on. "This is gonna take a little while.
"Why don't you guys sit back, relax.
"We'll throw on some Kenny G.
"It's gonna be great, you just wait a few minutes." All progress bars, all these spinners really do is they call attention to the waiting.
They focus on the waiting, they focus on the fact that you're waiting for something that you really want but you can't have it yet.
This is a lesson that Luke Wroblewski wrote about with his now defunct app, Polar.
Polar, the whole idea with Polar, has anybody used Polar when it was out? A couple of hands, okay. So Polar, the whole idea was it's about micro interactions. It was a mobile app where the polls were one photo versus another photo.
And the idea was you see the poll, you select a photo, you scroll up.
You know, it's this really quick, sort of, I've got time to kill and now I'm just apparently gonna vote on a bunch of polls.
So they did a lot of stuff to optimize that interaction, and make sure it was as quick as possible.
But they still had to go out to the server occasionally to get additional information, additional polls, or profile information.
And the problem is they have no control over that, right? They have no control over the terrible 3G, 2G network conditions, or if you happened to be going under a tunnel or whatever it was, like that's something they don't own.
So there were delays, and when there were delays they'd throw a spinner in the experience to break it up. And everybody kept saying that Polar was really, really slow.
It was a very slow app.
So they decided to change it up a little bit. And they changed to something they called skeleton screens. So instead of showing a spinner, what they'd do is they'd have these sort of gray blocks, and little bits of text.
And then when the server came back with information, they'd progressively fill this out.
Now, the skeleton sort of sets out the expectations for you using the app, right? You can see where the photo's gonna come in, and where some of this text is gonna come in. You have stuff to process.
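A minimal skeleton screen is just placeholder elements that match the layout of the real content — a sketch, with class names and dimensions invented here:

```html
<!-- Gray placeholders shaped like the content, swapped out when the
     real data arrives from the server -->
<style>
  .skeleton { background: #e0e0e0; border-radius: 4px; }
  .skeleton-photo { width: 100%; height: 200px; }
  .skeleton-line  { height: 12px; margin-top: 8px; width: 60%; }
</style>
<div class="poll">
  <div class="skeleton skeleton-photo"></div>
  <div class="skeleton skeleton-line"></div>
  <div class="skeleton skeleton-line"></div>
</div>
```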
And when they did this, the complaints about it being a slow app vanished. People stopped complaining it was slow.
They had things to occupy them again.
As Luke pointed out, skeleton screens focus on the progress.
It's a subtle shift, but it's a shift that frames the experience in a positive light instead of a negative one, it's a shift from focusing on what you don't have to focusing on what you're about to get.
And this is something that's becoming increasingly common on the web.
Facebook does this.
If you've ever loaded Facebook on a slow connection, you'll see there's these little gray bars that are very light.
They're just empty divs.
They're divs with inline, like, one little line of CSS saying what gray color to put in the background. But they stub out where all that content's coming in. This is also something that the application shell model — the one service workers enable, that Google's been talking a lot about — makes possible, where you're able to use service workers to cache a shell of your application or site so that it's there on the device, and then when you move from page to page, you can pull in the dynamic content, but the shell remains the same.
I'm not gonna get into the mechanics of that. There's gonna be a great presentation on service workers later.
But it's enabling this sort of experience, where we can start to stub this content out. One of my favorite examples is actually from Travelocity. So this is Travelocity if I go to book a flight. Now you'll see that there's this little progress spinner thing, ignore that. They used to have something better there.
I don't know why, I've got to talk to them and see why the shift happened.
But focus on this box over here, okay? So when I go to look for a flight, this box is going to start to update.
And what's going to happen is as I hit Submit, it's saying, first off, that it's checking with the airlines that fly this route.
And now it's going to start to sort these things. It's gonna start to, searching for the shortest flights. It's gonna start to compare costs, and do all of these things for me.
Finding the best fares.
Now, every time I've ever shown this example, there's somebody, there's somebody in the room, maybe multiple someones.
One time there was literally somebody who went like this, and it was right in the middle, and they were just looking up at me like, you know like, "Tim, you're an idiot." Like, they're not doing these things when they say they're doing them, all right? (chuckles) I get that, I understand.
They're kind of lying, but it's a nice lie, it's a nice lie. They're not doing these things when they say they're doing them. What they're doing is they're exposing all of the information, all of the things that I don't have to do.
I don't have to search through these.
I don't have to search for the shortest flights. I don't have to compare fares.
It's doing all of this stuff for me.
And people value the results of this much more than if that information is not presented.
They've done experiments specifically around the travel industry with this sort of information, where you get this update and all the things that it's supposedly doing in the background versus the standard progress indicator.
And people are willing to wait up to five to 10 seconds longer when they're getting that feedback because they realize that something's going on, and they're also being made aware of all the things they're not having to do. They're willing to tolerate that wait much, much more. That feedback shows progress.
That feedback also eases anxiety.
If you think about it, I don't think any of us think that booking travel is a super fun experience, right? There's a little anxiety involved.
And this is important because when we are anxious about things and when we're stressed about things, our perception of time slows down.
So what happens is in high-stress situations, your brain becomes hyperactive and it starts to collect more and more information, and it lays down more and more memories.
This also happens in situations that are new to you. So when you look back at that, there's all sorts of information that's been laid down, all sorts of memories.
And so it feels like that took much longer. This is also why how quickly time seems to pass changes when you're a kid versus when you get older, and you have fewer and fewer new memories.
So easing the anxiety is gonna help your brain settle down and not lay down so many memories. So when I go back and think about that experience that I just had, it's not going to feel like it was quite as long.
But you do need to be careful because going faster isn't always better, which is always the weirdest thing to say as a performance advocate on a stage, so that doesn't get tweeted, that doesn't leave this room.
But sometimes it's okay to go slower.
Don Norman, co-founder of the Nielsen Norman Group, talked about working on an app for H&R Block, a tax app.
And this application, you know, it's on a computer so it moves pretty quickly.
So it gets to some steps, you know, if you've ever done taxes by hand, there are some steps that are super painful. They take forever, and there's a lot of stress, and you're confused, and you have to do all these calculations. But it's a computer, so the computer can do this instantly.
So this is great, right? This sounds awesome.
An application that's gonna do my taxes immediately, I don't have to do the calculations.
But when they went and they tested this experience, what they found was that they would get to this part, this part where the computer was supposed to be doing these calculations, and the users of the application would look at it, and get the result instantly and be like, "Oh no, no, no.
"That takes time, there's no way you did that." We know that computers are faster, and yet we still don't trust it to actually be doing all the work that it says it's doing.
So Don had an interesting problem here, right? It was too fast, so what he did is he inserted an artificial delay.
It was some sort of random few-second delay: when you got to this step of the app, it would calculate things instantly, but instead of displaying the result to you, it would insert an artificial delay and give you some information.
Be like, hold on, just wait a second.
We're checking tables and we're doing this and all that jazz.
And then eventually the results would be displayed to you. And suddenly people trusted it, right? They now believed that the computer was doing all the things that it said it was going to do. This is called operational transparency.
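The artificial-delay idea can be sketched in a few lines — the result is computed instantly, but held back behind a short sequence of status messages so the work feels believable. All function names and messages here are invented for illustration:

```javascript
// Resolve after a given number of milliseconds.
function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Compute the result immediately, then reveal it only after walking
// through a sequence of visible "work" steps.
async function calculateWithTransparency(calculate, showStatus, stepMs = 400) {
  const result = calculate(); // the real work: effectively instant
  const steps = ['Checking tax tables…', 'Verifying deductions…', 'Finalizing…'];
  for (const step of steps) {
    showStatus(step);      // expose the work going on "behind the scenes"
    await delay(stepMs);   // the artificial pause per step
  }
  return result;
}
```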
The idea is that when you make people aware of all the work, make people aware of all the stuff going on behind the scenes, and expose it, people are going to feel like it's, they're gonna trust it more, it's gonna feel more valuable to them, and it's also gonna feel a little bit faster. And now this is extremely common.
There was a great article recently that dissected all the different applications that you use, and services that you use that are doing this right now. And it's always around these high-stress situations. You know, Facebook if you're doing a security check. This thing can happen almost instantly, but instead they draw it out, and they say things like, "Checking your post." And they make the process go very slow because they want you to feel more secure, and they know that if they give you that immediately, you're not going to.
This is like Wells Fargo has these retinal scanners to verify your identity at the bank.
And again, that's another one of those situations where they had to slow it down because if I step up to a retinal scanner to be able to get to (chuckles) all of my money, and that's my sole form of identification, and it's like (imitates air whooshing), okay, that's you. I'm not gonna trust that, right? I want that to be like, I want you to have to hear some bleeps and bloops, and see some lights moving here.
(audience laughing) And I love this because it really highlights that, you know what, performance is not a blind race to the finish line, right? Performance is really, it's about choreography. It's paying close attention to the user and what they're feeling, and knowing when to slow down, and when to speed up.
It's about building sites that are easy to use, sites without unnecessary friction.
Because, you know, forget the metrics for a second. Put all the metrics aside.
I love metrics as much as the next person, but it doesn't really matter at the end of the day, right? What really matters is how the site or application feels to the user.
That's what matters.
That's what's at the heart of performance. You know, it's about considering how they feel, because perception matters, because perception frames reality.
That's how we're going to judge those experiences. People want those experiences.
They want things that are rich.
They want things that are high fidelity.
They just don't wanna have to wait for them. And if we focus on the user, if we shift to this user-centric performance model, and consider what they're going through and how it feels to use our sites and applications, then they won't have to wait.
(audience clapping) - Thank you so much, Tim.
We've got time for a question or two, and we've got an incentive for people to ask questions. So we've got some issues of Offscreen magazine. If you don't know it, it's an awesome magazine about the people behind the bits and pixels, and it's produced and edited here in Melbourne by Kai Brach.
Many of you will know him, but if you don't, we've got some to give away. So ask a good question, and we'll give you a copy of that. And I think we had one just here.
So we'll get a mic, we'll run that 'round.
- Hi Tim.
I just wanted to ask about the CSS inlining you were talking about.
So you just include the CSS you need.
The big issue that came into my mind, looking at that, was you have no idea how big the screen is, so which parts are gonna be visible? I could be loading my home page on my phone, or on my vertical 27-inch monitor at work, which loads pretty much any site in its entirety in a single page.
How do you deal with that? - Yeah, so how do you deal with the fact that you don't know what size the screen is.
You can pick a big size.
So in the build process, what you can do and the tools, so Filament Group has a tool for this called CriticalCSS that can be included inside of Grunt, Gulp, any sort of Node-based process. There's also Penthouse, there's a few other ones. All of them as a configuration option give you the ability to set a width and a height. I can't remember what the default is, at least, in Filament Group's, but it's quite wide, and significantly longer than it needs to be.
So it's not gonna collect all of the CSS, but it's a little bit more aggressive.
Ideally, we'd be like oh, boom, this is the size, and we'd be able to generate it on the fly and spit that out and have just the CSS for the very specific screen real estate, but that doesn't work so well, so what we have to do is go big, collect all the CSS that might apply to that area, and then serve that up.
So as long as the CSS that's collected is still small enough, it's all right.
If it's under that 14 kilobyte size when it's bundled inside of the HTML, then it doesn't matter if I've collected CSS for just this little subset, or the bigger screen, as long as it fits within that single response back, I'm still gonna get the benefits.
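That "go big" advice can be baked into the build step's defaults — a sketch of an options helper for a Penthouse-style critical-CSS extractor. The dimensions and option names here are illustrative; check your tool's documentation for its real defaults:

```javascript
// Build the options for a critical-CSS extractor. The 1300×900
// viewport is deliberately generous so the extracted CSS also covers
// large screens; these numbers are illustrative, not the tool's own
// defaults.
function criticalCssOptions(url, cssPath, overrides = {}) {
  return {
    url,                // page to render
    css: cssPath,       // the full stylesheet to extract from
    width: 1300,        // go big: a wide desktop viewport
    height: 900,        // taller than strictly necessary
    ...overrides,       // e.g. { width: 1920 } for even wider coverage
  };
}
```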
So the idea is just go big, and acknowledge that you're gonna have some CSS that's not gonna be needed in that critical CSS, but if you can keep it small enough still within there, and usually you can, you're still gonna get the same performance benefits. - That's a really good question, so I think-- - That was a great question.
- That's right, have you got your copy? You've got a copy, it's well earned.
All right, okay why don't we just, probably running poor Mel up, just back a bit. Did you have a question back there? No, over here.
I don't think Codral has kicked in yet.
Poor Mel has kind of just come down with something, and she went skiing all weekend, so I don't know just how much sympathy I've got for her. Far away.
- Thanks very much, Tim.
I'm wondering if HTTP/2 will factor into any of this. - Yeah, what does HTTP/2 play into it? So theoretically, H2 has a feature called server push, where if anybody's not familiar, server push basically lets you, before the browser ever requests anything, you can proactively push it down to the browser, push an asset down.
So there is actually, with the Filament Group, I worked with them to experiment around some server push implementation to try and replace critical CSS with server push. So instead of having it inlined, let's still create the critical CSS 'cause that doesn't change in H2, you still want that small size.
So we generate the critical CSS, but then push the full CSS down instead of going through the process of having it loaded asynchronously. And the results were not good.
So, it actually came in much slower.
Now, this isn't, I don't think this is always gonna be the case. I think server push is really, really potentially powerful, but also very, very young.
And so it's got all these other variables like priorities and dependencies that have to be figured out that both the browsers and the servers haven't perfected yet.
And so, as a result, because we're in such green field area with that, they're not optimized for it.
So while right now using server push is probably not something you want to do instead of critical CSS, in a year, maybe two, I don't know, yeah.
I eventually suspect that there will be some maturation there and some experimentation, and they're gonna figure those things out, and it's going to be able to replace and simplify the process.
Like I said, you're still gonna want to create the critical CSS, I think, and send that down independently, but server push could simplify the process of getting the full CSS in after the fact.
- Excellent, another good question.
All right, time for one more before we break for a cup of coffee-- - Can we get some from over there just so that she has to go all the way over or no? (audience laughing) - There's the perception of performance, (drowned out by Tim's laughter) taking a long time. Who was over here, who had a question? - No, no, I was just like, I was being mean, I was just being a jerk.
(audience laughing) - Has anyone got a question? One more, got time for one more.
Oh, right at the back. - [Tim] Right at the back.
- Wind your way through the maze.
Speaking of which, maze, tomorrow afternoon, you'll be amazed by something that is maze related, all right.
- Hi Tim, thanks for the great talk.
In the React world, there's a lot of talk about CSS modules and programmatically generating a lot of CSS.
Trading the space for confidence that there won't be clashes.
How much of that is a tradeoff with the performance, particularly with your critical CSS section that you talked about? - Yeah, where do you work? (audience laughing) Do you work at Facebook? - No. - Okay, good.
You actually have the perfect segue then for, Josh's presentation is going to go into, I don't wanna do any spoiler alerts here.
So Josh has a presentation about React and about optimizing for performance using React, and a lot of that comes down to reducing how many roundtrips, and really it plays into the perception.
It's actually a perfect segue for that presentation, so instead of answering, I'm just gonna say watch Josh's talk and you'll get it. - That's tomorrow morning at the second session. All right, we actually have time for one more. I'm just checking to make sure.
I'm not playing Pokemon up here, by the way. Just making sure the food's all set up.
They're saying, "Yeah, give them a couple more minutes." These are really good questions.
These are way better questions than Sydney. What, are you dancing? (audience and Tim laughing) - Yeah. - [John] See what I did there.
- I mean, there's nobody here from Sydney, right? - We kept them up there, right? - Okay, then yes, there's way better questions in Melbourne, yeah.
- See, that's one of the advantages of getting the second. One more.
- Hi, thanks for talking today, it's pretty great. A lot of us have seen these things when we were in Amsterdam, so it's good.
So my question is with the preloading and the prefetching headers.
There was a bit of confusion about how that actually works. Just wondering if it would work with HTTP/2 and SPDY protocols, or is it just HTTP 1.1? - Oh, so resource hints is independent of whether it's H2 or H1.
It's completely removed from the network stack in terms of that, that per, yoor-up, eck, yoo. Hold on, let me speak with words that are real. (audience laughing) So resource hints has nothing to do with H2 or H1. It's purely about what the browser supports. So it's very complementary, and it's very beneficial in both scenarios. And it's very complementary for a lot the H2 stuff, actually.
In fact, for server push, for example, preload is being used as a hint for when things should be pushed down from the server. You can use it on both, and I recommend using it on both protocols. You don't have to just wait for H2 to use it. So it complements both very, very well.
And they are hints, so that's one of the nice things about the resource hinting that I really like, is that because they're hints, the browser can make decisions based on what it knows about the context, what it knows about the person's browsing conditions, where they're at, you know, all of these things, and it can decide if this is going to be detrimental to performance then, it's a hint, I don't have to do it. So if you tell the browser I wanna preload this asset, the browser says, well, this is on a bad connection, or I'm so strained with everything else going on, I don't think that's going to be a good idea, it can decide not to do it.
So it's nice because it lets you pair everything you know about the site and application with everything the browser knows about the scenario and sort of the broader domain of sites. But yeah, going back to protocol, it works very well on both.
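In markup, the hints are just link elements — the URLs here are placeholders:

```html
<!-- preload: fetch this asset now, at high priority, for the current page -->
<link rel="preload" href="/css/full.css" as="style">

<!-- prefetch: a low-priority fetch for something the NEXT page will likely need -->
<link rel="prefetch" href="/js/next-page.js">

<!-- dns-prefetch: resolve a third-party hostname ahead of time -->
<link rel="dns-prefetch" href="//cdn.example.com">
```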
- All right, cool, thank you.
- Once again, let's thank Tim.
And Tim will be around.
You can come and say hi.
He's kind of hanging out throughout the next couple days. But thanks so much Tim for that wonderful start today. - Thank you.