- You know, a lot of the time on the web, especially at the design end of the web, we think about the surface.
We're thinking about the HTML, CSS and the interaction layers, but in fact, as I've alluded to, and as we talked about a bit last night on the security side, this whole stack, to use a terrible term, of technologies that we work with is important.
There's actually a layer that we tend to ignore entirely, which is what happens between our devices, and the servers, or as we call it these days, the cloud.
For a long time, we've just taken it for granted. The underlying technologies have been pretty stable for almost two decades, but a lot has been happening in the world of HTTP over the last couple of years.
If you've been to our code conference, we've had quite a few sessions, around these changes.
All of these have gone from being speculative, the sort of thing you maybe should be thinking about, to, basically, something that has arrived.
HTTP/2, as we're about to hear, is the first major change to HTTP in pretty much 20 years, and it's now in most browsers and in most servers. Of course, it falls back to older HTTP if it's not available, but it also has significant impacts on how we design and build for performance.
To start this session, we have Peter Wilson. Works here locally at Human Made, he's spoken at Web Directions before, he's been building web stuff for many, many years. He's gonna come along and talk to us about this layer, and what impact it's gonna have for us, when we build on the front end.
To talk all about that, please welcome, Peter Wilson. (audience applauding) - So, before I start, I don't know about you, but I thoroughly enjoyed Jim's session this morning, and all of yesterday.
Let's put our hands together for John and the Web Directions team for such a great event.
(audience applauding) For most of us in this room, the year 1969 is incredibly important.
In many ways, it could be seen as the most important year of our working lives, even though for many, maybe even most of us, it happened some time before we were born. Around 9:30 on the night of October 29, 1969, a group of researchers at UCLA sent the first message on the ARPANET, which was the predecessor to the internet, to the Stanford Research Institute. The message they sent, half the length of California away, was simple.
Login.
With the sending of a five letter message, the path to the internet, and, eventually, the world wide web, had begun.
It was an inauspicious beginning, because the message that actually made it to Stanford, was lo.
Performance issues have been with the web, from the start. That's what I'm going to talk about today.
The direction the web is going is something that concerns me a great deal. As John alluded to, I was developing websites back in the 1990s, and back then we had a maximum download speed of 56 kilobits per second, which was very much a theoretical maximum, at best. When old man internet, John, (laughs) talked about caring about every byte on the page, back in the day, it's because we had to.
We had no choice.
I still think about bytes on the page, in my role as WordPress engineer at Human Made. We make high-end sites, using WordPress, often using it as a headless CMS, and delivering content via an API.
As far as performance goes, the web's in the midst of a transitional period. With the release of the spec for HTTP, version two, and the increasing, but not universal, browser support, we need to consider how server configuration influences our front end code.
As we're in this transitional period, code for one circumstance may cause problems for code for another.
We need to know, even front end developers need to know, the basics of the HTTP protocols. That's how I see myself.
I see myself as a front end developer, who knows enough backend to be dangerous.
It's the front end that I'll be focusing on today. I'll cover the basics of the HTTP protocols, but I don't intend to do a deep dive into server configurations, or squeezing out every last optimization from an Nginx config file. Servers are really hard.
I'll be looking at the current state of the web, and where we stand in terms of performance, as we transition from HTTP, version one, to version two, and what this means for you and your employers. I'll then look over measuring performance on your own sites, the key measurements, and what they mean. Having looked at the speed of the net, and the speed of your own site, I'll then look at the two versions of the HTTP protocol, and how the presence of the two protocols affects the need for performance tuning on your pages.
Since late 2011, the HTTP Archive, which is run by the same people that run the Internet Archive, has collected stats on the Alexa Top 1 Million sites. Twice a month, they visit these 1 million sites, both on a desktop using a broadband connection, and on a mobile, simulating a mobile connection. Their stated aim is to record not just the content of the web, which they do in the Internet Archive, but how that content is delivered and served. That's recorded in the HTTP Archive.
It focuses our attention on the state of front end web development, including how we, as designers and developers, are changing the web. With this focus on the front end, let's take a look at what's happened to the bytes on the page in recent years.
In April last year, the average page weight on the internet passed two megabytes.
We're currently sitting at a tad under 2.3 megabytes. We'll definitely pass 2.5 megabytes, by the end of this year.
That's for each and every page on the internet. Mobile users fare slightly better, with the current average weight of a page delivered to a mobile being 1.2 megabytes.
The average weight of a page, is two and a half times what it was in 2011, just five years ago.
There's any number of statistics I could show you, to demonstrate how we, as web designers and developers, have damaged the web.
I said damaged.
I could show you the slowly, but steadily increasing number of assets we're using on a page.
Currently at about a hundred, but I won't show you that.
(laughs) I could show you the fifteenfold increase in the use of webfonts, or any number of the 44 data points collected on the HTTP archive, but I'm not gonna show you them, either.
(audience laughs) Because they're factual, but meaningless.
What I want to do, together as a room, is try an experiment.
I want to sit through the average loading of a webpage, in nothing more than awkward, awkward silence. (audience laughter) Let's go.
That's 4.2 seconds before render begins, 12.7 seconds before the page is visually complete, and 15.2 seconds before the page has finished loading. It's worth remembering that this is the average load time on a wired, broadband connection, using a desktop. On a mobile connection, surrounded by EMF interference on a tram or a train, it's going to be slower.
Now, look, under no circumstances, am I going to dare suggest that everything about the web was better, back in the day.
However, I will claim that as designers and developers, we've been taking liberties and making allowances that far outstrip advancements in bandwidth. As a user, poor performance is an inconvenience. Sitting on public transport, looking at a screen like this, with a blue progress bar apparently going backwards, or, for iOS, sometimes even literally going backwards, it's incredibly frustrating.
After a few moments, the user gives up, and switches to a competitor's website.
It's at this point here, as the user switches to your competitor's website, that performance starts costing you money, and, yes, I mean you in this room.
Without meaning to get all neo-liberal, trickle-down economics on you, even for the employees here today, an effect on your company's bottom line affects how much they can pay you.
Case study after case study has revealed the effect of performance on revenue, through declining conversions.
A few years ago, Walmart acknowledged, internally, that their site had performance issues.
Those performance issues became apparent on high traffic days, such as Black Friday and Cyber Monday. Walmart found that their conversions declined exponentially, the longer users had to wait for the page to load. Those first four seconds were a real killer. Shopzilla increased revenue by 12%, just by improving load time by a few seconds. Yahoo! increased page views by 9%, with an improvement in load time of only 400 milliseconds.
It's been calculated, that it would cost Amazon an inhumane amount of money, were they to slow down their site, by a mere second.
To put that lost $1.6 billion in sales, annually, into perspective, in 2013, Amazon founder Jeff Bezos purchased the Washington Post company for $250 million.
That one second slow down, it would cost Amazon that, every 55 days.
In percentage terms, Amazon calculated that there was a 1% drop in sales for every additional 100 milliseconds the site took to load. Even on small sites, gains like this, from small improvements in performance, can really add up.
It doesn't take a great deal of imagination to see why improving your site's load time, could lead to a tidy, little bonus, come Christmas time.
Which is a pretty nice side effect of making the web a better place.
Once you decide to improve performance, you need to know where your site stands.
To find out, what is slowing your site down, and discover where the easier wins are.
Because, why refactor code to save 100 milliseconds, if removing a blocking request will save you three, four, five times this amount.
You'll hear people talking about page speed score as a convenient shorthand.
I include myself in this.
It's a single number that provides relatively minimal insights.
To avoid calling anyone out by name, I'm going to use my own site to demonstrate measuring a page speed score, and performance, in general.
It may look like I'm doing this to brag.
Huh.
It's actually to demonstrate how easy it is, for these things to get out of hand, if you don't continue to pay attention to performance. For more meaningful metrics though, I prefer WebPagetest. It's a much more convenient tool for measuring the effect of a change.
Pretty much, enter your URL, and you go.
There are a bunch of locations and other settings you can set on WebPagetest.
However, when you're measuring the effect of a change to your site, what you want to do, is you want to keep all those settings on WebPagetest consistent, just to get the effect of the change down.
When choosing a browser to test in, I take my lead from Google Analytics.
For my site, that means Chrome, iPads, iPhones, but client sites aren't visited almost exclusively, by middle-class internet professionals.
For the clients, the browser selection will often vary. When you first run WebPagetest, you're presented with this: the key metrics. I've highlighted the three on the screen that I think are most important.
All relate to the user's experience of the page loading: the time for render to start, for render to complete, and for the page to finish loading.
These are the three that I highlighted earlier, as we were waiting for the average webpage to load. The numbers on this screen are from when I added a blocking HTTP request to my HTML Header.
This is kind of the advantage of experimenting on a personal site.
You can make a quick change, even really dumb shit, just measure the effect of it, and then, revert it all nice and quickly.
However, on WebPagetest, I tend to spend more time looking at the timeline view, because my biggest concern is how the users experience the website loading, which is often referred to as perceived performance. As long as the text is readable and the calls to action are working inside a user's viewport, it doesn't really matter what's happening elsewhere on the page.
One of the more helpful features on WebPagetest is that you can compare two or more timelines. As I mentioned, I added this blocking HTTP request to my, HTML Header.
As it happens, I was switching between an external CSS file and critical path CSS, so I compared the effect.
Above the timelines, you see the timestamps of the page loading, which are, in this case, at half second increments, but they can be more or less frequent.
Below both timelines, you see how much of the content inside the viewport, is rendered.
This equates nicely to the perceived performance that I like to measure.
Using the comparison, I can quickly see, that adding this blocking CSS request to my HTML Header, added, roughly, a second to the load time.
On my site, that means doubling the time, before the user can read the content.
WebPagetest has its own single number metric, a sort of unit for measuring a page's performance: the speed index.
It calculates the average time for each element inside the user's viewport to become visible. A lower score is better.
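For reference, WebPagetest documents the speed index as, roughly, the area above the visual completeness curve, so the longer the page spends looking incomplete, the higher the score:

$$\text{Speed Index} = \int_{0}^{t_{\text{end}}} \left(1 - \frac{VC(t)}{100}\right) dt$$

where $VC(t)$ is the percentage of the viewport that is visually complete at time $t$; the result is in milliseconds.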
However, for me, those single number metrics, represent a convenient clickbait score, for selling the idea of improving a website's performance, internally.
But, it's the full data from tools, such as WebPagetest, that are going to help you work out your shorter and longer term goals, and discover where the quick wins are.
To understand how front end code affects performance, it helps to understand, a little bit, about what happens when a browser requests a webpage, which is the HTTP protocol.
Today, I'm going to start with HTTP, version one, because, as I'll cover in a few moments, that's how most data is transmitted on the internet today. According to Section C of the Talks About Performance handbook, I should be showing you this diagram right now. It shows you that, following the DNS lookup, the web browser makes two round trips to the server: the first to make a connection, the second to get the page.
It kind of shows this without showing anything. I prefer to demonstrate things.
Let's switch to the Command line.
First, the browser does a DNS lookup, and the name server responds.
This isn't even on that diagram, this happens beforehand.
Then, the web browser connects to the web server, and the web server responds.
This is the start of the connection, the first round trip.
Having connected to the web server, the browser then requests the page it's after, in a protocol it understands.
In this case, HTTP, version 1.1.
It tells the server the name of the site it's after, because there can be a number of sites hosted on any one server.
The browser includes a bunch of other information: the browser's UA string, cookies, all sorts of information that's unimportant when you're running it in the command line like this.
This is the start of the second round trip.
This is when the server responds with the webpage. The browser then needs to repeat this process, for each asset on the webpage.
It can make multiple connections in parallel, and reuse them if needs be, but for each new connection, it needs to go through that original round trip to connect to the web server.
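If you want to poke at those round trips yourself, a minimal sketch in PHP looks something like this; example.com is just a stand-in host, and a real browser sends far more headers than this:

```php
<?php
// fsockopen resolves the host name (the DNS lookup) and opens the TCP
// connection to the web server: the first round trip.
$socket = fsockopen('example.com', 80, $errno, $errstr, 10);
if (!$socket) {
    die("Connection failed: $errstr ($errno)");
}

// The request line names the protocol, and the Host header names the site,
// because any one server can host a number of sites. Browsers also send
// their UA string, cookies and so on at this point.
$request  = "GET / HTTP/1.1\r\n";
$request .= "Host: example.com\r\n";
$request .= "User-Agent: round-trip-demo\r\n";
$request .= "Connection: close\r\n\r\n";
fwrite($socket, $request);

// The response is the second round trip. Every additional asset needs its
// own request, and every new connection repeats the first round trip.
while (!feof($socket)) {
    echo fgets($socket, 1024);
}
fclose($socket);
```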
A big problem with HTTP, version one, is that the browser initiates every one of these connections, which makes it very easy for the web browser to block itself from proceeding as it needs to collect the next asset.
The HTML needs to download, before the browser knows to download the CSS. CSS needs to load, before the browser can start rendering the webpage, which is that green line there.
Once the CSS loads, well, the browser discovers that it's got images and fonts to load, and those, in turn, block the loading of the JavaScript. The JavaScript, well, it triggers the loading of an iframe, with HTML, CSS, JavaScript, repeat, repeat, repeat. This is all conveniently ignoring JavaScript, and all the other items that can block a webpage from rendering.
This is that same waterfall as on WebPagetest, from the start of render to the document becoming complete. This is an example of where the browser is blocking itself. It needs to think about things before it can proceed. Which is, quite frankly, a waste of 200 milliseconds of my valuable time, or in Amazon's case, around $30 million.
I've been saying $30 million, for quite some time. I've just realised that I'm out, by a mere factor of 10, and no one noticed, because these numbers for Amazon are so big, that they're incomprehensible.
For Amazon, that 200 millisecond delay: $300 million, annually.
Anyway, performance doesn't matter.
(audience laughs) By now, if you've been reading the blogs, and listening to the podcasts, you will have heard of HTTP, version two.
As you research HTTP, version two, you'll come across talks and articles with titles like this. Everything you know about performance is now wrong, and former best practices are now an anti-pattern, and considered harmful.
I think titles like this are more than a little bit misleading, 'cause I've taken a look at the statistics, and I don't think we're quite ready to forget everything we know, just yet.
Let's take a look at some of these stats, to see where we really stand.
This is the Can I Use page for HTTP, version two. It looks really nice, there is a lot of green on this page. But, let's take a closer look at the statistics, in the top right corner.
A little over 60% of traffic on the web is using a browser that fully supports HTTP, version two. I don't care about, you know, browser versions, I care about traffic, which means that a little under 40% of visitors to websites globally are using a browser that does not fully support HTTP, version two.
Right now, to stop considering HTTP, version one, is similar to not considering all the IEs, all the Firefoxes, and all the Safaris, which is desirable.
Highly, highly, highly desirable, but not at all practical.
(laughs) While considering browser support for HTTP, version two, is important, it's worth noting that web server support is a bigger hindrance.
At the moment, a little under 7% of websites are running the new protocol.
If you work in client services, or work on a product that gets installed on your client's servers, or frankly, if you didn't upgrade the server yourself, then there is a good chance that your work will be running over the old protocol.
This is what global traffic looks like when you combine the browser and the server stats. It matches the experience reported by Cassidian back in October, when they became one of the few sites that enabled HTTP, version two.
Looked at as a whole, it's all kind of disheartening. But there is good news.
If you've got a web server running HTTP, version two, this is what your traffic will look like.
A large number of your users will be using the new, improved, shinier protocol.
Unfortunately, during this transitional period, a large number of your users will not be, and that's okay.
We all deal with this kind of issue daily.
It could be creating a PNG fallback for an SVG, or providing a download link, as a fallback, for the audio element.
Let's take a look at the difference, when making an HTTP, version two, connection.
The problem with HTTP/2, is that it's a binary protocol, so we do have to fictionalise any look at the protocol, because none of you really gave up your time today, to look at a screen like this.
Once the browser connects to the web server... well, most browsers require that HTTP/2 run over a secure connection.
So, the browser starts off by negotiating that secure connection, and the TLS handshake does add an extra round trip to the equation.
Although, with HTTP, version two, it's worth noting that upgrading from HTTP/1 to HTTP, version two, adds this extra round trip even over unsecured connections.
As earlier, I'm showing an abbreviated version of the data that the browser sends.
The server responds to the TLS handshake request, with details about its certificate.
Along with the certificate's details, it indicates that it supports HTTP, version two. As you can see, on the last line onscreen.
The server includes a lot more information, because negotiating a secure connection on the open internet is rather complicated.
At this point, there's a slight divergence. Browsers that don't support HTTP, version two, continue on their way, merrily using HTTP, version one.
Browsers that do support HTTP, version two, complete the encrypted connection, and immediately upgrade to the newer protocol. Over the same connection to the server, the browser starts requesting webpages, much as it did before. The difference being, as the first asset is being returned, the browser can continue to request other assets, and the server can respond to those requests over the same connection, without having to wait for the connection to clear, or make any additional connections.
Again, this is all happening over an encrypted, binary connection.
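If you're curious which protocol a server will actually negotiate with your setup, a quick sketch using PHP's curl extension can tell you; it assumes PHP 7.3 or later with a libcurl built against HTTP/2, and example.com is a stand-in URL:

```php
<?php
$ch = curl_init('https://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Ask for HTTP/2 over TLS; curl falls back to HTTP/1.1 if the server
// doesn't offer "h2" during the handshake.
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2TLS);
curl_exec($ch);

// CURLINFO_HTTP_VERSION reports the protocol that was actually negotiated.
$negotiated = curl_getinfo($ch, CURLINFO_HTTP_VERSION);
curl_close($ch);

echo $negotiated === CURL_HTTP_VERSION_2_0
    ? "Negotiated HTTP/2\n"
    : "Fell back to HTTP/1.x\n";
```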
The real benefit with HTTP/2 comes from the fact that it's all one persistent connection.
The browser doesn't need to make a connection, every time. You may have heard, also, of HTTP, version two, server push.
A technique in which the server sends resources, that the browser has not yet requested.
It happens over the same connection, but when the browser requests the first asset, the server will respond with that first asset, and also any additional assets, that it feels are needed, say the JavaScript and the CSS.
While HTTP, version two, will result in significant performance gains, server push will amplify those gains a great deal. Unfortunately, both the major web server implementations do make you work for this.
Apache's entire HTTP, version two, module is available, but listed as experimental.
Nginx needs to be proxied to support server push, so you really need to nag your system admins and make sure that they do this.
Even without server push, HTTP, version two, will improve the performance of your site.
Server push is merely the cherry on top of the HTTP, version two, performance cake.
Once you have HTTP, version two, the almost even split between visitors using the old and the new protocols becomes a problem.
Because, while our headline from earlier was over the top, there's a tiny element of truth to it.
Improving performance for HTTP version one, can, and I really want to emphasise that word, can, have a negative impact for traffic over HTTP, version two. Which, in theory, leaves us in dire straits. However, as professional web designers and developers, we don't get our money for nothing.
Or, in terms of our conversions, our clicks for free.
(audience laughter) We need to accept that front end code, and HTTP protocols have become the brothers in arms of performance.
(audience laughter) That completely went over their head in Sydney. (laughs) Here, on the screen, I have a really good example of the conflict between the two versions of HTTP. That's critical path CSS.
It's a technique that includes the most important CSS, that which styles elements inside the user's viewport, inline, in the HTML header.
We saw the positive effect earlier, as that was the change I made to my site, when demonstrating WebPagetest.
I mean, look at it, it's beautiful.
Inline CSS.
(laughter) As I've mentioned earlier, CSS blocks render. This is true, regardless of the protocol.
On HTTP, version one, it kind of looks a bit like this. It looks exactly like this.
But, by including the critical CSS inside the HTML header, we do add to the initial file size of the HTML. We can improve the time to render substantially, because the remaining CSS gets loaded asynchronously, after the visitor has started reading your content and deciding to act on your calls to action. In HTTP, version two, with server push enabled, the CSS starts downloading alongside the HTML. Putting the extra CSS in the HTML header merely increases the file size of the HTML. By not including it, it's possible to improve the perceived performance of the webpage.
Let's take a look at how to take advantage of HTTP, version two, server push.
It's relatively straight-forward to implement, and falls under the W3C's preload specification. The preload specification does also have some sections covering HTTP, version one, but this is largely unsupported by browsers, so I'm just going to let that one go through to the keeper today.
On an HTTP, version two, server, you merely need to indicate to the server which assets to push, with a preload HTTP header.
The web server basically does the rest.
In my PHP-driven world, this is what it looks like. On this slide, you'll see that I've used absolute paths, because, as is often the case with HTTP Headers, using relative URLs can be problematic.
Apart from that triviality, it's all quite simple. I'll note, just quickly, that I'll be using PHP style code samples today.
I know that not everyone uses PHP, so I'll be keeping them relatively straightforward, and slipping into pseudo code as needs be, mainly when function names or the code are not easily parsed on the screen.
An example of said pseudo code is the function that checks for HTTP/2. It's really easy to work out the protocol that's in use for a particular session, but it's not so easily parsed on the slide, so I'm simplifying it, or lying.
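The slide itself isn't reproduced here, but a minimal sketch of that kind of preload header might look like the following; the file paths are illustrative, and is_http2() is the simplified, "lying" protocol check just mentioned:

```php
<?php
function is_http2(): bool {
    // Simplified protocol check: many servers report 'HTTP/2.0' here,
    // but real-world detection depends on your server setup.
    return isset($_SERVER['SERVER_PROTOCOL'])
        && $_SERVER['SERVER_PROTOCOL'] === 'HTTP/2.0';
}

if (is_http2()) {
    // A Link: preload header is the signal the web server uses to decide
    // what to push alongside the HTML. Note the absolute paths.
    header('Link: </css/style.css>; rel=preload; as=style', false);
    header('Link: </js/app.js>; rel=preload; as=script', false);
}
```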
(laughs) Disclaimers done with, let's think, though, about what's wrong with this code.
It instructs the web server to push the two additional files on every load, without considering the state of the browser's cache.
So, if the file is already in the browser's cache, our efforts to speed up our websites result in unnecessary data being transmitted across the wire.
Most server implementations of server push, lack the smarts to determine if a file is in a browser's cache.
Browsers can cancel the transmission, at any time, if it is, but by the time they do, data has been sent, across the wire.
The solution is simply to check if the file is cached, and only attempt to push it, if not.
From now on, I'm going to switch to showing just the one file, rather than showing both the CSS and the JavaScript each time.
Unfortunately, there's no is_cached function, in any programming language, to find out if a file is in a browser's cache. It's not something that browsers report, and for security reasons, nor will they.
To fake the cache detection, we need to set a cookie, indicating the file is likely to be cached, by the browser.
In this example, I'm just setting a cookie with the same name as the file, and setting the value of that cookie to 'cached'. If you've got a file with a version that changes regularly, you might want to set the value to the version instead.
Then, our function is_cached, simply becomes a check for the existence of a cookie.
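A simplified sketch of that faux cache detection might look like this; the cookie naming scheme, the week-long lifetime and the file paths are all illustrative, and is_http2() is the helper sketched a moment ago:

```php
<?php
function cache_cookie_name(string $file): string {
    // A cookie named after the file; dots are replaced because PHP turns
    // them into underscores in $_COOKIE keys, slashes just for tidiness.
    return str_replace(['/', '.'], '_', ltrim($file, '/'));
}

function mark_as_cached(string $file): void {
    // Call before any output. The value could be a version string if the
    // file changes regularly.
    setcookie(cache_cookie_name($file), 'cached', time() + 7 * 24 * 60 * 60, '/');
}

function is_cached(string $file): bool {
    // If the cookie exists, assume the browser already has the file.
    return isset($_COOKIE[cache_cookie_name($file)]);
}

// Only ask the server to push the file when it's unlikely to be cached.
if (is_http2() && !is_cached('/css/style.css')) {
    header('Link: </css/style.css>; rel=preload; as=style', false);
    mark_as_cached('/css/style.css');
}
```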
On a production version, I'd write some additional code, but exactly what that code would look like is very much dependent on the specifics of the site. What I've shown is a simplified version of the approach taken by the web server H2O's CASPer module: cache-aware server push.
It uses the same faux cache detection using cookies, but it hashes the cookies to reduce the file size. That's certainly an approach I'd take in production. Having highlighted critical path CSS as typical of the opposing needs of HTTP, version one, and version two, I'm going to look over the transitional approach, the HTTP/1.5 version, if you like.
After the initial render, the full CSS file needs to be loaded asynchronously by the web browser, to render the page outside of the user's viewport. Unfortunately, browsers don't have a nice async attribute for CSS files, so you need to load the file using JavaScript. Those clever bunnies at the Filament Group have a tool for this, which is, rather creatively, called loadCSS.
Over HTTP, version one, we want to achieve this, using critical path CSS.
Over HTTP, version two, we want to use server push. In terms of the HTML, this is what it should look like if the CSS file is uncached and we're using HTTP, version one.
The critical path CSS, followed by the full file, and setting a cookie via JavaScript.
To allow for the JavaScript impaired, we then load the full file, as we normally would, inside a noscript element, because, we're not animals.
(light laughter) This is what we end up with, in code.
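The slide isn't reproduced here, but a sketch of that combination, using illustrative paths and the is_http2(), is_cached() and mark_as_cached() helpers sketched earlier, might look like this; it assumes output buffering is on, so the header can still be sent from within the template:

```php
<?php
$css = '/css/style.css';

if (is_http2() && !is_cached($css)) {
    // HTTP/2: ask the server to push the full stylesheet alongside the
    // HTML, then link to it as normal.
    header("Link: <{$css}>; rel=preload; as=style", false);
    mark_as_cached($css);
    echo '<link rel="stylesheet" href="' . $css . '">';
} elseif (!is_cached($css)) {
    // HTTP/1, uncached: inline the critical CSS, load the full file
    // asynchronously with loadCSS, and set the cache cookie from
    // JavaScript (the name matches cache_cookie_name('/css/style.css')).
    echo '<style>' . file_get_contents(__DIR__ . '/css/critical.css') . '</style>';
    echo '<script src="/js/loadCSS.js"></script>';
    echo "<script>loadCSS('{$css}'); document.cookie = 'css_style_css=cached; path=/';</script>";
    // For the JavaScript impaired, fall back to a normal stylesheet link.
    echo '<noscript><link rel="stylesheet" href="' . $css . '"></noscript>';
} else {
    // Cached: the plain link is all that's needed.
    echo '<link rel="stylesheet" href="' . $css . '">';
}
```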
An if statement to check if we should try and push the file, and a failing condition to check if we should include the critical path CSS. The apparently opposed conditions between HTTP, version one, and version two, are dealt with by a couple of relatively simple if else statements. Once the CSS file is cached, we can get back to using the HTML header for its intended purpose.
There's the flying favicons.
(laughter) Wouldn't it be nice if that was exaggerated? (laughs) As I mentioned, over the past few years, thanks to services like Typekit, Google Fonts, and, unfortunately, the soon-to-retire Fontdeck, web fonts have seen a huge increase in use on the web. The use of these web fonts has really improved the aesthetics of the web, but they've had a very much negative impact on the performance of the web.
You'll have all seen sites, that look something like this. The site is loaded, the images have loaded, the content has loaded, but the font has not, so we can't read anything.
It's incredibly frustrating, as a user.
Looking at something like this, it's easy to see why visual progress is considered important, by the performance measuring tools. For a well performing site, the visual progress should look like this.
The page starts loading, and is generally pretty readable by the user, fairly quickly.
Towards the end of the page load, the larger assets finish loading, and the page becomes officially complete.
For a poorly performing site, in which the web font blocks render, visual progress basically doesn't happen, and the user can't read anything until right at the end. To achieve this, to stop your users from seeing anything, you only need to follow the nice, prominent instructions on Google Fonts.
Ironically, Google Search will penalise a site's ranking for this, but being aware of such ironies is only vaguely satisfying when it won't help improve the situation.
(audience laughter) For new users of external font hosting providers, it's worth noting, a couple of minor, additional issues. Using an external server to host your fonts, does mean that you cannot use server push, if you are using HTTP, version two.
Using an external server, essentially, forces your browser to initialise the connection, as it does in HTTP, version one, with all those round trips. Even if the new server supports HTTP, version two, that first connection is very much HTTP/1-esque. Anyway, back to our web fonts.
This is what we want to achieve.
Rather than everything but the text, we want an effect, where the page loads, with a standard website font.
Once the web font loads, the font switches out.
This is the flash of unstyled text: FOUT.
Google and Typekit have provided the Web Font Loader to assist with preventing fonts from blocking render on your websites.
It's hidden away quite well, but Google Fonts does provide some JavaScript code linking to this option.
However, these days, there's a more performant option than the Web Font Loader script.
The more modern approach is to use Font Events. Unfortunately, we face the same 60/40 split, thereabouts, that we face with HTTP, version two, and many of the other modern APIs.
We have a little bit of work, very little work to do, to support everyone.
Font Events are a JavaScript API, so we can use a polyfill, to achieve 100% browser support.
Bram Stein has developed the Font Face Observer to act as a polyfill for your browser impaired visitors. The Font Face Observer script is around a quarter of the size of the Web Font Loader script.
The Font Face Observer script does require promises, so it's a little bit larger if you need to include the Promises API polyfill. Following Bram Stein's instructions, you just set up an observer for your fonts, and then, using promises, you check if the font has loaded, and add a class to your HTML element when it has.
You'll notice that I also set a cookie to indicate when the font is loaded.
I use it to add some flexibility to how I run the JavaScript.
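A sketch of that observer, as it might sit in a PHP template; the font family ('My Web Font'), the fonts-loaded class and the fonts_loaded cookie are all illustrative names, and Font Face Observer itself is Bram Stein's script:

```php
<?php // Only the uncached case needs this; the cookie check comes in a moment. ?>
<script src="/js/fontfaceobserver.js"></script>
<script>
    // Watch for the web font; once it has loaded, flip a class so the CSS
    // can opt in to it, and set a cookie so later page loads know it's cached.
    new FontFaceObserver('My Web Font').load().then(function () {
        document.documentElement.className += ' fonts-loaded';
        document.cookie = 'fonts_loaded=1; path=/; max-age=604800';
    });
</script>
```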
So, our CSS from earlier changes from this to this, to include a class indicating when the web fonts have loaded, and to only attempt to use the web font then. This is where the cookie that I set earlier comes into play.
I can use it on page load to detect if the font files and the CSS are likely to be cached, and avoid the FOUT on the second and subsequent page loads. Using JavaScript to load the web fonts does mean that they may not load for the JavaScript impaired. But you're presented with a choice: readable text for all your visitors, with a slightly different font for very few of them, versus a white screen that blocks the content from all your readers, practically begging them to go elsewhere.
I'd encourage you to let aesthetics take the backseat for a few of your visitors.
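Pulling those pieces together, a sketch of the cookie check on page load might look like this; again, every name, path and the placeholder font source are illustrative:

```php
<?php
// If the cookie from the earlier sketch is present, the font and CSS are
// likely cached, so add the class immediately and skip the observer:
// no FOUT on second and subsequent page loads.
$fonts_cached = isset($_COOKIE['fonts_loaded']);
?>
<!DOCTYPE html>
<html class="<?php echo $fonts_cached ? 'fonts-loaded' : ''; ?>">
<head>
    <style>
        @font-face {
            font-family: 'My Web Font';
            src: url('/fonts/my-web-font.woff2') format('woff2');
        }
        body { font-family: Georgia, serif; }
        /* Only use the web font once we know it has loaded. */
        .fonts-loaded body { font-family: 'My Web Font', Georgia, serif; }
    </style>
    <?php if (!$fonts_cached) : ?>
        <script src="/js/fontfaceobserver.js"></script>
        <script>
            new FontFaceObserver('My Web Font').load().then(function () {
                document.documentElement.className += ' fonts-loaded';
                document.cookie = 'fonts_loaded=1; path=/; max-age=604800';
            });
        </script>
    <?php endif; ?>
</head>
<body>
    <p>Readable immediately, in the fallback font if need be.</p>
</body>
</html>
```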
For locally hosted fonts over HTTP, version one, you'll still need to use Font Events and cookies to control the loading of your web fonts, and ensure that you don't block rendering of your site. If you're using HTTP, version two, with server push, you could still benefit by considering the file size of your font files. Even taking full advantage of server push, if your font files are larger than the HTML, then you may end up blocking the render of your site. In which case, your site can still benefit from Font Events. The performance gains from HTTP, version two, present us with a new opportunity to speed up our websites for the growing number of visitors taking advantage of the new protocol.
HTTP, version two, presents us with a new opportunity, to again, take advantage of a new technology, to excuse, yet more bad behaviour.
It probably seems disconnected, but I'm sure you've all fought against the issue of the iPad and the iPad Mini, being the same size in code, despite the fact that they're demonstrably different sizes. It's called a Mini.
The iPad Mini is an example of where we took a new technology, and we broke it.
It may not have been this code exactly, but when the iPad was first released, many of us, and I'm putting up my hand, me included, wrote something like this, to change our code from a phone layout, whatever that is, to a tablet layout, whatever that is, at exactly 768 pixels. Which left Apple no choice, when they released the iPad Mini, but to stand in front of the web developers of the world and tell us that it was the same size, but different.
Web professionals, the world over, learnt their lesson, but only after the horse had bolted.
We used web fonts to do things they weren't intended to do, and broke the web.
HTTP, version two, offers us two opportunities to break the web.
(laughter) One, we can decide performance is no longer an issue for any of our visitors, or two, we can just ignore the 40% of our visitors, that are going to be stuck, using the old protocol, once we upgrade our servers.
Which brings us back to our headline from earlier. To keep our sites performing, we will always need to consider the impact of every byte on the page. Over the next two or three years, not only will we have to think about how bytes on the page affect visitors on a fast or a slow connection, we'll also have to think about how they affect visitors on an HTTP, version one, and a version two, connection. At times that will be really annoying, at times it will be frustrating, but, I don't know about you, that is exactly why I love my job and working in this industry.
It's always interesting.
Thank you very much.
(audience applauding) - Fantastic stuff.
Thank you so much, Peter.
I'm just having trouble, I'll tell you what, we only have time for one question, while we set up with Michael, and there's a lot in there to unpack, I know. So, come and buttonhole Peter later.
In Scroll, you'll find an article based on a lot of the thoughts that Peter had.
Hopefully there's some value in that.
As I've mentioned a number of times, we'll have this all videoed, so there's opportunity afterwards to follow up on a lot that was in there. Anyone going to turn on HTTP/2 now? Still holding out? During our conference up in Sydney, someone actually turned on HTTP/2, during Peter's presentation, on a really quite significant website. I'm not sure if I can mention who it is, but it's a relatively major media organisation. We asked him, "How's it going?" He said, "Well, it hasn't broken yet." So, it's also got a lot easier.
We have time for one question, perhaps.
I left my glasses down there, so I basically can't see anyone.
Has someone got a question before we start Michael, or should we maybe we'll roll on.
Alright, we got one over here.
This is more like an observation really.
(laughs) - [Man] I just want to ask, how does HTTP/2 handle image loading? Is it much in the same way, where you could request that much beforehand, or? - The load runs over the same connection, but if you wanted to, you could include that in the preload headers, and get the server to push that, if you've got server push enabled on your server. It's basically the same way as any other asset, it's considered the same thing.