HTTP2: The reckoning

(upbeat electronic music) - For most of us in this room, the year 1993 is incredibly important.

In some ways, it could be the most important year of our working lives.

Even though for many of us, most of us anyway, it happened some time before, or just after, we started this job. An advertisement appeared, advising people to telnet to a particular IP address. And once they did, the first ever webpage would load in the first ever web browser.

And given that we're in Australia, and the internet in 1993 was, well, a little bit 1993, I imagine that for the locals who accessed it at the time, it would have taken a little longer than that cute little animation there might suggest, maybe six or seven seconds, but that's pure speculation.

So pretty basic, but it's the web.

It has all the core requirements: right at the top, a heading, some HTML, and throughout the body text, links.

The very thing that makes the web, well, webby. If we jump forward 22 years to the week, we're all well aware of the limitations of the original HTTP spec.

So eight years after the project began, the HTTP2 RFC was finalised.

There'd been practical implementations of the spec for a while, so the standard was pretty rapidly adopted. In fact, by the time the HTTP Archive started measuring HTTP2 usage in 2016, a mere 12 months later, an overwhelming 60% of the sites that they were measuring were already using it. And we all got a little bit excited.

The sun was shining, the horses were running in the field, the hens were laying.

For web developers everywhere, the web was to become a happy place full of highly performing websites.

And immediately, all web performance issues were solved. After years of getting slower, the web finally started getting faster.

And now, all our web pages are visually complete in under six seconds, before we even know we want them. And we live in a utopia.

Obviously that didn't happen.

Webpages continued to load slower and slower. And the reason I called this talk HTTP2 Redux is that I wanted to evoke the idea of the bad movie sequel.

The particular movie sequel I was thinking of was "Speed 2", which is set on a cruise ship.

And from all reports, is a terrible, terrible, terrible film.

And at the end of the film, the producers spent 25 million dollars crashing a cruise ship into an island.

And I kind of think we've done the same as web developers. We've taken something very expensive to develop, that took a lot of people a long time to build, and we've crashed it into an island.

So let's take a look at the un-mucked around, un-jokey version of time to visually complete over the last few years.

I'm reporting mainly in averages throughout this talk because that's what the HTTP archive reports. And I guess if you squint, it kind of improved around the time of the release of HTTP2, but really any advantage has been lost.

And to be honest, I reckon it has a little bit more to do with the archive changing their testing device from an iPhone 4 to an emulated Android Chrome browser at around about the same time that HTTP2 was introduced. All the stats I'm showing today come from the HTTP Archive's mobile report.

In part, I'm using those reports because they contain a little bit more data, and in part because I think mobile, rather than desktop, has taken over as the way people browse the web today.

And I also think it could be a little bit of a reflection of where I work: people read websites on mobile.

Because I work in a squad in the product technology team for "The Age", "The Sydney Morning Herald", and the "Financial Review".

And back around the time the HTTP2 spec was finalised, a very small team at what was then Fairfax Media, five or six people at first, all in an office together, set out with a lofty goal: to rebuild the entire stack, from the CMS right through to the various presentations, the ones you can see behind me.

And I wasn't an employee back then, I only very recently became an employee.

But I set out on the ride with them, working on the CMS in a client-services relationship at my previous employer.

And around 10 months after I started working on the CMS, the "Brisbane Times" was relaunched.

And from the CMS right through to grand overlays, a bunch of proprietary code was replaced with systems built on open source. And rather than slowly upgrading the website over a few months like most relaunches do, the team launched a new design, upgraded to SSL by default, and enabled HTTP2, all at 8:08 p.m. on August the 27th, 2017.

And back then, HTTP2 still had a shine.

A lot of people were very excited, idiots.

I got very excited about it, telling rooms full of people, groups about this size, that utopia was coming.

And to be honest, I don't necessarily think that promise was wrong.

I think that we as web developers took advantage of a new protocol and the natural improvements in the way the web worked, and we spent it. However, there was some good news: I do have a graph that shows an ever-so-slight improvement over the last few years. Sorry, we just have to wait for it to finish loading. (audience laughs) I was slightly nervous I was gonna regret that.

But I guess I'll just have to describe it, we can't wait around all day.

And it's not so much an improvement as at least staying steady, and that's time to first contentful paint.

That can be the moment that something actually renders on the screen: a canvas, as long as it's not plain white, so a grey block and some lines to indicate that an avatar and some text might be coming. And then, like a lot of people, you can decide to game it, which really might be about as useful as displaying a single poop emoji.
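
If you want to watch that metric on your own pages, a minimal sketch like this, using the browser's PerformanceObserver, logs the paint timings, including first contentful paint, as the browser records them:

    // Logs the browser's own paint timings, including first-contentful-paint.
    // "buffered: true" also picks up entries recorded before this code ran.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(entry.name, `${Math.round(entry.startTime)}ms`);
      }
    }).observe({ type: "paint", buffered: true });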

In 2014, it was moving up to the six or seven second mark and there was a slight downward trend for a few years, and I think honestly, that's when people started to game it. A completely symbolic improvement, but at least an improvement.

And really, even that's been lost in the last few years, and it started going upwards again.

It's kind of like people don't care about performance. And if you're gaming it, users don't get to see anything that's actually useful. So, as Patrick mentioned (and I'm just going over some of these things quickly, because a few people have changed rooms), some more meaningful stats have become available.

At the top is first meaningful paint, still pretty good; slower, but not by much.

That's usually text on the screen.

And then slower still is the time to interactive. You know, that moment you can actually move the page, or click the links and get a response from your actions, by which I mean, do things on the web.

Let's take a look at that.

Do we have an improvement? We may have some good news.

I actually think that that rapid drop there is because something clearly happened in July and August last year.

And what happened was that, starting from July of last year, the number of sites measured in the HTTP Archive increased tenfold over the course of about six months. They currently measure, according to their site, around five million sites.

And this graph says to me a few things, the most obvious being that the more data you have, the more accurate your average is.

But it also sort of shows where the median can be less useful than the mean.
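
To make that concrete, here's a tiny made-up example, not real data, just to show how far apart the two numbers can sit when a few very slow loads are in the mix:

    // Made-up load times in seconds, with one very slow outlier in the mix.
    const loadTimes = [2.1, 2.3, 2.4, 2.6, 2.8, 3.0, 14.5];

    const mean = loadTimes.reduce((sum, t) => sum + t, 0) / loadTimes.length;

    const sorted = [...loadTimes].sort((a, b) => a - b);
    const median = sorted[Math.floor(sorted.length / 2)];

    console.log(`mean:   ${mean.toFixed(1)}s`);   // ~4.2s: "our pages are slow"
    console.log(`median: ${median.toFixed(1)}s`); // 2.6s: "our pages are fine"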

And really, depending on whether I quote the mean or the median, I could be telling you whatever story I wanted. So the most meaningful stats come from measuring your own site, which is what I've been doing, because I have an ego site, because I have an ego. And I've been gathering performance stats on it throughout the year.

The bottom line is visually complete and the top line is fully loaded.

And I don't run the full stats on it because I don't really need to worry about it too much. If I blogged a bit more and got more than the dozen or so visitors per day it gets now, I might actually care.

But even without paying full attention to it, it's useful. It's fairly clear, for example, where I messed up the settings on my CDN and stopped serving gzipped content for a month or so. There are services to help you measure it, but I'm just using a spreadsheet, just recording the basic facts.

And I'm not even recording the full set of metrics, probably ought to set up a few more.
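
If I did want a few more, a minimal sketch along these lines, run in the page after the load event, would give me a row to paste straight into that spreadsheet. The columns here are just my guess at a useful minimum, not any official set:

    // Pull a few navigation-timing numbers and print them as one
    // comma-separated row for a spreadsheet. Run it after the load
    // event so loadEventEnd has actually been recorded.
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];

    const row = [
      new Date().toISOString(),
      Math.round(nav.responseStart),            // time to first byte
      Math.round(nav.domContentLoadedEventEnd), // DOM content loaded
      Math.round(nav.loadEventEnd),             // roughly "fully loaded"
    ].join(",");

    console.log(row);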

I'm certainly not going to run Lighthouse tests on it, because I'm using WebPageTest and I've discovered that it's quite possible to tie up an instance for a long time if you're doing multiple runs.

So I just copy the numbers into the spreadsheet, much to the despair of the developer relations teams.

My point here is you've got to measure the stats on the sites you're building.

If I had to guess, looking around the room, I'm not the only person here today working on large sites. You know, we go into a fancy city building, and we work on a large corporate site.

The business department comes in and discusses the business need, and you end up putting some JavaScript online. And I know, I work in media, and it's an industry especially guilty of this. But it would be damn foolish of me to confess that I run Ghostery on our sites.

And it would be especially foolish to run a comparison of the site with and without trackers.

Anyway, the business department comes in, discusses the business need, and you add some JavaScript to the page.

And some folks recently did some analysis of the effect of blindly adding code to the page. With regard to JavaScript, they compared the amount of JavaScript on the page to the effect on its performance.

Over a mobile connection, each 100 KB of JavaScript adds a few seconds until the page is interactive. Most recently, the median amount of JavaScript on a mobile page was a little under 400 KB, or four lots of a few seconds; let's say around about the 12-second mark.
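
It's nothing more sophisticated than multiplication, but writing it down makes the point. The three-seconds-per-100-KB figure here is just my rough reading of the numbers above, not any official model:

    // Back-of-the-envelope only: assumes roughly three extra seconds of
    // time-to-interactive per 100 KB of JavaScript on a mid-range phone.
    const SECONDS_PER_100KB = 3;

    function estimatedInteractiveDelay(jsKilobytes: number): number {
      return (jsKilobytes / 100) * SECONDS_PER_100KB;
    }

    console.log(estimatedInteractiveDelay(400)); // ~12 seconds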

And I will confess, I don't think that the state of the web is solely on web developers' shoulders.

And it should be said, when I'm saying web developers, I'm firmly including myself because I'm just as guilty of some of these things as others.

There are, however, some ancient network based issues that come into play.

In many ways, the next improvements for HTTP can't come quick enough.

Talking about HTTP requires talking about network protocols, because it is one. And I've said for a long time that even web developers need to understand the basics of network protocols. So I'm going to give a very high-level overview, sort of an executive summary, unravelling HTTP3. With HTTP2, when you make a connection to a web server, a single multiplexed connection is made over TCP. And that means that if there are connection problems between your phone and the server, the entire connection explodes.

It can be caused by any number of things.

The connection completely fails when the NBN goes down again and your phone switches to 4G, gaining a new IP address in the process.

Or it could just be a quick hiccup as you change from one tower to another while travelling to work on the train, in which case your IP address may not even change. HTTP3 will change that.

Instead of using TCP, HTTP3 runs over UDP.

Unlike TCP, UDP kind of spits out a bunch of requests without waiting for a response.

And those requests aren't necessarily order dependent, and the server can respond to each of them independently. They don't need to travel over the same route. The server can then spit out its responses, and again, using UDP, the server doesn't wait for an acknowledgement; it just hopes that they get there.
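
To illustrate that fire-and-forget behaviour, here's a minimal sketch using plain UDP from Node's dgram module. This is just UDP, not QUIC or HTTP3:

    // send() hands the datagram to the operating system and returns.
    // Nothing here waits for, or will ever hear about, a reply; a lost
    // packet produces no error at all.
    import { createSocket } from "node:dgram";

    const socket = createSocket("udp4");

    // 192.0.2.1 is a documentation-only address; the port is arbitrary.
    socket.send(Buffer.from("hello"), 9999, "192.0.2.1", (err) => {
      // err only covers local problems (bad arguments and the like),
      // not whether the datagram ever arrived.
      socket.close();
    });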

It's kind of like the difference between sending a letter by regular post and by registered post. But there is a problem with that.

The first part of connecting to a secure website over HTTPS is called a handshake. And we all know that offering a handshake and not getting a response is just kind of awkward.

And so it is with a network handshake.

HTTP3 takes these UDP connections and adds a protocol layer to turn them into the equivalent of a TCP connection.

The server and the browser get acknowledgements, while being able to take advantage of UDP.

It does this through a very complex system known as network engineer magic.
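
If you want a feel for the very smallest piece of that magic, here's a toy sketch of the idea of acknowledgements layered over plain UDP, again using Node's dgram module. Real QUIC adds packet numbers, loss detection, congestion control and TLS, so treat this purely as an illustration of the concept:

    // Toy sketch only: keep resending a datagram until the other side says
    // "got it", because UDP itself offers no delivery guarantee.
    import { createSocket } from "node:dgram";

    const PORT = 41234;

    // "Server": acknowledge whatever arrives.
    const server = createSocket("udp4");
    server.on("message", (_msg, rinfo) => {
      server.send(Buffer.from("ACK"), rinfo.port, rinfo.address);
    });
    server.bind(PORT);

    // "Client": retransmit every 200 ms until the acknowledgement comes back.
    const client = createSocket("udp4");
    const timer = setInterval(() => {
      client.send(Buffer.from("hello"), PORT, "127.0.0.1");
    }, 200);

    client.on("message", (msg) => {
      if (msg.toString() === "ACK") {
        clearInterval(timer);
        client.close();
        server.close();
      }
    });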

The advantage that this offers is that when a single request fails, the overall connection to the web server will remain up. In a way, you get back that one advantage that HTTP1 had over HTTP2: multiple connections. And HTTP3 is all new and shiny, and there are a few practical implementations going around. But eventually, the shine will fade.

And like any technology, once it comes time to use it, you'll discover the limitations. Jake Archibald is one of several people who wrote about the limitations and the problems with HTTP2.

And when these things happen, it's really tempting to give up and forget about it, and spend the advantages that technology has to offer us. But we're all in this room today because back in the 90s, a bunch of people saw the promise the HTTP spec offered us.

As HTTP3 gains traction and its advantages become apparent, I'm asking you again to look for the promise and not the problems. And the best way to do that is: just screw it, do it live.

Now I'm not suggesting that we do this on our work sites, 'cause it's just not a sensible thing to do. But many of us have ego sites and they're perfect for this purpose.

This is mine, a pretty basic WordPress install, using the default theme.

Running on, frankly, a stupidly over-engineered hosting platform.

And I told a little white lie about it before. I said I accidentally turned off gzip.

No, I was just curious about the effect on load time, and I wanted to test the effect of the unzipping overhead in the browser.

Stupid experiment, pointless.

But that's what personal sites are for, testing out dumb shit, just because you can. "Can I use" doesn't yet report HTTP3 availability, but if it did, it would look a little bit something like this.

Okay seemingly, a pretty resounding no, to the question of can I use HTTP3.

But the important fact is that it's in development. That's the most important bit.

In Chrome Canary, it's behind a command-line switch, and Mozilla aim to have it in Firefox Nightly fairly soon.

For the server side, Cloudflare are already making it available for users to test on their sites via the platform. They've also made their nginx patch available for people who wish to test it out on their own servers. Again, personal sites, not work sites.
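
If you do enable it somewhere, you can check which protocol your responses are actually arriving over by reading the browser's resource timing entries, as in the sketch below. Note that the exact token reported for HTTP3 has varied between draft versions, so treat anything starting with "h3" as a yes:

    // Report which protocol each response on the current page actually used.
    // Chrome reports "h2" for HTTP2; for HTTP3 expect an "h3"-flavoured token.
    const resources = performance.getEntriesByType(
      "resource"
    ) as PerformanceResourceTiming[];

    for (const entry of resources) {
      console.log(entry.nextHopProtocol || "unknown", entry.name);
    }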

And because I like to experiment with mine, I've enabled it for the domains I can.

However, my main WordPress site has WordPress login pages and multiple origins, so I haven't been able to test it out there. Mainly because I'm a little bit tight-fisted and don't want to spend request-a-quote dollars on my half-dozen visitors per day.

But if you're running a static site, and there's genuinely an amazing number of static site generators available these days, then there's no excuse not to give it a try.

Shortly after CERN first advertised the world wide web, the web became a place of experimentation and excitement. It led to some bad things, like browser wars, but it also led to some great things, too, like employing most of the people in this room.

And early on, these experiments happened using Windows 3.1, until the exceptionally stylish Windows 95 came along. Importantly though, it all happened over dial-up, so we had to care about every byte on the page. I remember spending time reducing the size of images that started off under half a kilobyte. And I think it's a forgotten art that we need to bring back.

In a recent article, Scott Jehl of the Filament Group speculated that the end result of the introduction of 5G would be slower websites. And he seems slightly frustrated, so I've given it a new headline.

His thesis is, that we'll take the improvement in technology and we'll spend it.

And with a median page weight at the moment of 1.7 megabytes, it's really, frankly, something that we've done with 56k dial-up, ADSL, 2G, 3G, HTTP2, and 4G. So why would anything change? He's right, it will happen with 5G, and I'll add that it'll happen with HTTP3.

We'll take that cruise ship and we'll crash it into an island.

And it will be a reality, with actual consequences, unlike what happened when they did it in that great movie sequel.

Thank you very much.

(applause) (upbeat electronic music)