The FT app, along with many other HTML5 websites, blurs the boundaries between apps and websites, and opens a debate that still rages about what it means to build with web technologies today. As a 125-year-old newspaper, the Financial Times needs to invest in technologies that will stand the test of time, and create user experiences as good as reading a physical paper.
So our aim is not just to create an efficient and powerful delivery channel for our own content, but also to protect and nurture the development and maturity of the web platform. Right now creating high-quality user experiences in HTML5 is very hard, and to get to where we are today we needed a huge bundle of hacks and extreme techniques, many of which I’ll cover in the session. In the future we hope these rough edges will be smoothed out, in the same way that print production has evolved over a century into the incredible logistical miracle it is today. Hopefully the web won’t take that long.
In animation and effects there is a concept of the Uncanny Valley, where something is so close to realistic that it becomes bizarrely horrifying. This is a useful concept to apply to mobile applications that try to emulate native interactions.
Financial Times (FT) have been releasing content for a long time – newsprint was a highly effective mobile reading device!
Moving into the future we need to lose the constraints. The iPad is a technology solution that tries to emulate many other things within its own constraints.
FT use detection methods like identifying the presence of a mouse before adding hover-specific navigation aids.
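A hedged sketch of that kind of detection (the class name is illustrative, and a production version would also need to ignore the emulated mouse events that touch browsers fire):

```js
// Only enable hover-dependent navigation once a mouse appears to be present.
window.addEventListener('mousemove', function onFirstMove() {
  document.documentElement.className += ' has-mouse'; // CSS keys hover menus off this class
  window.removeEventListener('mousemove', onFirstMove);
});
```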
FT have also discovered that people really like “native apps”, even when FT are just putting a thin wrapper around a web page. But in the end, because the app launched from an icon and the experience was polished, users thought it was native.
So, in a sense, “native” means “works really well”.
Three numbers to keep in mind:
- 16ms – this is the per-frame budget you must hit (60 frames per second). Take longer than 16ms and you get juddery animations.
- 100ms – this is the time the human brain requires to notice time passing. Anything which happens faster than 100ms will be perceived as “instant”
- Matching expectations – match what the user expects to happen.
These numbers are like a budget you spend.
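As a rough illustration of the 16ms budget, a minimal sketch of measuring frame times with requestAnimationFrame (the threshold and the logging are illustrative, not FT’s code):

```js
// Watch how long each frame takes. Frames that regularly exceed ~16ms
// mean we are missing 60fps and the user will see judder.
var last = null;
function tick(now) {
  if (last !== null) {
    var frameTime = now - last;
    if (frameTime > 16) {
      console.warn('Slow frame: ' + frameTime.toFixed(1) + 'ms');
    }
  }
  last = now;
  window.requestAnimationFrame(tick);
}
window.requestAnimationFrame(tick);
```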
Network. Whenever you hit the network there’s latency of 100ms or more, which means the user perceives a delay. So do all the known things – async everything, load scripts after onload, reduce HTTP requests (sprites and concatenation) – see the sketch below.
[Sidenote – “I’m assuming everyone is doing that already” he said… in my experience people still aren’t. They often use tools (blog tools, CMS, etc) which don’t do it for them; but don’t give them access to do it themselves.]
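A minimal sketch of the “load scripts after onload” point (the script URL is a placeholder, not an FT asset):

```js
// Defer a non-critical script until after the load event, so it doesn't
// compete with the content the user is actually waiting for.
window.addEventListener('load', function () {
  var script = document.createElement('script');
  script.async = true;
  script.src = '/js/non-critical.js'; // placeholder URL
  document.body.appendChild(script);
});
```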
Beyond this, there are other sources of latency – the mobile radio has different power states, so requests sent at longer intervals hit the low-power states, and the device has to warm back up each time. This not only makes things slow, it drains the battery.
Solutions?
- request batching – collect requests asynchronously; use callbacks per action and per group; process queues in the background (see the sketch after this list)
- in theory HTTP 2.0/SPDY will make this unnecessary, but it’s not usable yet and doesn’t handle CORS
- there’s also the W3C Beacon API
- there will also always be a use case for delayed requests
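A hedged sketch of the request-batching idea – queue actions, flush them as one request on a timer, and fire a callback per action when the batch response comes back. The endpoint, payload shape and interval are assumptions, not FT’s actual API:

```js
// Hypothetical batcher: collect actions and send them in one request.
var queue = [];

function enqueue(action, callback) {
  queue.push({ action: action, callback: callback });
}

function flush() {
  if (!queue.length) return;
  var batch = queue.splice(0, queue.length);
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/batch'); // assumed endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    // Assume the server returns one result per queued action, in order.
    var results = JSON.parse(xhr.responseText);
    batch.forEach(function (item, i) {
      if (item.callback) item.callback(results[i]);
    });
  };
  xhr.send(JSON.stringify(batch.map(function (item) { return item.action; })));
}

// Process the queue in the background rather than per interaction.
setInterval(flush, 5000);
```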
Images
- typically 70-95% of web page data
- use an accessible responsive images technique
- when creating high- and low-resolution versions, try serving the high-resolution image with heavy compression – for JPEGs at least, this may give a better file size even for the smaller resolution than compressing the smaller image separately
Third party scripts (statistics, advertising)
- you can really lose a lot of good work here – you can’t stop them bringing more stuff down, polling with timers, etc
- Two things to do: test your site before and after adding the third party scripts; and ask questions before you add a third party script
Andrew is creating a tool, “3rd Party SLA”, that evaluates third-party scripts and gives an easily readable report on the impact of a script.
Prefetching
- Each request has four phases, and we can prefetch at each phase for the next request
- There are markup hints (link rel) to help with this – dns-prefetch, subresource, prefetch and prerender (Chrome only). These can create the appearance of instantaneous performance (see the sketch below)
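A hedged sketch of adding those hints from JavaScript (hostnames and URLs are placeholders); they can equally be written straight into the page markup:

```js
// Add a resource hint for something the user is likely to need next.
function addHint(rel, href) {
  var link = document.createElement('link');
  link.rel = rel;
  link.href = href;
  document.head.appendChild(link);
}

addHint('dns-prefetch', '//images.example.com');  // resolve DNS early
addHint('prefetch', '/next-article.html');        // fetch the likely next page
addHint('prerender', '/next-article.html');       // Chrome only: render it too
```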
Rendering
Chrome has the best tools for measuring rendering:
- Timeline
- This showed FT that old flexbox was really slow to render (100ms+ to lay out flexbox)
- New flexbox however is really performant
- Hover effects during scrolling are really expensive and create janky scrolling – so turn them off while the user scrolls. Use JS to apply a body class and namespace your hover effects (see the sketch after this list).
- Framerates (activate in chrome://flags)
- This identified an expensive combination of border-radius and box-shadow on a single element. This was within the layout boundary, which is the area the browser has to re-lay out when you change something.
- You can create a new layout boundary and avoid the expensive relayout
- See the article by Wilson Page for more information about layout boundaries
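A minimal sketch of the “disable hover effects while scrolling” technique – the class name and timeout are illustrative:

```js
// While the user is scrolling, add a class that the CSS uses to switch off
// hover effects (e.g. namespace them as body:not(.is-scrolling) a:hover).
var scrollTimer = null;

window.addEventListener('scroll', function () {
  document.body.classList.add('is-scrolling');
  clearTimeout(scrollTimer);
  scrollTimer = setTimeout(function () {
    document.body.classList.remove('is-scrolling'); // re-enable hover shortly after scrolling stops
  }, 200);
});
```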
Layout “thrashing” – when we force the browser to do more layout work than it needs to.
- The browser will batch writes for you; but if you read something (e.g. a height) you force it to flush those pending writes and perform a synchronous layout.
- So don’t interleave reading and writing actions – do all the reads first, then all the writes after. This greatly reduces the number of forced layouts (see the sketch after this list).
- What about an asynchronous DOM? Wilson’s FastDOM library gives huge gains (15ms down to 2ms in the demo)
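A hedged sketch of the read/write separation (the element class and the sizing rule are illustrative); FastDOM wraps the same idea behind an asynchronous API:

```js
var items = document.querySelectorAll('.item');
var heights = [];
var i;

// Bad pattern (not shown): reading offsetHeight and then writing style inside
// the same loop forces a synchronous layout on every iteration.

// Phase 1: all the reads.
for (i = 0; i < items.length; i++) {
  heights.push(items[i].offsetHeight);
}

// Phase 2: all the writes. The browser can now batch them into one layout.
for (i = 0; i < items.length; i++) {
  items[i].style.height = (heights[i] * 2) + 'px';
}
```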
Images in the browser
- Image decoding is probably the most expensive thing you ask the browser to do.
- Don’t lazy-load images on demand on mobile – it kills the battery, and if you lose signal you don’t have the whole document: scroll down and the images are all just placeholders.
- There’s no browser API to load all images and not decode them? …there should be!
- FT work around this by downloading data URIs: the images are encoded on the server, loaded via XHR, then inserted into an <img> as a string to trigger decoding only when needed (see the sketch below). This can fool the browser in funny ways though – you can lose browser-side optimisations. Use with care.
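A hedged sketch of that data-URI approach – the server endpoint and the target element are placeholders, and the real FT implementation handles many more edge cases:

```js
// Fetch a server-encoded data URI as plain text; the image is only decoded
// when the string is finally assigned to an <img> src.
function fetchDataUri(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    callback(xhr.responseText); // "data:image/jpeg;base64,..."
  };
  xhr.send();
}

fetchDataUri('/images/chart.jpg.datauri', function (dataUri) {
  // The string can be kept around (e.g. for offline use) without paying the
  // decode cost; decoding happens only when it becomes an image source.
  document.querySelector('#chart').src = dataUri;
});
```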
Hardware accelerated transforms
- you need the GPU if you’re going to animate a move, scale, filter, rotate
- GPU-driven transforms don’t need repainting to move things around
- You can force WebKit to put things on the GPU with the translateZ(0) hack (see the sketch after this list)
- (measure with Timeline → Frames in chrome devtools)
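A minimal sketch of animating with a transform rather than left/top, using the translateZ(0) hack mentioned above (the element and distances are illustrative):

```js
// Animate via transform so the GPU moves the layer without repainting.
var card = document.querySelector('.card'); // illustrative element
var x = 0;

function step() {
  x += 4;
  // translateZ(0) is the hack: it nudges WebKit into giving the element
  // its own compositing layer.
  card.style.webkitTransform = 'translateX(' + x + 'px) translateZ(0)';
  if (x < 300) window.requestAnimationFrame(step);
}

window.requestAnimationFrame(step);
```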
Storing data
- Native apps are good at storing data, because they have access to the file system. On the web we have a large range of not-very-good options – a dysfunctional family of options.
- [Brilliant analogy, see the recording for this one :)]
- So which do you use? Use the best storage backend for the kind of data you want to store. But whichever you pick tends to be slow.
- There is a new standard coming – currently named ServiceWorker, though it’s had a lot of names – and it’s highly promising. It’s like a server in the browser; it takes a while to understand but gives a great deal of flexibility (a minimal sketch follows this list).
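A minimal cache-first sketch of the ServiceWorker idea (file names are placeholders, and the API is still settling, so support must be feature-detected):

```js
// main.js – register the worker only where the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js – the "server in the browser": answer fetches from cache first,
// falling back to the network.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});
```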
Storage optimisation
- There are slim pickings for HTML5; native apps have a lot of advantages here.
- FT did some interesting tricks with encoding – packing ASCII into UTF-16. The JS to encode and decode is very small and very quick. This means you can store a lot of data in offline storage in a very size-efficient manner (see the sketch below).
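A hedged sketch of the packing idea (not FT’s actual implementation): localStorage stores strings as UTF-16, two bytes per code unit, so two ASCII characters can share one code unit.

```js
// Pack two ASCII characters (one byte each) into each UTF-16 code unit.
// Assumes the input is ASCII-only and contains no NUL characters.
function pack(ascii) {
  var out = '';
  for (var i = 0; i < ascii.length; i += 2) {
    out += String.fromCharCode(
      (ascii.charCodeAt(i) << 8) | (ascii.charCodeAt(i + 1) || 0)
    );
  }
  return out;
}

function unpack(packed) {
  var out = '';
  for (var i = 0; i < packed.length; i++) {
    var code = packed.charCodeAt(i);
    out += String.fromCharCode(code >> 8);
    if (code & 0xff) out += String.fromCharCode(code & 0xff);
  }
  return out;
}

// Roughly halves the storage footprint of ASCII-only payloads.
var json = JSON.stringify({ headline: 'Markets rally', body: '...' });
localStorage.setItem('article:1', pack(json));
console.log(unpack(localStorage.getItem('article:1')) === json); // true
```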
Click delay
- More click, less delay.
- New paradigms like double-tap to zoom required a delay to see whether they were happening. If you’re just clicking, a 300ms delay is annoying. So you can use something like FastClick – noting that very little of the library is about removing the delay; the rest is adding back all the edge cases (see the usage sketch after this list).
- The 300ms delay, remember, is three times our “instant budget”.
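FastClick’s usage is essentially a single attach call, as in its documentation; a minimal sketch:

```js
// Attach FastClick once the page has loaded; taps then dispatch click
// events immediately instead of waiting out the 300ms double-tap window.
window.addEventListener('load', function () {
  FastClick.attach(document.body);
}, false);
```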
Perception
- Once you are done with things that make the app actually faster, you can start making it seem faster. You can trick the user’s mind into the perception of speed where none really exists.
- FT use different styles of progress bar depending on how well the request/download is actually happening. Different animations give the user a greater sensation of what’s going on; when things aren’t fast it looks more thrashy, when it has failed there’s a retry button.
@triblondon, @ftlabs
Q&A
Q: has there been much actual research about real vs perceived performance?
A: they have done some research using classic A/B testing; the only way to test perception is with real users.
Q: to get that performance do they hand-code js, or use frameworks?
A: rather than big frameworks, they use smaller, more targeted tools. Website loads jQuery but only because the advertisers require it.
Q: how do you handle the OS specific paradigms in the UI?
A: it’s really important, the expectations of users on different platforms do vary. They mostly deal with it by being intentionally different. This is a great way to avoid the uncanny valley. Rather than emulate the OS badly, do a different design very well. Sometimes, eg with scroll physics, they are very different so you choose a middle ground.
Q: does the fastclick library handle onclick events and so on for analytics?
A: Think so, but don’t know for sure. Unless it’s detecting binding in an unusual way, it should work.
Q: how big is your team, and how do you get them to keep all this in mind?
A: They catch it in QA. Would love it if everything was performant from the start but it’s not always possible; and performance is not always logical or obvious (e.g. the flexbox problem). You write code that makes sense; then you test it; then you write fast code…