Server and client execution are fundamentally different; each has distinct strengths and weaknesses. Building fast applications means designing systems that combine the strengths of both of these environments. We’ll look at how Facebook is doing this with abstractions like GraphQL and Relay, the specific problems that each library solves, and how we can apply the same design principles to other performance problems.
Josh works on JS performance at Facebook. It’s a big challenge with such a big application and so many moving parts – keeping it fast without limiting what engineers can do.
Talking about more than HTML5, JS and the browser… looking at the whole web architecture.
Our phones are basically supercomputers – more powerful than Deep Blue, the machine that beat chess champion Garry Kasparov. So why not do everything on the device? Because being connected is more important than raw power. Without a connection our phones feel dead.
Why has the web worked so well for so long?
I found it frustrating that in those days, there was different information on different computers, but you had to log on to different computers to get at it. Also, sometimes you had to learn a different program on each computer. So finding out how things worked was really difficult. Often it was just easier to go and ask people when they were having coffee.
– Tim Berners-Lee (Answers For Young People)
The web is more than the browser. Without the server we’d be doing some fairly heavy and inefficient things to get information. Servers and URIs let us look up just the little bit we need at the time.
We moved on from static file serving, to on-server databases and full applications. The server had to learn to respond to user interaction. The tools people had at the time (like Visual Basic) just didn’t work for the web.
We re-learned how to design applications, so we could design apps for servers. The web became huge on the back of the server-side rendering model.
Now we’re in a new transformation, from server-side to client-side rendering. This avoids latency for certain interactions.
But by moving all the logic to the client we’ve created a new problem: load time. Plus we still have all the old problems like janky experiences.
We’re still working out how to make all this stuff fast.
Native apps have set some terrible examples though – apps download tens of megs of code and we can’t do that on the web as people won’t put up with the initial load. Plus if you cache the code, you have cache invalidation problems and generally blow up the value of caching in the first place. Should you prefer a warm cache or regular updates?
So how can we fix startup time?
The client-side JS world has broken the problem the web solved – you no longer download just the bit you need, you have to download everything first.
Facebook looked at the strengths of the client and server. The server’s great at controlling the cache; the client’s great at handling interaction. Plus we can have offline services in the browser. But the client will never be good at data fetching, code loading and SEO.
There’s not a single way to fix everything.
We need to handle download, parse, compile and execution time. Lots of performance issues are focused on download.
Don’t just make things fast – try not to do them at all. The fastest resource is the one you don’t have to download.
We need to introduce boundaries into the application where it makes sense; and this is where routing libraries come in. Facebook wrote their own (matchRoute). This is combined with a build system that creates bundles for each route. All up this means whole chunks of the app will only load on demand when the user needs them.
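To make the idea concrete, here’s a minimal sketch of route-based code splitting. matchRoute itself is Facebook-internal, so this isn’t their implementation – in a real bundler setup each loader would be `() => import('./pages/feed.js')` and the build tool would emit a separate chunk per route; the loaders here are stubbed so the example is self-contained.

```javascript
// Hypothetical route table: each route maps to a loader that resolves
// the code for that page. With real dynamic import() the bundler splits
// these into separate files, downloaded only when first needed.
const routes = {
  '/feed': async () => ({ render: () => '<Feed />' }),
  '/profile': async () => ({ render: () => '<Profile />' }),
};

// Only the chunk for the matched route is ever loaded.
async function navigate(path) {
  const load = routes[path];
  if (!load) throw new Error(`No route for ${path}`);
  const page = await load(); // in real code: the chunk downloads here
  return page.render();
}

navigate('/feed').then((html) => console.log(html)); // logs "<Feed />"
```

The boundary is the route: the user who never visits `/profile` never pays to download its code.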
GraphQL also comes into play here as it’s a more flexible way to query data. You can design the query on the client, then execute it on the server. This reduces the number of round trips for data.
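The round-trip saving comes from the client describing the whole tree of data it needs in one query. The toy resolver below (not the real GraphQL runtime – a deliberately simplified stand-in, with made-up data) shows the shape of the idea: one nested query, one request, instead of a REST call per object.

```javascript
// Example server-side data (hypothetical).
const db = {
  user: { id: 1, name: 'Ada', posts: [{ title: 'Hello', likes: 3 }] },
};

// Resolve a query shape against data: `true` selects a leaf field,
// a nested object selects fields of a child (arrays map element-wise).
function resolve(data, shape) {
  if (Array.isArray(data)) return data.map((item) => resolve(item, shape));
  const result = {};
  for (const [field, sub] of Object.entries(shape)) {
    result[field] = sub === true ? data[field] : resolve(data[field], sub);
  }
  return result;
}

// The client designs the query: the user's name plus post titles,
// and nothing else – resolved on the server in a single round trip.
const query = { user: { name: true, posts: { title: true } } };
const response = resolve(db, query);
// response = { user: { name: 'Ada', posts: [{ title: 'Hello' }] } }
```

With REST you’d typically fetch the user, then fetch their posts; here the server returns exactly the requested fields in one response.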
Facebook does use all these techniques!
Relay – a library that works with React and GraphQL to give people performance gains without massive amounts of work.