Perceived Performance

Introduction to Online Shopping and Performance

Luke Denton starts by engaging the audience with questions about online shopping experiences and the importance of performance. He introduces the use of Lighthouse for performance assessment and questions the true meaning of performance in web development.

Challenges of Measuring Performance

Denton discusses the challenges in accurately measuring site performance, highlighting scenarios where metrics may indicate good performance, but user experience feels slow. He emphasizes the distinction between actual performance and user perception.

Focus on Perceived Performance

The talk shifts to focus on perceived performance, exploring its subjective nature and how it influences user experience. Denton introduces himself and sets the stage for discussing strategies to enhance perceived performance on ecommerce websites.

Demonstration of Performance Enhancement Techniques

Denton announces a demo to showcase various performance enhancement techniques using a mock ecommerce website. He clarifies that the enhancements are made using Vanilla JS to demonstrate their applicability across different stacks.

Pattern 1: Addressing Cumulative Layout Shift

The first pattern discussed is addressing cumulative layout shift, also known as 'Layout Jank'. Denton demonstrates how to manage layout shifts during page transitions, improving user experience.

Pattern 2: Rethinking Loading Indicators

Denton introduces the second pattern, focusing on the strategic use of loading indicators. He explores how reducing or altering loading indicators can impact the perception of website speed and responsiveness.

Pattern 3: Utilizing Local Data Cache

The third pattern discussed is the use of local data cache to reduce loading times. Denton demonstrates how caching previously loaded data can enhance perceived performance, especially in repeat visits to a page.

Pattern 4: Implementing Prefetching Strategies

Denton talks about prefetching as a strategy to improve user experience by downloading resources before they are requested. He explores different approaches and considerations for effective prefetching.

Pattern 5: Optimizing Image Display

The fifth pattern involves optimizing image display by reusing images from previous views. This technique helps create an instantaneous feel by leveraging browser caching and minimizing resolution mismatch.

Pattern 6: Adopting Optimistic UI

Denton introduces the concept of 'Optimistic UI', which involves updating the UI in anticipation of successful server responses. He demonstrates how this approach can enhance user experience by reducing wait times.

Summary and Key Takeaways

In conclusion, Denton recaps the patterns discussed and emphasizes the importance of focusing on perceived performance. He encourages developers to consider user experience beyond just technical metrics for website optimization.

Thank you very much, Mark.

Let's start off with a quick show of hands.

How many people shop online?

Easy question.

Alright.

Keep your hand up, or put your hand up again, if you appreciate a performant online shopping experience.

Yep.

And finally, keep your hand up still if when you go to your online store, you pop open DevTools, jump to the Lighthouse tab, and run an audit to determine if you're going to have a great experience.

Okay, some people...

Okay, I thought that would be a good way to get everyone's hands back down again.

So Lighthouse is what we would typically use to check how performant our website is.

But what is performance?

Is it all green in Lighthouse?

How do you achieve all green?

Is it time on site?

Is it website conversions?

Performance is going to mean different things to different projects to different people.

What we do know though, is that performance can be measured.

We can set up our performance budgets.

We can run Lighthouse as part of our build pipeline.

We can optimize, code split, lazy load, minify our code.

We can track all of that through tools like Sentry or Google Search Console, so we can get a very good idea of when our site is performant.

Now, can we get into a situation where all of our metrics indicate that our site is performant, but when people actually use it, it feels slow?

we could have a statically built website, and being statically built would go a long way to acing that lighthouse test.

But once a user is actually using the website, a lot of those performance improvements are less relevant.

So they're really only for cold visits.

For example, in an SPA, whether a site feels fast or not is usually dependent on the network connection that the user has available to them and the load that's on the server.

So obviously, if the connection is slower, then API requests are going to take a bit of time to resolve.

Or if the server is under high stress, it's going to take a little bit longer to process those requests and get a response.

So today's talk isn't about performance, rather it is about perceived performance.

So what is perceived performance?

It is a feeling.

It is a subjective feeling.

It's having our users feel like the site is reacting instantly to their input.

Perception is reality, after all.

If it feels fast, then it is fast.

A quick little reintroduction.

My name is Luke Denton.

I am the Principal Frontend Software Engineer at Aligent.

This is my first big conference talk, so, as you can probably tell, the butterflies are going crazy.

And that's also why I didn't change this slide, because I just wanted to stick to what I'd practiced.

At Aligent, we build ecommerce websites for clients that have customers all over the world.

Performance at scale is something that we're really passionate about.

A lot of our sites regularly have tens of thousands of concurrent users, both reading and writing data, such as adding to cart.

So think of times like Cyber Monday or Click Frenzy, which really put a strain on the infrastructure and test its performance.

Today I'm going to be talking about some patterns that we've been exploring at Aligent on how to improve the perceived performance of websites regardless of what's maybe happening on the server.

Now, we've got a demo app that we're going to be running through today that we're going to be enhancing as we go.

I'm not going to try to do it live; it's my first presentation, so let's not test that out.

I have recorded everything.

It was all recorded locally on my machine, so that included the server.

I've rendered it with React, but all the enhancements are going to be done using Vanilla JS.

And that was because I don't want to rely on any frameworks.

I want to show that a lot of these perception improvements can be made with Vanilla JS in any stack that you have.

Now, because everything is local on my machine, I did add an intentional delay to the server; otherwise the requests on my local machine would be instant, and that would not really demonstrate these patterns very well.

So there is a delay that you'll see.

And, okay, we'll jump into the demo of the demo app, but first I really wanted to make this talk as interactive as possible.

So with that in mind, the demo app that we're actually running through today that I'm showing recordings of is actually available to everyone right now.

So you can follow along and not only see these patterns in use up on the screen, but feel them firsthand.

So for people watching the recording back, you'll use perceivedperformance.lukedenton.dev, but for everyone here watching live, just use wdc.lukedenton.dev; fewer characters to type in.

All right.

So with that, let's jump into a quick demo of our beautiful ecommerce website.

So our ecommerce website is called Timeless.

It sells wall clocks.

It's pretty basic.

We can view the category page, jump into a product page, and add to cart.

There is a checkout button there.

Don't bother clicking that, that's just for show.

All right, so there's six patterns we're going to go through today.

The first one maybe isn't really a pattern so much as a consideration, but it is where we're going to start.

And that is Cumulative Layout Shift, which is affectionately known as Layout Jank.

It is when an element is painted to the browser and then as content comes in, it gets painted somewhere else and moves around and not in the fun way like an animation.

So we'll quickly jump back into our demo again and we'll focus on this transition into the product page and look at the footer that starts right up there underneath the loading indicator.

I'd say it's a fairly common experience when we visit a lot of SPAs: the footer starts at the top and then gets pushed out of the way as content comes in.

It's a jarring experience, as people following along on their device can probably attest to.

Some ways to fix this.

We could go with a skeleton loader, and skeleton loaders are really great because they hold the real estate for incoming elements, both the vertical and horizontal space, whilst also conveying to the user that data is loading.

But we can go with something a little bit simpler, a little bit easier to implement: just use CSS to push the footer to the bottom of the viewport.

So let's have a look at what that looks like.

So again, we'll click on the same product and notice that the footer actually starts at the bottom of the viewport this time.

And then disappears.

I feel that's a much better experience for the user than having the big black footer in the middle of the screen and then moving off.

But we're still showing the footer whilst data is loading, even though it won't end up there.

So do we need the footer to be there to begin with?

Why don't we just get rid of it completely?

And by get rid of it I mean push it below the fold.

It's very likely that our page is going to be long enough that it's going to end up below the fold anyway.

So why don't we just go ahead and do that?

And this is what that would look like.

Click on the product, no footer, just the loading indicator and then the product.

There's a lot less moving going on over the place, a lot less distraction to the user as they load.

Now, we'll talk about some downsides.

The obvious one is short content: what if your page wouldn't otherwise actually push that footer down?

The only real way to fix that is, once the content has finished loading, to update the styling on the footer to bring it back to the base of the viewport. Yes, that reintroduces layout jank, but I'd expect it to happen rarely enough that it probably doesn't matter too much.

All right, the first real pattern we're going to talk about is loading indicators, and more specifically, not using loading indicators.

Using them sparingly, anyway.

So here we go, jump back into the demo.

Let's focus on when we click on the product, we can see the loading indicator.

Pretty basic.

That's our baseline.

Now, I've made a slight change to the code.

The front-end code, that is; the server hasn't changed, and the request time is still exactly the same.

Let's see if we can see what's different.

We click on the product and then we see the product.

Now, I did forget to mention earlier: you might notice a purple circle that radiates out; that is when I click.

So yeah, as I mentioned, the server response is exactly the same.

This feels faster, right?

Is that your perception as well?

Why is that?

This is exactly the situation we had with one of our clients.

They were comparing the website we'd built for them, which displayed loading indicators in response to any action where a user has to wait, with their competitor's website, which didn't.

Their competitor's website would accept the click, send a request off.

When that request comes back, then they would update the UI.

Our requests weren't any slower and in some cases they were actually faster.

But to our client, it felt like their website was slower.

And the reason they were saying that is because they could see when they were being made to wait.

Now, you would think we'd rush out and quickly implement what our competitor had done.

But we had a couple of issues with that approach.

Namely: what if the network request is slow?

So a user would click on the link, nothing would happen.

So they would click it again and again, sending off multiple requests for the same data.

Using up bandwidth, and almost certainly resulting in a rage-click event in your analytics.

So that's what I mean when I say that loading indicators can give the perception of a poor performing website, because users can see exactly when they're being made to wait.
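One way to blunt that repeated-click problem, whichever indicator approach we choose, is to deduplicate in-flight requests. A minimal vanilla JS sketch; the function names are illustrative, not the talk's actual code:

```javascript
// Keep one promise per URL while a request is in flight, so rage clicks
// reuse the pending request instead of firing duplicates.
const inFlight = new Map();

function fetchOnce(url, fetcher = fetch) {
  if (!inFlight.has(url)) {
    const request = fetcher(url).finally(() => inFlight.delete(url));
    inFlight.set(url, request);
  }
  return inFlight.get(url);
}
```

Clicking the same product link three times now issues a single network request; all three clicks await the same promise.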

Alright, can we combine the two?

Can we get the best of both worlds?

So we can handle when a user's on a slow connection, but also handle when a user's on a fast connection.

Let's check this one out.

In this demo, I have throttled my network connection.

The server code is the same.

But then in DevTools, I've throttled the connection so that when I click, we do wait.

And then after a second, the code has a callback that will run and say, okay, we're still waiting.

Let's update the UI to tell the user that we're actually working on their request.

And that's the pattern we see here.
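That delayed-indicator idea can be sketched in a few lines of vanilla JS. This is a sketch under my own naming, not the demo's actual code:

```javascript
// Run an async task; only call onStillWaiting if it takes longer than
// delayMs. Fast responses never trigger a loading indicator at all.
function withDelayedIndicator(task, onStillWaiting, delayMs = 1000) {
  const timer = setTimeout(onStillWaiting, delayMs);
  return task.finally(() => clearTimeout(timer));
}

// Usage sketch (browser):
// withDelayedIndicator(fetch('/api/product/1'), showSpinner).then(render);
```

Fast connections see no spinner at all; slow connections get told "we're working on it" after the threshold.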

Okey doke.

The next one I'll talk about is a bit of an extension of loading indicators, though they're not exactly directly related: the use of a local data cache.

Now, using a local data cache is going to help us reduce the number of loading indicators that we see.

A local data cache essentially means remembering what a user has already seen, and then not showing them a loading indicator in that same context again.

So again, our baseline, so we click on our product, we see our loading indicator, we see the product data, then we go back.

We see a loading indicator again, and we click on the product, and we see another loading indicator.

We don't need all of those loading indicators.

We've already seen the category data.

We've already seen what the product looks like.

There's no need to make the user wait again.

Why don't we instead use that cached data to immediately render something to the screen, whilst in the background sending off an API request for updated data, with which we then update our cache; we can then choose whether to push that to the UI as well or not.

This is the stale while revalidate pattern that people might be familiar with.

So in our day to day at Aligent, we actually use Apollo Client for our GraphQL requests, which makes it super easy for us to both make use of the cache and update the cache, and Apollo Client also pushes updates of cache to UI, so it's really easy to use.

But I said I didn't want to rely on any frameworks or libraries in this talk, so I've done something using just Vanilla JS and storing that in memory.

So we're going to have a look at what that looks like now.

Click the product, we see it loading initially, that's fine.

Back to the category page, no loading indicator.

Back to the product, it's there.

So what we don't see is that in the background, there is that API request.

So we're not going to be stuck with stale data.

And what we might also see up on screen, hopefully that's coming through, is that there is a yellow border added around the product card.

That's just for purposes of this demonstration to show that product is actually in cache.

It will help with our future patterns to understand that.

Now, again, I said that the code isn't important, but I did want to show just how straightforward it was to set up this pattern.

This is the code to save to cache.

So that cache object there is nothing special other than it's a singleton.

It has two functions, one to accept some data and one to get the data.

The most important part there is the key, which says, okay, this is exactly what this data looks like for this product ID.

And then this is how we read that data.

Again, we're reusing that same key, so we know exactly which data we're pre-filling on the page, and as I mentioned, I'm using React, so that's why we've got a useState there.

Note that we pass a callback function; that matters in React, because useState only runs a lazy initializer like this on the first render.
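Putting the two pieces together, the whole stale-while-revalidate flow for a product might look something like this. A sketch in plain JS, using the addToCache/getItem cache shape from the talk but with a made-up loadProduct helper:

```javascript
// Minimal in-memory cache singleton, mirroring the addToCache/getItem shape.
const cache = (() => {
  const store = new Map();
  return {
    addToCache: ({ key, value }) => store.set(key, value),
    getItem: (key) => store.get(key),
  };
})();

// Stale-while-revalidate: hand back cached data immediately (if any), then
// refresh the cache in the background and let the caller update the UI.
async function loadProduct(productId, fetchProduct, onUpdate) {
  const key = `product-${productId}`;
  const cached = cache.getItem(key);
  if (cached) onUpdate(JSON.parse(cached)); // instant render, no spinner

  const fresh = await fetchProduct(productId);
  cache.addToCache({ key, value: JSON.stringify(fresh) });
  onUpdate(fresh); // push the up-to-date data
}
```

The first visit renders once, from the network; every later visit renders instantly from cache and then again, silently, once the fresh response lands.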

All right, so there's a couple of ways to handle our local data cache, and we can combine the two as well.

We can do what I've just done there, which is in memory.

The downside of that, though, is that if a user navigates away from the website and then comes back, all of that in-memory perceived-performance data cache that's been built up is lost; they have to start again.

The same applies if they happen to refresh the page for whatever reason, or, if we have a multi-page app, every time they navigate.

Or we could persist.

And persistence is a really great idea because return visitors get that perceived performance immediately.

They don't have to build up their cache again.

So we have to consider how to persist.

So we have to pick a technology that affords us enough space on disk to write to, because we're not dealing with compressed data.

This is the raw full data.

So we don't get that benefit.

Now the, two most obvious ones that might pop into mind would be local storage and session storage, and these would be great options.

But we do have to keep in mind that browsers will only allocate around five megabytes of storage through these APIs.

so we'd have to have a pretty good purging strategy, and at scale that may not work very well.

There's also IndexedDB, which everyone loves to use.

And if we have a service worker, we could potentially even use the Cache API.

Both of these are allocated, and allowed access to, much more storage space.

And then to actually use them, we swap out our cache object with a read and write interface to that storage solution.
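That swap can be as small as building the cache against a storage-shaped interface. A sketch, on the assumption that anything with setItem/getItem (localStorage and sessionStorage already have exactly that shape) can be plugged in:

```javascript
// Cache with a pluggable backend. Anything with setItem/getItem works;
// localStorage and sessionStorage already expose exactly this interface.
function createCache(storage) {
  return {
    addToCache: ({ key, value }) => storage.setItem(key, value),
    getItem: (key) => storage.getItem(key) ?? undefined, // normalise null
  };
}

// In the browser: const cache = createCache(window.localStorage);
```

An IndexedDB or Cache API backend just needs a small wrapper exposing the same two methods.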

All right.

Now that we've got a local data cache, we can have a look at the next pattern, which is prefetching.

So prefetching is preventing a user from having to wait at all.

Give them the second visit feeling the first time.

So it's downloading resources before they're asked for so that when they are asked for, they're already there.

We can instantly show them to the user.

What can be prefetched?

Literally anything can be prefetched.

But we do want to consider some things.

For example, it's best just to stick to data and text.

Data can be compressed on the server, sent over the network as a small, gzipped, bite-sized piece of data, and then expanded on the client side.

Similarly with JavaScript and CSS, that is text based.

We can gzip that, we can send that through fairly small.

And likely a lot of us are making use of code splitting, which will certainly help in that regard; especially if we know, or can anticipate, the next route that a user is going to go to, we can prefetch that JavaScript or CSS, and then that transition will be instantaneous.

Images and media, however, we probably don't want to prefetch.

They're usually much larger.

We don't really want to waste the user's bandwidth in case they don't happen to look at that image or that media.

But if, in our situation, they absolutely have to see that media in order to continue, then go right ahead and prefetch it, because it'll save them time.

Okay, just as important as what to prefetch is when to prefetch.

There's a few options.

The first, and probably the easiest to implement, would be immediately.

By immediately, I mean: as soon as you know the potential routes that a user could take, go ahead and prefetch them all so that they're all there ready to go.

In the example we're running through, that would be as soon as we've downloaded all the products for a category page, when we know which products a user could click on.

So go through and prefetch them all so they're ready to go.
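As a sketch, that "immediately" strategy is just a loop over the category data. This assumes the addToCache/getItem cache interface from the talk and a hypothetical fetchProduct helper:

```javascript
// Prefetch every product on the category page as soon as we know them,
// skipping anything that is already cached.
async function prefetchCategory(products, cache, fetchProduct) {
  const fetched = [];
  for (const product of products) {
    const key = `product-${product.id}`;
    if (cache.getItem(key)) continue; // already warm, don't re-download
    const data = await fetchProduct(product.id);
    cache.addToCache({ key, value: JSON.stringify(data) });
    fetched.push(product.id);
  }
  return fetched;
}
```

Fetching sequentially, as here, is gentler on the network; a more aggressive sketch could fire the requests in parallel with Promise.all.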

The next one would be when the network is idle.

That is, once the network has finished downloading all the other, probably higher-priority, requests, we then go ahead with our prefetching strategy, which probably takes advantage of that immediate pattern as well.

This one is a little bit tricky to implement, because when is the network idle?

Determining that is something you'd have to work out for yourself.

And the next one, the one that I've actually gone with in this demo, is based on the viewport.

So as you're scrolling down and products come into view, then we think about prefetching.

And the idea behind that is that a user can only realistically click on what they're looking at.

If they haven't seen the product yet, then it's very unlikely they're going to click on it, so there's no need to prefetch it.
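A sketch of that viewport trigger. The decision logic is pulled out into a plain function; the IntersectionObserver wiring underneath is how it would hook up in the browser, and the data-product-id attribute is my assumption, not the demo's actual markup:

```javascript
// Decide which newly visible product cards are worth prefetching.
function visibleUncachedIds(entries, isCached) {
  return entries
    .filter((entry) => entry.isIntersecting)
    .map((entry) => entry.target.dataset.productId)
    .filter((id) => !isCached(id));
}

// Browser wiring (not run here):
// const observer = new IntersectionObserver((entries) => {
//   for (const id of visibleUncachedIds(entries, isCached)) prefetch(id);
// });
// document.querySelectorAll('[data-product-id]').forEach((el) => observer.observe(el));
```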

Okay, so in this next demo, I'm actually just going to scroll down the page.

We'll see that the first row of items are in the cache because they have the yellow border.

The second and third rows aren't in the cache, but as we scroll down, you can see that they do get added.

There we go.

They get added to cache and they're in cache and then we can prove that clicking on one immediately shows the product data.

Okay, so let's talk about some potential downsides.

The first is increased bandwidth.

We are downloading resources that the user may not actually see, so we do want to consider their situation.

We can try to mitigate that a little bit by tracking what is cached.

And if something's already cached for a product or a route, don't try to prefetch it again, because there's already something there that will give them that perceived performance.

And we're going to update that data when they get to that route anyway.

There are also the browser storage limitations that I mentioned earlier, and users on slow connections, or, as I've learned this morning, devices under pressure.

We might want to adjust our prefetching strategy there.

For example, if they are on a slower connection, we might want to be a little bit more conservative with what we prefetch.

Or if they're on a fast connection, maybe we want to be a bit more optimistic.

How do we know what connection they're on?

We can use something called the Network Information API.

Now, this is an experimental API.

It is in a lot of the browsers.

But it's experimental, so don't use it in production code.

Okay: don't use it in mission-critical production code.

It's probably safe to use for the kind of enhancements we're doing here.

The Network Information API gives us a property called effectiveType.

The values on that are 'slow-2g', '2g', '3g' and '4g'; I'm sure 5G will be added at some point, but it isn't there as of this moment.

It also gives us a type, which indicates what kind of connection the user is likely on; for example, cellular or Wi-Fi.

So, using a combination of these properties, we can determine the best prefetching strategy for this user.

There is also a saveData property, which will be true if the user has told their device that they are actively saving data; in which case, as responsible developers, we would disable any prefetching.
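Pulling the effectiveType and saveData properties together, the decision could be sketched like this. The strategy names are mine; navigator.connection is the real, though experimental, entry point:

```javascript
// Choose a prefetching strategy from Network Information API values.
// effectiveType: 'slow-2g' | '2g' | '3g' | '4g'; saveData: boolean.
function prefetchStrategy({ effectiveType, saveData } = {}) {
  if (saveData) return 'none';             // respect the user's data saver
  if (effectiveType === 'slow-2g' || effectiveType === '2g') return 'none';
  if (effectiveType === '3g') return 'conservative';
  return 'eager';                          // fast or unknown connection
}

// In the browser: prefetchStrategy(navigator.connection ?? {})
```

Treating an unknown connection as fast is a judgment call; a more cautious sketch would default to 'conservative'.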

And I mentioned devices under pressure, which relates to the Compute Pressure API that Kenneth alluded to earlier, which I didn't actually know about, but that's very cool.

And then strain on server.

So all these extra requests, could that put a strain on the server?

And yes, absolutely it could, especially at scale.

But at scale, I would hope that we've got a caching layer in front of the app server, so that subsequent requests for the same thing don't actually hit our app server.

They instead hit the cache server.

All right, now we've got prefetching.

So that transition between the category page and the product page is pretty quick; however, that image is clearly coming in late.

We've chosen not to prefetch images and media because we don't want to use up our users' bandwidth.

So how can we go about making it feel instant without actually prefetching?

The browser has already seen the image for this product; it saw it on the category page.

Why don't we just reuse that exact image, put it in place of where the main product image is going to be on the product page, and let it sit as a placeholder whilst the high-resolution image loads in and is put over the top?

Implementation for this is super easy: we just make the same request, and the browser does the rest for us.
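One way to sketch it is to render the (already cached) category thumbnail underneath the high-resolution image, for example as a background. The property names and markup here are my assumptions, not the demo's actual code:

```javascript
// Build product-page image markup: the category thumbnail is requested
// again, which the browser serves instantly from its cache, and it sits
// underneath until the high-resolution image loads over the top of it.
function productImageHtml({ name, thumbnailUrl, imageUrl }) {
  return (
    `<div class="product-image" style="background-image: url('${thumbnailUrl}'); background-size: cover">` +
    `<img src="${imageUrl}" alt="${name}">` +
    `</div>`
  );
}
```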

Let's have a look at that.

So we click on our product, and the entire page is instant.

There's no cache warming happening here; this is a fresh request when we click on a product.

We see an image immediately.

Hopefully it's coming through on the big screen.

But if we take a close look at the image, it starts out a little bit blurry, a little bit pixelated.

And that's because it's the reused image from the category page, which has been stretched up a little bit.

And then the high resolution image just gets put over the top.

So it's like you've got a camera and you're just focused in.

Downsides.

There's a couple I can think of, and the one that I just mentioned is the resolution mismatch.

So what happens if the image on the category page is much smaller than the one we display on the product page?

Potentially that pixelation could be extreme.

I don't think that's too big of a deal because it would just appear like a progressively loaded JPEG.

There's also the potential issue where the image on the category page is different to what the product page will actually display.

So we then preload in an image that then gets replaced by something completely different, which would be quite confusing to a user.

But even if we didn't do that leveraging of the browser cache, I think that would be confusing.

So, where it's in our control, we should probably try to reuse the same image.

Alright, so the experience is pretty instantaneous now; I think our users would be quite happy with that.

Next one, next pattern to talk about would be optimistic UI.

Optimistic being the operative word.

Assume everything will work and go according to plan on the server.

And if we do that, we can update our local state immediately to give the user the feeling of immediate reaction.

So we'll set our baseline.

Back at demo six, we're starting on the product page and we're clicking add to cart.

We can see that we're waiting for it to add to cart.

And then the mini cart pops open to confirm, okay, this has been added to cart.

And then the user, once they see that, they'll be like, okay, great.

I can go to checkout or I can go and do some further shopping.

if we are assuming that everything's going to work fine on the server, we can speed this interaction up.

We don't need the user to wait.

Let's give them that confidence straight away.

So here, in the next demo, we have switched over to Demo 7.

We click add to cart.

It's immediately added.

We go back to the category page.

Let's add another clock.

All right, we've got lots of rooms to add clocks to.

There we go.

So in roughly the same amount of time it took to add one item to cart, or at least the time the user perceived it to take, we've just added three.

All right, downsides.

The obvious one, what happens if it doesn't actually add to cart?

What happens if there's not enough stock?

What happens if they didn't choose enough options?

What happens if there's just an internal server error?

This stuff could happen.

In these instances, we're going to have to capture that failed request.

We're going to have to let the user know, okay, that product didn't actually get added to cart.

We're going to have to update our local cart to remove it again.

And then we may even want to provide the user some more options, like a link directly to that product so they can try and add it again.
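That whole flow, optimistic update plus rollback on failure, can be sketched like this. A hypothetical shape, not the demo's code:

```javascript
// Add to the local cart immediately, then confirm with the server; if the
// request fails, roll the item back out and tell the user.
async function optimisticAddToCart(cart, product, sendRequest, notify) {
  cart.items.push(product); // update the UI state straight away
  try {
    await sendRequest(product);
  } catch (err) {
    cart.items = cart.items.filter((item) => item !== product); // roll back
    notify(`Sorry, we couldn't add ${product.name} to your cart.`);
  }
  return cart;
}
```

On the happy path the user never waits; on the failure path the cart quietly corrects itself and the notification explains why.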

All right, that's all the patterns.

So let's have a little bit of a look at where we came from and where we ended up.

This is where we started out.

We click on a product, we see our layout jank, we wait for everything to come in, we click add to cart.

Fairly typical ecommerce experience.

This is where we ended up.

Click a product, it's there, we add to cart, it's done.

So this second recording is around two seconds, the first one was about four and a half seconds.

So you can see we've shaved that perception of time right down.

Okay, the key takeaway slide, the golden rules of perceived performance.

Reduce and remove layout jank.

Only show one loading indicator.

Prefetch once, if the user has allowed it.

Leverage the browser cache.

And optimistically update the UI.

So in closing, I just want to reiterate, all of those improvements were done in browser.

That server delay was still there the entire time.

That didn't change.

There are also many other ways to influence the user's perception of speed.

I didn't touch on things like resource hints.

I didn't touch on ways to potentially measure these improvements.

And I didn't touch on third-party scripts like InstantClick, which would certainly help as well.

Now, my hope for this presentation was to give you a bit of inspiration, and maybe change your focus a little bit away from raw metrics.

They're good, but they're not everything, and maybe focus a little bit on what the user is actually experiencing in the browser when they're using the website.

Now, these improvements aren't going to directly help with SEO, and that's not the point.

However, if Google sees that people are converting at higher rates on your website, they're going to see it as a higher-value website, and high-value websites get put at the top of search results.

And that's it for me.

Thank you very much.

How many people shop online?

Do you appreciate a performant shopping experience?

Do you open DevTools when you visit an online store,

jump to Lighthouse and run an audit to determine if you're going to have a great experience?

Performance - What is it?

  • Is it all green in Lighthouse?
  • How do you achieve all green?
  • Is it time on site?
  • Is it website conversions?
  • ...
  • Performance can be measured
  • Performant, but slow?

Perceived Performance - What is it?

  • It's a feeling, subjective
  • Feel like the site is instant
  • If it feels fast, it is fast

Introduction

  • Luke Denton
  • Principal Frontend Software Engineer
  • Aligent
  • Perceived performance patterns

Demo App

  • Hosted locally
  • Rendered with React
  • Enhancements made with vanilla JS
  • Server has intentional delay
  • Demo of demo app
  • You can test the patterns right now!
  • wdc.lukedenton.dev
  • perceivedperformance.lukedenton.dev

screencast of the demo App. Luke narrates his use of it and what we see.

Perceived Performance Patterns

  • Cumulative Layout Shift (CLS)

Cumulative Layout Shift (CLS)

  • Layout jank
  • First painted one place, next painted another
  • Not an animation (Demo 0)

Cumulative Layout Shift (CLS)

  • Layout jank
  • First painted one place, next painted another
  • Not an animation (Demo 0)
  • Skeleton Loaders
  • Push footer to base of viewport (Demo 1)

Screencast of demo app. Luke narrates what we see.

Cumulative Layout Shift (CLS)

  • Layout jank
  • First painted one place, next painted another
    • Not an animation (Demo 0)
  • Skeleton Loaders
  • Push footer to base of viewport (Demo 1)
  • Push footer below the fold (Demo 2)
  • Downsides?

Perceived Performance Patterns

  • Cumulative Layout Shift (CLS)
  • Loading Indicators

Loading Indicators

  • Not using loading indicators
  • Well, using them sparingly (Demo 2)

Demo 3

Luke narrates his use of the demo app.

Loading Indicators

Client Story

  • We used loading indicators
  • Competitor waited for request to resolve
  • Our requests weren't slower
  • Client felt they were slower
  • What if the network request is slow?
  • Click, click, CLICK!

Loading Indicators

  • Not using loading indicators
  • Well, using them sparingly (Demo 2)
  • Shows exactly when a user is being made to wait
  • Best of both worlds? (Demo 3)
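One way to get the "best of both worlds" is a delayed indicator: start the request immediately, but only show a spinner if it is still unresolved after a threshold, so fast responses never flash a loader at all. A minimal sketch, assuming hypothetical `onShow`/`onHide` UI callbacks (e.g. toggling a CSS class):

```javascript
// Show a loading indicator only if the work takes longer than `delayMs`.
// Fast responses render with no spinner flash; slow ones still get feedback.
async function withDelayedIndicator(promise, delayMs, onShow, onHide) {
  let shown = false;
  const timer = setTimeout(() => { shown = true; onShow(); }, delayMs);
  try {
    return await promise;
  } finally {
    clearTimeout(timer);       // cancel the spinner if it never appeared
    if (shown) onHide();       // otherwise clean it up
  }
}
```

A threshold around 100–300 ms is a common choice: short enough that users on slow connections still see feedback, long enough that a cached or nearby response feels instant.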

Screencast of demo app. Luke narrates its use.

Perceived Performance Patterns

  • Cumulative Layout Shift (CLS)
  • Loading Indicators
  • Local Data Cache

Local Data Cache

  • Remember what user has already seen (Demo 2)

Screencast of demo app. Luke narrates its use.

Local Data Cache

  • Use cached data while getting an update from the server (Demo 4)
  • stale-while-revalidate
  • Apollo Client
  • Vanilla JS, in memory

Screencast of demo app. Luke narrates its use.

Local Data Cache

  • Implementation
  • Save to cache
    cache.addToCache({
      key: `product-${productId}`,
      value: JSON.stringify(res.product)
    })
  • Read from cache
    const [product, setProduct] = useState(() => {
      const cachedProduct = cache.getItem(`product-${productId}`)
      if (cachedProduct) return JSON.parse(cachedProduct);
      return undefined;
    });

Local Data Cache

  • In memory or persisted
  • How to persist
    • localStorage
    • sessionStorage
    • IndexedDB
    • Cache API
  • Swap cache object for read/write to storage location
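The cache object used in the slide code can be sketched as a thin wrapper over an in-memory Map; persisting then means swapping the Map calls for `localStorage`, `sessionStorage`, IndexedDB, or the Cache API. The `addToCache`/`getItem` names follow the slides; the body is an assumption:

```javascript
// Minimal in-memory cache matching the interface used in the slide code.
// Values go in as JSON strings and come back out as-is, so the caller
// JSON.parses on read. To persist, replace store.set/store.get with
// localStorage.setItem/localStorage.getItem (same shape of API).
function createCache() {
  const store = new Map();
  return {
    addToCache({ key, value }) { store.set(key, value); },
    getItem(key) { return store.get(key); }, // undefined on a miss
  };
}

const cache = createCache();
```

Because reads and writes go through this one object, changing the storage backend later touches no call sites.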

Perceived Performance Patterns

  • Cumulative Layout Shift (CLS)
  • Loading Indicators
  • Local Data Cache
  • Pre-fetching

Pre-fetching

  • Prevent users from waiting, at all
  • "second visit" feeling, the first time
  • Downloading resources before they're asked for
  • What can be pre-fetched?
    • Any resource can be pre-fetched, but
    • Best to stick to just data/text
      • Data requests
      • JS/CSS
    • Skip images/media, unless absolutely sure
  • When to pre-fetch?

Pre-fetching

When to pre-fetch

  • Immediately
  • Network is idle
  • Based on Viewport (Demo 5)

Screencast of demo app. Luke narrates its use.

Pre-fetching

  • Prevent users from waiting, at all
  • Any resource can be pre-fetched, but
  • Best to stick to just data/text
  • Skip images/media, unless absolutely sure
  • When to pre-fetch?
  • Downsides?
    • Increased bandwidth - Track what is cached
    • Storage limitations
    • Slow connections/Device under pressure

Pre-fetching

Slow Network

  • Network Information API
  • effectiveType
    • slow-2g
    • 2g
    • 3g
    • 4g
  • type
  • saveData
  • Compute Pressure API
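These connection checks can be folded into one guard. The sketch below takes a connection-shaped object (as `navigator.connection` provides where the Network Information API is supported) rather than reading the global, which keeps the policy testable; defaulting to "yes" when the API is absent is an assumption, not something stated in the talk:

```javascript
// Decide whether pre-fetching is appropriate, based on the fields the
// Network Information API exposes (effectiveType, saveData).
function shouldPrefetch(connection) {
  if (!connection) return true;            // API unsupported: assume it's fine
  if (connection.saveData) return false;   // user opted into data saving
  const slow = ['slow-2g', '2g'];
  return !slow.includes(connection.effectiveType);
}

// In the browser: shouldPrefetch(navigator.connection)
```

Note the API itself is not available in all browsers, which is exactly why the absent case needs an explicit policy.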

Pre-fetching

  • Prevent users from waiting, at all
  • Any resource can be pre-fetched, but
  • Best to stick to just data/text
  • Skip images/media, unless absolutely sure (Demo 5)

Downsides?

  • Increased bandwidth
  • Track what is cached, don't pre-fetch cached
  • Storage limitations
  • Slow connections/Device under pressure
  • Strain on server
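The "track what is cached, don't pre-fetch cached" point suggests a prefetcher that remembers what it has already requested, so a hover or viewport trigger firing twice doesn't double the bandwidth cost. A minimal sketch with an injectable fetch function (in the browser this would be `fetch` itself):

```javascript
// Deduplicating prefetcher: each URL is requested at most once, no matter
// how many triggers (hover, viewport entry, idle callback) fire for it.
function createPrefetcher(fetchFn) {
  const seen = new Set();
  return function prefetch(url) {
    if (seen.has(url)) return null; // already fetched or in flight
    seen.add(url);
    return fetchFn(url);
  };
}
```

Marking the URL as seen before the request resolves also covers the in-flight case, where two triggers fire in quick succession.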

Perceived Performance Patterns

  • Cumulative Layout Shift (CLS)
  • Loading Indicators
  • Local Data Cache
  • Pre-fetching
  • Browser Cache

Leveraging Browser Cache

  • Image is coming in late
  • Can we make it feel instant without pre-fetching?
  • Re-use already seen image
  • Simply re-use the same request (Demo 6)

Screencast of demo app. Luke narrates its use.

Leveraging Browser Cache

  • Image is coming in late
  • Can we make it feel instant without pre-fetching?
  • Re-use already seen image
  • Simply re-use the same request (Demo 6)
  • Downsides?
    • Resolution mismatch
    • Different main image on product page
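The request re-use trick depends on the product page issuing the exact same image URL the listing page already loaded, so the browser serves it from cache instantly while the full-resolution version downloads. A sketch with a hypothetical URL scheme:

```javascript
// Return both variants of a product image. `thumb` must be byte-for-byte
// the same URL the listing page used, or the browser cache won't be hit.
// The path and ?w= size parameter are hypothetical.
function imageSources(productId) {
  const thumb = `/images/products/${productId}?w=300`;  // same request as listing
  const full = `/images/products/${productId}?w=1200`;
  return { thumb, full };
}

// In the browser (sketch): paint the cached thumbnail immediately, then
// swap in the high-resolution image once it has loaded.
//   img.src = imageSources(id).thumb;
//   const hiRes = new Image();
//   hiRes.onload = () => { img.src = hiRes.src; };
//   hiRes.src = imageSources(id).full;
```

This is where the resolution-mismatch downside comes from: the upscaled thumbnail is briefly visible before the swap.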

Perceived Performance Patterns

  • Cumulative Layout Shift (CLS)
  • Loading Indicators
  • Local Data Cache
  • Pre-fetching
  • Browser Cache
  • Optimistic UI

Optimistically Update UI

  • Assume everything will work
  • Immediately update local state (Demo 6 -> 7)

Screencast of demo app. Luke narrates its use.

Optimistically Update UI

  • Assume everything will work
  • Immediately update local state (Demo 7)
  • Downsides?
    • Action fails
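Optimistic UI with a rollback for the failure case can be sketched as: apply the next state immediately, fire the request, and restore the previous state if it rejects. `applyState` and the request shape are hypothetical (in the React demo this would be a setState call and a network request):

```javascript
// Apply a state change before the server confirms it; revert on failure.
async function optimisticUpdate({ current, next, applyState, request }) {
  applyState(next);            // update the UI immediately, assume success
  try {
    await request();           // e.g. POST the add-to-cart action
  } catch (err) {
    applyState(current);       // action failed: roll back
    throw err;                 // let the caller surface an error message
  }
}
```

Rethrowing after the rollback matters: the UI should both revert and tell the user the action failed, otherwise the item silently vanishes from the cart.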

Before and After

Screencast of demo app. Luke narrates its use.

Golden Rules of Perceived Performance

  • Reduce/remove layout jank
  • Only show 1 loading indicator
  • Pre-fetch, once, if the user has allowed it
  • Leverage browser cache
  • Optimistically update the UI

Closing

Closing

  • All done in browser
  • Many ways to influence user perception
  • Raw metrics aren't everything
  • SEO, sort of
  • Thank you