(audience applauding) - [Divya] Cool, awesome, thank you so much for having me. It's a huge honor to be here.
I'm really excited.
I hope all of you are too.
The talks so far have been great, the organizers have done a wonderful job, so let's give a round of applause for them. (audience applauding) As Tammy mentioned, my name is Divya Sasidharan, and I go by shortdiv online.
It's a moniker I got in high school, because I am short. And I'm a developer experience engineer at Netlify, where I work on tooling and content to help developers use our platform efficiently and effectively.
And if you're unfamiliar with Netlify, I will talk your ear off about it.
But essentially, it's a static site deployment platform. And it's a really cool tool.
So if you're not already using it, please do and I have stickers as well.
And, so, I keep reiterating this, but I'm so excited to be here.
The last time I was in Amsterdam, I was 16, which contrary to popular belief was a long time ago. And I don't remember a lot from that trip except for a couple of things.
One being that the Dutch people are very tall. Because I think the average height of a Dutch person is like, six feet or something like that.
But it's still like not as tall as the tallest emperor penguin.
Did you know the tallest emperor penguin is six feet eight inches? It's a fun fact I bet you didn't think you were gonna learn that today (laughs).
And so the other thing I learned is that Dutch people really like condiments.
I don't know if this is true, is this true? Sort of, maybe? Kind of? Yeah, so I really like understanding the local cuisine when I'm in a new country, because I really like food.
And I noticed that there's more than one type of condiment that you can get.
I was at the McDonald's, that's probably the most American thing I'll say 'cause I went to McDonald's for dinner yesterday. And I was just flabbergasted by the options I had, in terms of the condiments I could get.
Chili mayo is my favorite.
It's really tasty.
And the reason why I'm flabbergasted is mainly because in America when you, well also side note, I really like these fries with sauce on them.
They're so cool.
Also, I still wanna know why they're called war fries. They are delicious, but, I have no idea why that's the thing.
But the only reason why I'm flabbergasted by the amount of condiments available is because in America, when you order fries, you only get two options. You get ketchup and mayo.
And if you're lucky, they'll give you Tapatío or Sriracha. I'm someone who really likes mixing sauces, and people end up giving me weird looks when I ask for something like Sriracha mayo. Sometimes my request is granted.
Oftentimes it is not, sadly.
And so I don't always have a good dining experience. So I'm really happy that here, I don't have that experience. And so overall, the ability for the sauces or the condiments to be pre-available or premade for me is really nice. And on the web, I prefer that kind of an experience, where a resource that I want is available for me without me having to ask for it.
So I don't have to wait.
It's just available magically for me.
And this is the idea of prefetching.
And in order to understand prefetching, before we dive into the details, it's worth understanding what the current state of things are.
And so Patrick talked a little bit earlier in depth about TCP, TLS, and all of the HTTP protocols. I will not bore you with the details, not that his talk was boring.
It was very interesting.
(audience laughs with Divya) But just a quick overview in an animated fashion of how the protocol works in case you forgot already. Essentially we're going to look at a very simple example, which is trying to move and navigate to a new page, so this is where you type in an address in the address bar.
Essentially the client doesn't yet know where that domain lives.
And so it makes a DNS request to the server, and the server essentially is like, "Here you go." "Here is the IP address associated with that domain." And so once the client knows that, it is able to do that TCP TLS dance.
Again, I won't go into details about that, because Patrick already talked about this.
But essentially, that allows the client and the server to establish a connection with one another, which is how they exchange resources like HTML, style sheets, and so on.
And so essentially, what happens is that this TCP, TLS, and DNS resolution dance takes a bit of time.
Oftentimes, it takes a couple of milliseconds if you're on a fast connection, but it can take anywhere from that to a couple thousand milliseconds if you're on a slower connection, because again, it depends on how far you are from the server and the bandwidth that you have. The other thing that can take time is getting the requests in order to render the page. So initially, once you establish the connection between the client and the server, the browser needs to get the pages in order to render them to the screen. And so the first request is for the index.html page. The browser gets that, it parses that information, and it realizes it needs additional resources, like style sheets, and script tags, and so on. And so it gets those from the server.
It essentially makes those requests separately and gets them, which is really nice.
This animation needs to load.
And so that also takes a bit of time.
Not only does it have to establish that connection, it also has to get those resources.
And of course, with HTTP/2, which is what Patrick talked about, again, not going to reiterate, there's a lot of ways in which you can make this faster, but it still takes a bit of time in order for the user to see the things that are rendered to a screen.
And so a nice way of making this faster is prefetching.
So when you prefetch, essentially you are anticipating where a user is going to go, and prefetching or getting resources that they will need in the next page that they navigate to.
And so in a sense, it's a big step towards a faster and less visible internet.
And to some extent, browsers already have ways of doing this by default.
So TCP preconnect is one way in which a browser already does things under the hood to make navigation really fast. But the stuff that I'm talking about is specifically the techniques that we as developers can use to make websites faster from a user experience standpoint. Specifically, we're talking about DNS prefetch, link prefetch, and prerendering, and these are the three core concepts when you're talking about prefetching in general. DNS prefetch is the most basic form of prefetching. Essentially, all you're doing is the DNS resolve ahead of time. And DNS requests in general are really small in terms of bandwidth, but the latency is pretty high. Especially if you're on mobile, and you don't want mobile users to have to wait a really long time.
And so with prefetch, especially DNS prefetch, you can reduce the latency significantly.
And so users almost see a one second difference in terms of load.
And so that is pretty significant.
And so DNS prefetch is a really useful way for you to do that pre-resolve ahead of time. The next thing, in addition to DNS prefetch, is link prefetch.
It's one step ahead of DNS prefetch.
Because in DNS prefetch, you're just doing that preresolve.
And in link prefetch, you're going a little further in terms of getting the resource that the user will need when they navigate away to the next page.
And so that's pretty neat as well, because again, you save a lot of time from having to make an additional request, when the user finally does navigate away.
And the last is prerendering, which is obviously the most extreme form of prefetching. As the term might indicate, prerendering is essentially rendering before a user navigates.
So it renders the entire page.
Not only does it do that preresolve, and link resource fetching, it also renders it behind the scenes.
And so from a user perspective, it's really seamless because they almost see an instantaneous load. So it's really nice.
But one thing to note is that it's incredibly bandwidth intensive.
And so it's something to take into consideration whether or not it actually makes a performance difference to the user. And so those are the different means in which we can do prefetching.
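To make those three concrete, here's a sketch of what each hint looks like as a link tag in the document head. The URLs are made-up examples, and the little helper is just for illustration, not a real library.

```javascript
// The three resource hints discussed above, rendered as <link> tags.
// A tiny helper so we can check the output; in a real page you would
// just write the tags directly in the <head>.
function renderHint(rel, href) {
  const allowed = ["dns-prefetch", "prefetch", "prerender"];
  if (!allowed.includes(rel)) throw new Error(`unknown hint: ${rel}`);
  return `<link rel="${rel}" href="${href}">`;
}

// DNS prefetch: resolve the domain ahead of time (cheap, saves latency).
const dns = renderHint("dns-prefetch", "https://example.com");

// Link prefetch: also fetch the resource the user will likely need next.
const page = renderHint("prefetch", "/contact/index.html");

// Prerender: go all the way and render the next page in the background.
const full = renderHint("prerender", "/contact/");
```

The cost goes up in that same order: a DNS resolve is nearly free, a prefetch costs one request, and a prerender costs everything the page needs.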
Of course, (murmurs) mentioned this in his talk at Velocity Conf a couple of years ago: the cost-benefit analysis between using all of these things. It's not very much a science, so the numbers just range from low to high, but you can tell that if you use something like prerendering, the cost, as I mentioned, is so high that if you get it wrong, the user basically doesn't win.
Because you're wasting a lot of bandwidth, you're wasting a lot of resources.
And that's not very good for the user.
It's a huge loss.
But on the other hand, if you prerendered correctly, the user sees that instantaneous load.
And so it's really seamless from a user perspective. But of course, again, it's something you might want to weigh and take into consideration when figuring out whether to use DNS prefetch, link prefetch, or prerendering. And so, there are some cases in which prefetching is really obvious.
So oftentimes, when we make decisions on how exactly to prefetch resources, they can take the form of hard-coded prefetches. That's not ideal, because again, you're making an assumption. But generally speaking, search engines, for instance, have assumptions they can safely make in terms of what to prefetch.
So for instance, whenever a user types something into the search bar, various links are populated as results.
Generally, the first few links are prefetched, because those are the ones that users are likely to click on.
Not many users navigate to the next page, not many users navigate to the bottom of the page. And so that makes sense as resources to prefetch. The other thing that's useful to prefetch also is general user workflows.
So if you're working on an application that has a login flow, where a user has to go from a login to a dashboard and various other sub pages, you can almost be sure that they will go from the login to the dashboard.
And so when they're on the login page while they're logging in, you can prefetch all of the resources they'll need in the dashboard like sub menus, articles or any other things that they would need for that dashboard to load quickly. And so these are effective means in which you can make decisions on how exactly to prefetch that can be a bit obvious. But of course, oftentimes applications are not super obvious, or the situations in which we want to prefetch are not as clear cut.
And so the thing with prefetching so far is that we can, again, take user interaction into consideration. So far, we've made assumptions that users are going to click the top few links, and so on. But in a specific case, what we can do is use Intersection Observer to see whatever links are in the viewport, and then grab the particular resources associated with those links and prefetch them appropriately. And so again, now we're taking the actual user actions into consideration. We're not assuming the top few links are the ones that the user will always click on; we're seeing whatever the user is seeing within their viewport and prefetching appropriately. The other way in which you can take user interaction into consideration is to look at user hovers, which is what InstantClick does.
It's a plugin that you can use in your applications. Essentially, what it does is capture the hover event that happens.
And when a user hovers over a link, it just prefetches all of the resources associated with that link.
So again, it's a really nice way to take user interaction into account when deciding how exactly to prefetch your resources.
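As a rough sketch of the viewport-based approach, assuming you separate the decision logic from the DOM wiring (this structure is my own, not InstantClick's or any library's actual code):

```javascript
// Given IntersectionObserver entries, pick the hrefs worth prefetching.
// Pure logic, so it runs without a DOM; in the browser you would pass
// real IntersectionObserverEntry objects.
function linksToPrefetch(entries) {
  return entries
    .filter((entry) => entry.isIntersecting)
    .map((entry) => entry.target.href);
}

// In the browser, the wiring might look like this (sketch):
// const seen = new Set();
// const observer = new IntersectionObserver((entries) => {
//   for (const href of linksToPrefetch(entries)) {
//     if (seen.has(href)) continue; // only prefetch each link once
//     seen.add(href);
//     const link = document.createElement("link");
//     link.rel = "prefetch";
//     link.href = href;
//     document.head.appendChild(link);
//   }
// });
// document.querySelectorAll("a").forEach((a) => observer.observe(a));
```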
But of course, all of this is very speculative. Sure, by taking user interaction into consideration, we might be looking at some metrics in terms of what we think would actually enhance the user experience.
But again, we're still pretty much just guessing or estimating what exactly will improve the, as Tammy mentioned, the happiness metric. And so a better approach is to move towards something that's more predictive, where we're actually using data in order to make decisions as to how to predict and how to prefetch resources.
So a really good example of this is if for instance, you wanted to know whether to bring an umbrella when you were leaving the house.
Oftentimes you would look at the weather, or you would kind of guess like, "Yesterday it was raining, so today, it probably will rain." You kind of sort of have a sense of probability: yesterday was cloudy, so what's the chance of rain today? And so a better approach, instead of looking at just one single data point, is to look at multiple data points. So let's look at one week's worth of weather data. Within one week's worth of weather data, we have some sense of when it's rainy and when it's cloudy.
So let's assume that we wanted to know what the weather is like following a cloudy day. So following a cloudy day, it's cloudy, it's cloudy, it's cloudy.
And it's rainy that one time. So I know that three quarters of the time when it's cloudy, the next day will be cloudy, and one quarter of the time it will be rainy.
So it's sort of kind of a probability.
It's not a huge data set.
So again, we're working with a small percentage or probability.
But again, it's something that's a bit significant for you to make a decision on whether or not to bring an umbrella when you leave the house.
And of course, you can make the same prediction in terms of what kind of weather will follow a rainy day.
So in this particular data set, only one data point follows a rainy day.
So it's cloudy after a rainy day.
So you assume 100% of the time, it is cloudy after a rainy day.
Again, the data set is really small, but we can make that assumption because we're dealing with a very specific use case. And so with this, we can actually build a transition matrix, which essentially shows you the correlation between two things. So I know that given an overcast day today, 75% of the time, it's going to be an overcast day tomorrow, and 25% of the time, it's going to be a rainy day tomorrow. And similarly, if it is rainy today, I know it's going to be overcast tomorrow 100% of the time, because again, we're dealing with that limited data set. And for those of you who are visual, we can also visualize this if a table is not clear enough; you can kind of see the arrows pointing to one thing or the other, or pointing to itself. And that gives you a sense of the probability of what will happen following a specific event. So one past event will determine the current event or the future event.
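That transition matrix can be computed mechanically from a sequence of observations. A minimal sketch, using "C" for cloudy/overcast and "R" for rainy (my own labels), with a toy sequence picked to reproduce the 75/25 split above:

```javascript
// Build a first-order transition matrix from a sequence of observations:
// for each state, what fraction of the time does each other state follow?
function transitionMatrix(sequence) {
  // Count every adjacent (from -> to) pair.
  const counts = {};
  for (let i = 0; i < sequence.length - 1; i++) {
    const from = sequence[i];
    const to = sequence[i + 1];
    counts[from] = counts[from] || {};
    counts[from][to] = (counts[from][to] || 0) + 1;
  }
  // Normalize each row so it sums to 1 (a probability distribution).
  const matrix = {};
  for (const from of Object.keys(counts)) {
    const total = Object.values(counts[from]).reduce((a, b) => a + b, 0);
    matrix[from] = {};
    for (const to of Object.keys(counts[from])) {
      matrix[from][to] = counts[from][to] / total;
    }
  }
  return matrix;
}

// A week where cloudy is followed by cloudy three times and rainy once,
// and the one rainy day is followed by a cloudy day.
const weather = transitionMatrix(["C", "C", "C", "R", "C", "C"]);
```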
And so we can take that specific idea of probability and adapt it to the situation of the web. So specifically, we will look at routes, because that's a very simple way of looking at things. So essentially, what we can do is we can see where exactly a user came from, and where a user currently is.
So if a user currently is at menu, we can see that they came from the about page. I'm gonna reorder them a little bit so they are a bit clearer and you can kind of see the correlation between certain things.
But I see that from the about page we have four pieces of data.
It's a little hidden, but it's in Behavior, Site Content, All Pages, and there's a little tab called Navigation Summary. It's really useful, because it gives you a sense of where the user came from based on where they are now.
I say it's useful 'cause the raw data is useful. This dashboard actually tells you nothing.
Basically, because it makes you believe that there are more data points when there actually aren't. So the dashboard shows you that there's a previous page path and a next page path, which, when I see it, makes me assume that I have three pieces of data: the current page, the next page, and the previous page. But that is not the case, so do not think that, which is what I thought. And so a better approach, instead of using this dashboard, which is pretty garbage, is to grab the raw data. So I'm just going to fetch it from Google Analytics using the API and show you the raw data, because I think that's a bit more descriptive than looking at the dashboard.
No offense to anyone who works on Google Analytics. This is not a roast.
I feel like this is a constant (laughs). And so within the data itself, we have various pieces. It's essentially an array of all the current pages, next pages, and all those interactions that happened. So we'll look at one piece; within one object in the array, there's a dimensions and a metrics. The dimensions is two pieces of data, specifically the previous page path and the current page path.
So here, it's not very useful because it's saying the previous page and the current page are the same page.
And the metrics is the second piece of data, which is useful, because it shows you the page views, so all of the people who moved from the previous page to the current page, and exits, which means all the people who went from the previous page to basically exiting the entire application altogether.
So that's pretty useful as well, and we'll be using that data as we move through this demo. The other thing that's really useful is that the data shows you multiple pieces of metrics. So we looked at one specific piece of the array, which was just one dimension, but we can look at multiple dimensions.
So essentially, we have a previous page path that matches. In these particular instances, the previous page path is the root of the homepage in both, but the current page path is different, because in the first instance, the user didn't navigate anywhere.
They essentially stayed on the homepage.
And in the second instance, they did navigate to the post page.
And so we see two varying interactions or two varying actions that happened.
And so that's really useful.
And we can use all of this information to predict where a user will go given the root page. And so what we'll do is take this raw data that I just pulled from Google Analytics, and we're going to try to aggregate everything so we can see it all in a bit clearer of a manner. And I won't go through the specifics of how I clean data, because this is a performance talk, not a data cleaning talk.
But essentially, what's happening here is that I pulled all of the data.
I aggregated it, and it's showing me, based on the current page (so this particular object is the current page, which is the root), all the possible next pages: there's posts, there's about, and there's contact. Those are all the pages that users have navigated to from the homepage.
And so this is really useful because I can see all the page views as well. And the particularly useful thing is that I can aggregate all of that.
So I know that given the root page, four people... I tested my own site, which is why the numbers are very low. Four people, four of me, went to the posts page. One went to the about page and one went to contact. And so the total next page views, the total actions of going from one page to the next, is six, 'cause we're doing four plus one plus one.
Basic arithmetic.
The other piece of data that's pretty significant is the total number of actions that possibly happened. And so I'm highlighting exits and next page views. So next page views is all the users that navigated from one page to the next within the application, and exits is anyone who left the application altogether. And because I tested my own site, I didn't exit my own application.
And so six plus zero is six.
So there are six total possible actions that happened in terms of navigation either away from the application or to another route in the application.
And so with that, we can actually have some probability. So I know that four users went to posts given the root. Six is the total number of actions that potentially happened going from the root, and so four over six, which is two thirds, which is 0.666...
is the statistic, or the percentage certainty that I have, that a user will go to posts from the root.
And similarly I can do the same for the about and the contact page.
One sixth of users, whatever that percentage is, will go to the about page given the root page. And so again, this is basic probability.
I'm just guessing based on my data set.
And I can tell with what certainty I know a user will go to a certain page given that. And so with this, I can actually automate a lot of it, because I have the data from Google Analytics: I pulled it, I analyzed it, and I have certainty metrics. And so I can put that into my build itself, so that it gives me those prefetch links that I so need. And so I call this predictive build automation, because it predicts stuff for me.
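A sketch of that certainty calculation, assuming rows simplified from the Google Analytics response (the prevPage, nextPage, pageviews, and exits field names are my flattening of the dimensions and metrics, not the exact API shape):

```javascript
// certainty = page views for a given next page, divided by the total
// number of actions (all next-page views plus all exits) from this page.
function certainties(rows, page) {
  const matching = rows.filter((r) => r.prevPage === page);
  const total = matching.reduce((sum, r) => sum + r.pageviews + r.exits, 0);
  const result = {};
  for (const r of matching) {
    result[r.nextPage] = r.pageviews / total;
  }
  return result;
}

// The numbers from my own site: four visits to /posts/, one each to
// /about/ and /contact/, and no exits, so six total actions.
const fromRoot = certainties(
  [
    { prevPage: "/", nextPage: "/posts/", pageviews: 4, exits: 0 },
    { prevPage: "/", nextPage: "/about/", pageviews: 1, exits: 0 },
    { prevPage: "/", nextPage: "/contact/", pageviews: 1, exits: 0 },
  ],
  "/"
);
```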
And so as I mentioned, I'm using Google Analytics; the raw data is nice, even if the dashboard isn't. And I'm using 11ty for my actual static site generator. I usually use Vue, and I love Vue.
But I'm using 11ty because I think it's a really performant static site generator. Also, Zach is awesome.
He's not here.
He spoke last year.
And 11ty is really cool.
So I believe that everyone will understand what I'm saying here.
And Netlify is what I'm using for the actual build itself, so Netlify will run the build and like kind of push it to the CDN.
So in order to talk to Google Analytics, here is where I tell you how I got data from Google Analytics.
Essentially, I'm liaising with the API, and that requires doing OAuth.
So I do some OAuth stuff, which involves creating a client. And then I, of course, pass in some query parameters, which again, is very Google Analytics specific. But what I'm doing here essentially is grabbing data from 30 days ago, given yesterday being the last day.
And then I can do some authorization stuff, and I can get some data.
So from line 38, to line 44.
All of that is grabbing the data from Google Analytics. It's not actually doing any data munging whatsoever. But line 51 is when I'm actually doing the data processing.
Again, I won't go over the algorithm for that, it's pretty naive.
I will share that with you if you want to look at the code and criticize it.
But essentially, what it does is that I aggregate a lot of the things and I'm giving myself an aggregate of all the pages available.
And so that is a really nice sense of I have all of my certainties, and so on.
And so this is what the data looks like after I have gone through or run it through my data munging stuff. So essentially, I have an array of a bunch of objects. Specifically, I have the page path, which is the current page, and then the next page path, which is where a user is going to go next.
And what the certainty is of the user going to that page. So the algorithm that I'm using specifically takes the highest certainty of all pages. There are multiple different actions that can happen, but I'm only looking at what's statistically significant, so the highest out of the set.
So in this particular case, I know that a user given about will go to the contact page 50% of the time.
And so that's significant.
And that's something that I want to focus on. And so with 11ty, with the data and everything, I can put that into a 11ty template.
But essentially, 11ty uses Nunjucks, which is a Mozilla templating engine. And so that's why you see that templating syntax happening. What I'm doing is grabbing prefetch links, which is the data file in which I did a lot of my data processing.
I'm iterating through all of those entries, trying to match them with the current path that I'm on. So what is the current URL? Is it in the array? If it is, I'm running it through a filter to grab the next page that I need in order to prefetch it. So this is the code that you see in terms of how exactly I'm creating my prefetch itself. And so in that filter, just to show you a sense of how 11ty works.
This is the 11ty config.
And I'm creating a filter that I'm running my entry through. And I can grab the next page certainty, which is just the percentage at which I think the user will go to the next page.
And I'm creating a threshold, which is essentially: at what point do I care? At what point is this significant enough for me to care? And so my threshold is 0.5, which is 50%, for now. And then, given that the next page certainty is more than the threshold, I return that page itself. And so with this, I'm making sure that any time there's a certainty of 50% or more that a user will move from one page to another, I will return that and have that prefetch available.
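A minimal sketch of what that filter might look like. eleventyConfig.addFilter is Eleventy's real API, but the filter name, the entry shape, and the threshold value here are my assumptions:

```javascript
const THRESHOLD = 0.5; // only prefetch when we're at least 50% sure

// Given one entry from the prefetch-links data file, return the next
// page worth prefetching, or null if nothing clears the threshold.
function nextPageToPrefetch(entry, threshold = THRESHOLD) {
  if (!entry) return null;
  return entry.nextPageCertainty >= threshold ? entry.nextPagePath : null;
}

// In .eleventy.js, the registration would look something like:
// module.exports = function (eleventyConfig) {
//   eleventyConfig.addFilter("prefetchTarget", nextPageToPrefetch);
// };
```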
So this is where I hope the demo works.
Essentially, I had some like...
yes, this is useful.
I'm going to be looking at the network tab, and I know that a user goes from menu to contact. So here is the menu page.
And here's the Network tab.
And I'm going to go to the contact page.
And you can see automatically that prefetch cache. Can everyone see that? Cool, yeah.
So essentially, what's happening is that I automatically, I'm using the data.
I'm checking to see that there is a certainty, and I'm adding that prefetch link in the build. And so when a user goes from menu to contact, that gets prefetched. But if they go backwards, it doesn't get prefetched, because you don't see that.
If a user goes from the about page, that doesn't get prefetched, but they do see that prefetch when they go to the contact page, so again, I'm using that data in my build itself.
And so it's really nice.
Also, it's really punny.
So I use the static site generator 11ty specifically for doing a lot of this, and the code that I wrote and a lot of the inspiration comes from Guess.js, which is written by the Chrome team, and they built it as a way of creating prefetch links predictively.
Generally, Guess.js is built for webpack.
So if you are building a single page application with something like Vue or React, there are oftentimes cases where you would need your routes to be parsed, 'cause you're using webpack and various builds. So Guess.js has a parser and a webpack config that allows you to do a lot of the stuff I talked about pretty easily, without you having to write all the parsing logic on your own. So I highly recommend it.
It's a really great tool, and they're actively working on it. So I'm going to return to the idea that I mentioned earlier: so far, our algorithm is pretty naive, in the sense that we're only looking at one piece of data, which is where the user came from immediately before the current page. And so we might want to move towards something that's a bit more intelligent, intelligent meaning that we have more data points and can make more accurate predictions as to where a user will go, because a prediction is better when it's accurate.
And so coming back to the weather analogy, we've only looked at one week so far, but maybe we can go a bit further. So before that, let's look at a whole month.
So this is weather data, random weather data that I created, that shows you what the weather is like within a month.
And so what I want to do is: in the past, I looked one day ahead, but now I want to look two days ahead.
So this is one day ahead.
I know that it's going to be cloudy, it's gonna be cloudy or whatever.
But here I'm going to look two days ahead. So given two days of cloud, it's gonna be cloudy; two days of cloud, it's gonna be rainy.
You get the idea.
So, three sixths of the time, which is 50% of the time, a two-day run of cloudy days will lead to a cloudy day, and 50% of the time it will lead to a rainy day. I should have written that down; doing stats on the stage is a bit stressful.
Simple stats but, we can even go further with that, in that we can look three days ahead.
So instead of two days, we go three days.
So three days of cloudy days leads to almost 100% rainy days.
So that's way better than what we had before. Because in the past, if we looked at two cloudy days, we had a 50% certainty, but now I know with almost 100% certainty that it will rain. And so that's useful if I wanted to, I don't know, carry an umbrella.
That's pretty heavy.
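The two- and three-day lookahead is just a higher-order version of the earlier transition matrix, conditioning on the last two observations instead of one. A sketch of the order-2 case, with a toy sequence of my own:

```javascript
// Build a second-order transition matrix: for each pair of consecutive
// observations, what fraction of the time does each state follow?
function transitionMatrix2(sequence) {
  const counts = {};
  for (let i = 0; i < sequence.length - 2; i++) {
    const from = sequence[i] + sequence[i + 1]; // two-day history, e.g. "CC"
    const to = sequence[i + 2];
    counts[from] = counts[from] || {};
    counts[from][to] = (counts[from][to] || 0) + 1;
  }
  // Normalize each row into probabilities.
  const matrix = {};
  for (const from of Object.keys(counts)) {
    const total = Object.values(counts[from]).reduce((a, b) => a + b, 0);
    matrix[from] = {};
    for (const to of Object.keys(counts[from])) {
      matrix[from][to] = counts[from][to] / total;
    }
  }
  return matrix;
}

// In this toy month, every "CC" run is followed by rain.
const order2 = transitionMatrix2(["C", "C", "R", "C", "C", "R"]);
```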
And so with Google Analytics, you can't do a lot of this. But you can actually do cookie based tracking. Hold your criticism, please.
(Divya laughs with audience) But essentially, what you can do is that you can grab the cookies and see where exactly a user is going to go.
And so this is a thought experiment, by the way. And so essentially, what we're doing is instead of going one step deep, we're going two steps deep. And so now I know based on where a user has gone before, where the next few pages will be.
So given the root page, a user goes to about and then the menu.
And so 50% of the time, I know that a user goes from the homepage to the menu eventually.
And so I can automatically tell that that is a general flow that they're going to go there. And I can sort of prefetch the menu ahead of time, so that users don't have to load and wait for that load to happen.
Similarly, I can also look at random user experience weirdness that happens. So here I see that users are going from contact to menu back to contact.
And similarly from the root, or the homepage, to menu and back to the homepage.
So you see a lot of cyclic actions that happen. And so not only does looking at data allow you to prefetch and get resources a user needs, it also gives you a sense of how exactly a user is using your application.
And perhaps it might give you a sense of how to build your application better or provide better information as to what a user is looking for.
And so that's pretty significant, especially because performance isn't just about making websites fast.
It's also about making sure your users are getting exactly what they need when they need it. And so now you can criticize me, because of course, we have this thing called GDPR. We're in the EU.
So cookie based tracking isn't great.
California actually also just released their version of GDPR.
I think it was like GDPR for the US or whatever. But essentially, it's just data protection for the user. So it's for very privacy minded folks, which I think is really great.
And so the cookie based idea is a really interesting thought experiment, but of course, it's useful to take this into consideration. So perhaps, instead of doing cookie based tracking, we can do something that's a bit more real time. Previously, we did things at build time. Anytime the build runs, it goes through the analytics, it checks to see that there's some kind of user movement from one page to another, and it checks certainty. That all happens at build time, so it's not updating based on user actions in any way whatsoever. And so we can move towards something more real time. Maybe we won't be able to get as many pieces of data as to where a user went two steps ahead, but perhaps, as a user is navigating through, we can update the page and prefetch things in real time. And so for this, instead of doing things at build time, we're going to be using serverless functions. Again, hold your criticism, I know there's cold start issues.
But again, it's a thought experiment.
I'm just gonna to hide behind that.
We have a module that exports an async function and some axios calls, whatever.
But we're going to move it to an exports.handler, and that's generally how you would do serverless. If you use AWS Lambda, Netlify Functions, and so on, you essentially export a handler, and then you have a return callback in which you pass the data out.
So that is essentially all that you're doing. And so this is...
what that looks like.
So this is my serverless function.
And it's giving me just a string of JSON.
And so I have that in real time, and I can use that data however I want, which is really nice.
And so that gives me access to the threshold. I mentioned the idea of the threshold a little earlier. It's a bit hard-coded in that I put it at 0.5, so at 50% certainty, always prefetch.
But we can make decisions as to how to prefetch depending on the user's bandwidth.
So when we run things in real time, we have access to whether or not a user is on a low end device, a high end device, what kind of bandwidth they're on, whether they're on data saver mode or not.
And so for instance, if someone's on a slow 2G network, we might want to increase the threshold at which we will prefetch anything.
And in terms of the recommended approach for prefetching, we might want to use DNS prefetch, or at most link prefetch, because again, we know that bandwidth is really expensive when you're on a slow 2G network.
Whereas if someone's on a 4G network or a 5G network, anything above that, we might want to reduce the threshold because we can actually do a lot more with user experience, because we have a lot more bandwidth to work with.
And so we can do things like link prefetch, or even go one step further and prerender if we feel like it. But again, you might want to consider whether that actually improves perceived performance in any way. And of course, data saver mode is something you can also check: if someone's in data saver mode, never prefetch. Because I use data saver mode, and I would prefer if nobody prefetched things, because it eats bandwidth.
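The decision logic above could be sketched as a small pure function. In the browser, the connection information comes from `navigator.connection` (the Network Information API's `effectiveType` and `saveData` fields); here they're passed in as plain parameters so the logic is easy to follow. The specific thresholds and cutoffs are illustrative assumptions of mine, not values from the talk.

```javascript
// Pick a prefetch strategy from the prediction probability and the
// user's connection. In a browser you'd read effectiveType and
// saveData from navigator.connection; the thresholds below are
// arbitrary examples of "raise the bar on slow networks, lower it
// on fast ones."
function chooseStrategy(probability, { effectiveType, saveData }) {
  // Data saver mode: never prefetch, it eats the user's bandwidth.
  if (saveData) return "none";

  // Slow networks: bandwidth is expensive, so demand high certainty
  // and use at most a cheap dns-prefetch.
  if (effectiveType === "slow-2g" || effectiveType === "2g") {
    return probability > 0.8 ? "dns-prefetch" : "none";
  }

  if (effectiveType === "3g") {
    return probability > 0.5 ? "prefetch" : "dns-prefetch";
  }

  // 4g and up: lower the threshold; prerender only near-certain paths.
  if (probability > 0.9) return "prerender";
  return probability > 0.3 ? "prefetch" : "dns-prefetch";
}
```

So the same 60% prediction might trigger a link prefetch on 4G but nothing at all on slow 2G or in data saver mode.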
Like Instagram eats my bandwidth.
And so again, it's worth looking at this table that was created by (murmurs) in his talk.
It's really useful.
Just because it gives you a sense of how exactly to make decisions when you're figuring out whether to use DNS prefetch, link prefetch, or prerendering. You kind of know, generally speaking, which is better, but it's always nice to have a sense, based on data, of whether or not it makes a difference.
To close, it is useful for you to know your users and understand whether or not to prefetch things.
So for example, if you are here in the Netherlands and you are running an open source website, you might want to load all the condiments available. But if you are an American site, you might want to only load two things, because Americans are pretty basic; they only want ketchup and mayo. (laughs) So hopefully that gives you a sense of how to prefetch. Thank you so much for your attention.
The slides are available, and so is the code. (audience applauds) - [Tammy] That was so great.
We're still going to do it.
We'll do one or two questions before the lunch break, just because we had so many questions come in. - [Divya] Yeah.
- [Tammy] We absolutely can't do them all.
For anybody who has to leave right now, just know that we are coming back after the lunch break at 13:30. That's 1:30 for the North Americans.
(laughs) Myself included.
Okay, so we'll just ask the Netlify-related questions. Somebody wanted to know if Netlify Analytics is getting an API so they can do predictive prefetching in projects hosted on Netlify? - Yeah, that's a really good question.
So we've talked about this a lot.
I think currently, there is a way for you to access the API through the network tab.
So there are lots of people who've created their own. I think Raymond Camden, and maybe Brian Rinaldi as well, essentially created a way for you to do a fetch to Netlify's analytics. Because if you use Analytics, you can see the API requests in the request itself, and so you can sort of do it on your own.
I wouldn't, I'm technically not supposed to say that. - (laughs) Don't tell anybody.
- I mean, you could if you wanted to because that information is available.
I'm not 100% sure if we are going to release the API so that people can use it.
Because I think in general, we want to build the dashboard to be as rich as possible. I've been pushing for potentially releasing that API, because I think it's useful for people who use Analytics to have access to the dashboard and to the raw data as well. Because specifically, when I was doing my examples, I was like, "I need that data, but I can't use it." So yeah, I think it's something that is potentially on the roadmap, but I don't know what the priority is in terms of whether or not that will...
- You don't get to make those decisions.
So this is kind of a more high level question. Somebody, it's kind of a bit more of a statement than a question.
No, it ends with a question.
Prefetch sounds more like a workaround than a solution. Speculating on a user's actions can waste resources: CPU cycles, network bandwidth.
You touched on that near the end of your talk. So the question, I guess is, why not focus on fixing the web instead? Just gonna lob that over to you.
- Yeah, that's a really good question.
I don't work at Google so I don't know.
Prefetching is like... so essentially my talk was, just because you have the ability to do these things doesn't mean that you should do them.
I think it's nice to be able to do it, but there are so many other performance optimizations you can make in order for your website to be fast. And of course, we can talk at length about how we want the web to be better.
But I don't know how much effect that has. You can shitpost on Twitter, I guess, and the Chrome team will read it.
I did that at CDS and yeah, got a bunch of replies. - But I agree with you.
I mean, basically, you could say that it's a band-aid solution.
But really what it is, it's a tool, it's a solution for the web, as it is, not the web as we would all really love it. - Right. Exactly.
I mean, if the web was how it wanted to be, you wouldn't have to do this, right? - Yeah, so someday down the road.
We make everything perfect.
and then we don't have to prefetch anymore. Do we have time for any more questions? Okay, awesome.
So one question was, do you have any experience using the Network Information API to reduce the risk of bandwidth consumption? - I do not, personally.
Yeah, I can't answer that question 'cause I don't have, a lot of experience with that. - Okay, here's one.
Wondering if prefetching has the potential for biasing the audience, in the sense that you're improving the user experience for people who follow the main journey, but possibly decreasing the user experience for people who don't? - Yeah, I think that's a general fear whenever you run things through any AI or intelligent algorithm of any form, because there's always bias involved, where the majority wins.
Unfortunately, I don't know if that's like a good answer to it.
Because, in a sense, like, why would you want to prefetch for users that don't go to a specific...
Like yeah, I would encourage everybody - Just prefetch everything all the time.
- Yeah, 'cause you need to be able to prioritize what to prefetch, and so the general flow would be to do the majority. I mean, again, I'm going to go back to this cookie thing, I know, it's terrible.
But if you had access to some sense of... let's assume that a user has logged in, and there is session data associated. You might have a basic understanding of what the user cares about, perhaps not anything that ties to them specifically or whatever.
But that could give you a sense of how to make better predictions, rather than just doing it based on the majority.
But yeah, I feel like that's most of how the world works, where it's just optimized for the majority. Like, I'm short, so everything is just very high.
And I have to like stand on a ladder, and I learned to do that.
(both laugh) That's how the world works.
I'm really sorry if you are...
- (laughs) You don't have control over that either. - I know.
I mean, like, I would like to reach things, and I would like things to be shorter, but they're not. - Somebody is asking if there is an ML service, I'm afraid I don't know what that means, what ML means. - Oh, machine learning.
- Machine learning, okay yeah.
Where you can put your data in and add to your website for prefetching your pages.
- Yeah, so like, technically GuessJS does a bunch of like ML stuff.
So if you use their plugin... I'm pretty sure their ML is pretty naive, 'cause I looked at the code, and I'm pretty sure the stuff I'm doing is what they're doing too.
So off the top of my head, I don't know of any open source tools that are doing that. I read a bunch of posts. I think Instart Logic is one company that talked a lot about intelligent fetching, and they've written an ML model, essentially, that they can train using data.
- (coughs) Sorry.
- I know ML is just a terrible subject.
(both laugh) - It's making me cough.
- But yeah, I don't know of any open source tool. I'm sure you could try using TensorFlow and like Markov chains and TensorFlow, but you don't really need it.
I don't know.
- All right.
Well, on that note, and before I really get started coughing, thank you very much, Divya, for being here. I think you're gonna have a lot more questions during the break.
(Tammy coughs) And yeah, we're back at 1:30