It is WWW (Is it though?)
Introduction
Armağan introduces herself and the topic of the talk, which explores the ethical ramifications of technology choices, particularly in web development, focusing on performance and accessibility.
Technology Choices and Ethical Ramifications
Armağan highlights the impact of technology choices on user experience, emphasizing that they can have real-world consequences. She references Eric Bailey's blog post, "Modern Health, Frameworks, Performance, and Harm," as a key inspiration.
Web Performance is Variable
The talk delves into the multifaceted nature of web performance, encompassing both physical factors like internet speed and device capabilities, and psychological factors like user expectations and the perceived value of the content.
Web Performance is Not a Checklist
Armağan argues against relying solely on performance checklists and tools. She stresses the importance of empathy in understanding how users experience performance, advocating for tailored solutions over generic approaches.
The Impact of JavaScript on Web Performance
JavaScript's significant role in web development and its impact on performance take center stage. Armağan explains how JavaScript's CPU-intensive nature can lead to performance bottlenecks, especially on less powerful devices.
Understanding and Reducing JavaScript Bundles
The talk provides practical advice on analyzing and optimizing JavaScript bundles. Armağan discusses the prevalence of unused JavaScript and highlights the benefits of reducing bundle sizes for faster loading and a better user experience.
Code Splitting for Optimized Loading
Code splitting is presented as a key technique for improving web performance. Armağan explains how it works, its benefits in reducing initial load times, and provides practical examples of its implementation.
Metrics for Measuring Performance
The talk introduces important web performance metrics like First Contentful Paint (FCP) and Largest Contentful Paint (LCP), explaining how these metrics help measure and improve the user experience.
Rendering Strategies: Client-Side vs. Server-Side
Armağan compares and contrasts client-side rendering (CSR) and server-side rendering (SSR), highlighting the advantages of SSR for delivering content quickly and providing a more inclusive experience for users with varying device capabilities.
Hydration and Interactivity
The talk explores the concept of hydration in web development, explaining how it makes server-rendered content interactive. Armağan discusses the importance of optimizing hydration strategies and minimizing JavaScript execution for improved interactivity.
Leveraging Native Solutions
Armağan advocates for using native HTML and CSS solutions whenever possible instead of relying heavily on JavaScript frameworks. She emphasizes the performance and accessibility benefits of this approach.
The Popover API: A Case for Native Solutions
The new Popover API serves as a prime example of the power of native solutions. Armağan details its accessibility features and how it simplifies the implementation of popovers, tooltips, and similar UI elements.
No JS for Graceful Degradation
Armağan challenges the audience to consider how applications behave with no JavaScript at all, to ensure graceful degradation. She emphasizes the importance of providing core functionality even when JavaScript is disabled or fails to load.
Progressive Enhancement and Graceful Degradation
The talk defines and differentiates between progressive enhancement and graceful degradation, two important approaches for building inclusive web experiences that cater to users with varying levels of technology access.
Practical Practices for Performance-Minded Development
Armağan concludes by offering practical advice for integrating performance considerations into the development workflow. This includes integrating performance budgets, code review practices, and regular testing and monitoring.
Hi everyone, I'm so pleased to be here, at the Web Directions conference for the first time as a speaker.
So thanks so much for coming to my talk.
So before we start, I'd like to mention that you can access the slides, the captions, and the links I reference throughout my talk from this QR code.
It goes to my GitHub repo, unsurprisingly, and the slides are also published on GitHub Pages.
You can find that link in the GitHub repo description as well.
And I'll share the QR code after the presentation too.
Technology choices have ethical ramifications.
This quote is from a blog post called "Modern Health, Frameworks, Performance, and Harm."
And it's written by my brilliant colleague, Eric Bailey, who is a Senior Accessibility Designer at GitHub.
The post mainly discusses how technology choices impact user experience and can even result in harm.
I still remember how eye-opening I found it when I read the blog post.
And I wanted to share it with every single developer I know, because it was making great points.
And after all, Eric's post inspired me to give this talk.
I encourage you to read it in Eric's own words and with his own emotions,
which are very well reflected in his writing.
But today, I'd like to talk about one of the main takeaways I got from the post.
The choices we make every day as technology makers have a direct impact on someone's quality of life.
And today, I'd like to specifically focus on how our choices impact the performance of the web applications we are building today.
And how does that really reflect on our users?
Are we building applications, that can be used by anyone, however they encounter it?
Or do we create inaccessible applications and experiences because of our quote unquote not so intentional choices?
How worldwide are the applications we are building today?
To start with, web performance is variable.
It depends on the internet speed, the bandwidth, and latency, because they all determine how soon the user can download resources to their device.
It's also highly dependent on device capabilities, because after all, the downloaded resources will need to be processed on their devices as well.
And speed and device capabilities vary from network to network, device to device, user to user.
And I'd like to think that these are the physical components of the web performance.
On the other hand, there are psychological factors that impact how users perceive the performance.
It's more like how fast it feels to them.
And it depends on quite a lot of things too.
For example, what are your users' expectations?
Do they just want to quickly grab a restaurant's phone number, or do they want to order food online?
It's reasonable to expect a longer waiting time when you are ordering food than when you just want to look up a restaurant's phone number.
And same goes the other way around.
What value are we providing?
If we are rendering a data-intensive page, and the users know this, they will be more willing to wait for it.
It's also important to recognize that the content we are delivering impacts how users perceive the performance.
For example, say our website is a platform to order party supplies, and most of the time it is safe to assume that people are in a happy place and they're just shopping.
But if you're developing a health platform where people check their medical results, I think it's important to empathize with the fact that they're probably in an anxious state.
And when we are anxious, waiting times feel longer.
So when thinking about all these components that impact web performance and how our users perceive the waiting times, I can't help but think that web performance is not a checklist, and we can't solve everything with just checklists.
There are great tools out there to help us understand the performance of our applications.
They measure it by simulating internet speeds and devices.
They provide great resources to improve even specific metrics, but how can they really take users' mental state into account, or the content and the value we are delivering?
So for delivering truly inclusive applications, it is crucial to develop empathy for our users.
And I appreciate that building empathy for our users is not as straightforward a piece of development advice as "write tests".
But I think, coming back to how we started, being intentional about our technology choices and solving problems in a way that is specific to our users and our content, rather than applying a generic solution we find on the internet, is all part of building empathy.
Yeah.
Where do we start?
Web performance is already an overwhelming, complex topic, and now we are talking about building empathy and looking at how it impacts web accessibility.
It's, a lot.
I get it.
So how about we start with JavaScript?
Why JavaScript?
Because we as developers love it and we use it like a lot.
So here's a graph that shows the distribution of the request by content type.
So JavaScript comes second, and at the median there are 22 JavaScript requests on desktop and 21 requests on mobile per page.
Those are huge numbers that need a bit of attention.
So another reason that I wanted to focus on JavaScript today is because JavaScript is expensive.
It's expensive compared to other resources.
What does it mean?
Executing JavaScript demands more computational resources because it's a CPU bound task.
And what that means is that the time that the task is completed depends on the CPU performance.
And this task can be challenging, and even problematic, on mobile and low-end devices due to their lower CPU speed.
It's also important to look at its impact on the main thread.
The main thread is where the browser processes various tasks such as parsing HTML, executing JavaScript, rendering the page, and handling user input.
And by default, browsers use a single thread to run all the JavaScript on the page, unless we intentionally use web or service workers.
This means that having many scripts running can cause delays in event processing and rendering, and that will be directly reflected in the user experience.
To have a responsive user experience, it is important to minimize the main thread's workload.
The less JavaScript is sent, the lighter the burden on the main thread.
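As a side note, here's a minimal sketch of what moving work off the main thread can look like with a Web Worker; the file names and the UI helper are made up for illustration.

```js
// main.js: hand a CPU-heavy calculation to a worker so the main thread
// stays free for rendering and user input.
const worker = new Worker('heavy-work.js');
worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
worker.onmessage = (event) => {
  updateTotalOnPage(event.data.total); // hypothetical UI helper
};

// heavy-work.js: runs on its own thread, not the main one.
onmessage = (event) => {
  const total = event.data.numbers.reduce((sum, n) => sum + n, 0);
  postMessage({ total });
};
```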
One of the first things that we can do to better understand our JavaScript consumption is looking at our bundles and seeing what is in them.
There are great tools out there to visualize the bundles, to understand the content of it.
They really help in finding out what takes up most of the space in the bundle.
Is there anything that is not used or got there unintentionally?
And I think overall, these resources provide a great start for optimization.
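For instance, if you happen to use Webpack, a sketch like this (adapt it to your own setup) wires in the webpack-bundle-analyzer plugin to produce that kind of visualization; similar tools exist for Rollup and Vite.

```js
// webpack.config.js: adds an interactive treemap of what ends up in the bundle.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...your existing entry, output, loaders, etc.
  plugins: [new BundleAnalyzerPlugin()],
};
```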
So what is interesting is that more often than not, you might find that there are unused modules in your bundle.
So here is another graph that shows the distribution of unused JavaScript.
The median mobile page loads 162 kilobytes of unused JavaScript.
And all of these numbers represent a very large amount of unused JavaScript, especially when we consider that this analysis is actually showing the compressed size.
It's the transfer size of the JavaScript.
So when we actually decompress it, the unused JavaScript may be a lot larger than what the chart suggests.
And when it's contrasted with the total number of bytes for mobile pages at the median, unused JavaScript accounts for 35 percent of all loaded scripts.
I think this is fascinating to know.
This data might be interpreted in multiple ways, but one way could be that some pages are loading scripts that may not be used on the current page, or are only triggered by user interactions later in the lifecycle, or maybe got into the bundle unintentionally, which is not rare.
So coming back to our original angle and looking at performance through an accessibility lens, what does an unused and excessive amount of JavaScript actually mean for our users?
First and foremost, downloading unused and unnecessary JavaScript wastes data and can even be a barrier for mobile users who do not have unlimited data plans.
And executing it will be slower, and so much worse, on less capable devices or for those who are using assistive technologies.
And the use of excessive JavaScript resources, let alone slowing down execution, can even leave entire groups of people out.
I think these are significant ramifications that need our urgent attention, starting with a look at our JavaScript bundles.
So one way to lower JavaScript load intentionally is using a technique called code splitting.
So code splitting is a technique that aims to split large bundles into smaller ones.
It leverages dynamic imports so that we can lazily load our components, and our JavaScript, when we need them, rather than including everything in one giant bundle.
And it's supported by modern bundlers like Webpack and Rollup.
And the main motivation behind this technique is reducing the initial bundle size and minimizing the startup time.
So code splitting is a good alternative for optimizing statically imported components.
Here's an example of a very simplified time-tracking app that I wrote, which shows logged times in a chosen week with the option to customize the graph styles.
And in this example, we load our components statically.
And especially if you pay attention to the pop-up that displays a variety of graph styles, this also comes in our initial bundle, but we don't actually need it until our user decides to change their graph style, and they might not change it at all.
So there are various ways to split the code.
For example, we can do route-based code splitting and load the bundle when the page is requested, or use an intersection observer to load components when they actually become visible in the viewport.
Or like in this example we can load them when the user interacts with them.
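As a rough sketch of that interaction-based approach, assuming a React setup like the one in my example (the component and file names are invented), it could look something like this:

```jsx
import { lazy, Suspense, useState } from 'react';

// The graph style picker is split into its own chunk via a dynamic import.
const GraphStylePicker = lazy(() => import('./GraphStylePicker'));

function TimeTracker() {
  const [showPicker, setShowPicker] = useState(false);
  return (
    <>
      <WeeklyGraph /> {/* hypothetical component in the initial bundle */}
      <button onClick={() => setShowPicker(true)}>Customize graph</button>
      {showPicker && (
        <Suspense fallback={<p>Loading styles…</p>}>
          {/* this chunk is only downloaded after the user asks for it */}
          <GraphStylePicker />
        </Suspense>
      )}
    </>
  );
}
```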
So reducing the initial bundle size with the code splitting helps a lot with responding to our users quickly.
Because, after all, there will be fewer resources to download and execute in the initial load.
And this way, we will be able to provide some sort of content to our users earlier in the lifecycle.
And it also gives our users the assurance that something is happening.
It's important because the longer they are left with a blank screen or a spinner, the more uncertain they feel.
Because they don't know what is happening; they don't even know whether something is happening.
And similar to being anxious, when we are uncertain, the waiting times feel longer.
So if we look at tangible ways to measure how soon we render our first content, we can reference the FCP metric, First Contentful Paint, which measures the time from when the page starts loading to when any part of the page's content is rendered on the screen.
So responding quickly and rendering the first content is a great start, but it is very unlikely that this is what our users came to our page for.
So we need to render the rest of the content, especially our main content, as soon as possible.
And I appreciate that it's a difficult task for browsers to know what the main content of our application is, and there have been various metrics provided and suggested by the Google Chrome team in the past, but these days they recommend LCP, the Largest Contentful Paint, which measures the time from when the page starts loading to when the largest element is rendered.
So how soon can we render the main content, and how do we make sure that we render our largest contentful element earlier in the lifecycle of our page?
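If you want to observe these metrics in the field yourself, a minimal sketch with the browser's PerformanceObserver API could look like this (logging to the console just for illustration):

```js
// First Contentful Paint
new PerformanceObserver((list) => {
  for (const entry of list.getEntriesByName('first-contentful-paint')) {
    console.log('FCP:', entry.startTime);
  }
}).observe({ type: 'paint', buffered: true });

// Largest Contentful Paint: the last entry reported is the final candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate:', entries[entries.length - 1].startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```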
These days it depends on a lot of things, but it is also heavily influenced by how we choose to render our application.
So let's have a look at one of the most popular rendering patterns, client side rendering.
So the server sends the response, and then our response looks something like this.
Essentially, ReactDOM's createRoot and render, with our application inside a fancy React script block.
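In other words, the response boils down to something like this minimal sketch: an almost empty document plus a script that builds the whole page with React.

```jsx
// main.jsx: everything the user sees is created on their device by JavaScript.
import { createRoot } from 'react-dom/client';
import App from './App';

createRoot(document.getElementById('root')).render(<App />);
```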
So we are rendering our entire application with JavaScript.
Is this bad?
Let's see the rest of the timeline.
After the browser receives the response, it downloads the JavaScript, and then it executes the JavaScript, then our application will be viewable.
And because executing JavaScript is a CPU-bound task, the time between the browser receiving the response and the page becoming viewable will entirely depend on how good the user's device is.
This doesn't seem like a very inclusive experience to me.
And if we take this further and look at another case: the server sends the response, the browser downloads the JavaScript, and something happens, and the browser fails to load the JavaScript.
Then we pretty much end up with a broken, blank, useless page.
And I can't help but think you had one job, like literally just rendering the page.
And we are not even trying to make it interactive.
It's just viewable.
And we failed to provide it to our users.
And we have no idea what kind of inconvenience we caused them.
Is there an alternative?
With a different rendering strategy, we can eliminate the possibility of failing to render our page.
And that is generating our HTML on the server.
Let's see how it's different from the client side rendering.
So again, the server sends a response, but this time our response looks more like this.
It returns the opening HTML tags and then the rest of our rendered markup.
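As a minimal sketch of that idea, assuming an Express server and React's renderToString (the route and markup here are illustrative), the server might do something like this:

```jsx
// server.jsx: the HTML is generated here, so the browser can render it right away.
import express from 'express';
import { renderToString } from 'react-dom/server';
import App from './App';

const app = express();

app.get('/', (req, res) => {
  const markup = renderToString(<App />);
  res.send(`<!DOCTYPE html><html><body><div id="root">${markup}</div></body></html>`);
});

app.listen(3000);
```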
So after the browser receives the response, which includes our pre-generated static HTML, the browser parses it and renders it, and our page becomes viewable way earlier in the lifecycle,
compared to client-side rendering.
So that's a huge gain for our users because they will receive the main content quickly.
And it's a compassionate practice, because we don't leave them long in uncertainty with blank screens or spinners.
And also, because we are taking care of rendering our page on our servers, this takes us a step further toward providing a more equal and inclusive experience.
Because how soon our application becomes accessible to our users won't depend entirely on their device capabilities anymore.
So server side rendering really helps with making the page viewable as early as possible.
But this doesn't really solve our entire problem.
Because we want to build more interactive and rich user interfaces for our users.
So to be able to do that, we still have some JavaScript to run on the client side.
And there's a stage for that, called hydration.
It mainly means attaching DOM events to the static HTML that we generated on our server, to make the page interactive.
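With React, for example, that stage is a one-liner on the client; a minimal sketch (assuming the server rendered into a #root element) looks like this:

```jsx
// client.jsx: attach event listeners to the server-rendered HTML instead of
// rebuilding the DOM from scratch.
import { hydrateRoot } from 'react-dom/client';
import App from './App';

hydrateRoot(document.getElementById('root'), <App />);
```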
It's crucial for our users to be able to view the page as early as possible, regardless of how powerful their devices are, but we also need to be mindful about the time between our page looks ready and our page is interactive.
And there are a couple of metrics, that specifically track interactivity.
TTI is a helpful one to refer to, to understand when our applications become responsive and interactive.
So TTI, Time to Interactive, measures the time from when the page starts loading to when its main sub-resources have loaded and it is capable of reliably responding to user input quickly.
And the time that our application becomes interactive depends on the hydration strategy we choose.
Do we want it to fully hydrate or do we want to progressively hydrate?
There are quite a lot of patterns out there.
It's overwhelming.
And it totally depends on the application, the state of your applications and quite a lot of things.
But also that's a whole other topic.
One thing that doesn't really depend on the type of the application though is the amount of JavaScript that needs to be hydrated.
And this brings me to my next point.
So is less JavaScript possible?
Yeah, surely.
But what I mean here is that, is there a way to achieve the same level of interactivity with less JavaScript?
I wish I had some like drum rolls here.
But funny enough, the solution is not another futuristic library that takes up less space in the bundle.
It's actually quite the opposite: going back to basics and using native solutions.
Although it sounds very intuitive, I don't think we leverage native solutions as much as we should.
And most of the time, opting into native solutions won't only help with your JavaScript bundle size; it will also bring a lot of benefits that make your page accessible out of the box.
And there have been really cool releases recently in the web API and CSS space.
If you had a chance to watch the "What's new in CSS in 2023" talk yesterday, Steve talked about really cool CSS features like trigonometric functions, :has(), and container queries.
You should definitely check out the recording if you missed the talk.
So one of the recent releases, that I was actually very excited about is the popover API.
So popovers are super common, and we see them a lot in tooltips, menus, and dialogs.
And even though they're super common, building them is still a lot of manual work.
You need to add scripting to manage the focus, the open and close states, and keyboard bindings to enter and exit the popover experience.
It's a lot of manual work, but with the new Popover API, a lot of the benefits really come out of the box.
So one of them is the default focus management.
So when opening the popover, the next tab goes inside the popover.
And then there's the light-dismiss functionality: if you click outside, the popover will close.
And then the focus is going to be back to the trigger element.
And similarly, with accessible keyboard bindings, pressing the Escape key will close the popover and return the focus to the trigger element again.
No more globally listening for Escape key presses.
And last but not least, promotion to top layer.
So popover will appear on a separate layer above the rest of the page.
So rest in peace, z-index.
And the popover example comes from my favorite design system, Primer, GitHub's design system, which I feel very lucky to be a part of.
We recently refactored our React Tooltip component to use the new Popover API, and I witnessed all of the benefits of this native solution first-hand.
And shout out to my colleague, Keith Cirkle, who tremendously helped me with this work.
And the browser support is pretty good too.
Firefox is still catching up, behind the flag, but it's coming soon.
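To give a rough idea of what it looks like in practice, here's a hypothetical sketch in JSX, not the actual Primer component; the popover and popovertarget attributes do all the work, with no extra scripting for focus, dismissal, or Escape handling. (Depending on your React version, the attribute may need to be written as popoverTarget.)

```jsx
function GraphStyleMenu() {
  return (
    <>
      <button popovertarget="graph-styles">Change graph style</button>
      <div id="graph-styles" popover="auto">
        {/* focus handling, light dismiss, Escape, and top-layer promotion
            all come from the browser */}
        <button>Bar chart</button>
        <button>Line chart</button>
      </div>
    </>
  );
}
```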
Okay.
I'm just going to take a water break.
My next slide is about no JS.
So yeah.
How about no JS?
I know it doesn't sound very practical, especially for rich user interfaces, but hear me out.
I think we should care about how our applications behave if JavaScript, fails to execute.
I believe it would be an inclusive experience to provide at least the main content and functionality of our applications, even if JavaScript is disabled or fails to load.
For example, if my application is for making doctor appointments, I think my users should be able to fill out the appointment form and submit it.
And this is possible without JavaScript.
Forms and links can definitely be enhanced with JavaScript, but preferably should function without it.
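As a small sketch of that idea (the field and endpoint are invented), a server-rendered form that submits with a plain HTML POST keeps working even if no client-side JavaScript ever runs:

```jsx
function AppointmentForm() {
  return (
    <form method="post" action="/appointments">
      <label htmlFor="date">Preferred date</label>
      <input id="date" name="date" type="date" required />
      <button type="submit">Book appointment</button>
    </form>
  );
}
```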
There are common industry patterns that support this purpose, and they are progressive enhancement and graceful degradation.
In a nutshell, progressive enhancement aims to provide a baseline of essential functionality and the content, to as many users as possible while delivering enhanced experiences to users who have access to more modern devices and browsers.
On the other hand, graceful degradation is centered around modern web applications and rich user interfaces that work on newer devices and browsers, but still fall back to the essential content and functionality in older browsers and on low-end devices.
Both strategies are great, but which one to use totally depends on the case and the state of the application.
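As one small, hypothetical example of progressive enhancement, JavaScript could intercept the appointment form above when it does load and upgrade the native POST to an in-page request, while the plain submission remains the fallback:

```js
const form = document.querySelector('form');

form.addEventListener('submit', async (event) => {
  event.preventDefault(); // only reached if this script actually loaded
  const response = await fetch(form.action, {
    method: form.method,
    body: new FormData(form),
  });
  if (response.ok) {
    showConfirmation(); // hypothetical in-page success message
  }
});
```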
So inclusive web applications don't become that way overnight.
We need to put in the work and consistently show up every single day for change.
So here are a couple of practices that I found helpful, and that might be helpful for you as well, to develop with performance in mind.
Check in at pull request time.
Integrating an automated job that reports the bundle size in your CI can be helpful as a regular reminder.
For example, we use a GitHub Action for this in my team, and it reports the bundle size every time a new PR is created.
Maybe you can't practically take an action every single time, but it's really important to be aware of changes to the bundle size.
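One very simple way to stay aware of it, as a sketch rather than a recommendation of any particular tool, is a small Node script in CI that fails the build when the bundle crosses a budget you choose (the path and number here are made up):

```js
// check-bundle-size.js
const { statSync } = require('fs');

const BUDGET_BYTES = 170 * 1024; // example budget, adjust for your app
const size = statSync('dist/main.js').size;

if (size > BUDGET_BYTES) {
  console.error(`Bundle is ${size} bytes, over the ${BUDGET_BYTES}-byte budget.`);
  process.exit(1);
}
console.log(`Bundle size OK: ${size} bytes.`);
```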
And there are even more advanced tools out there that produce really good, detailed performance reports.
This one is from Calibre; they're an Australian company, and they do really cool stuff in this space.
Include performance in your code review.
This could be adding a merge checklist to your pull request to make sure we are not introducing any performance regressions.
And, ideally, that we are improving.
Or, as a code reviewer, you can add a couple of questions to your code review belt.
Is there maybe an alternative, smaller solution for this npm module?
Or when you see statically imported components, look for opportunities.
Is there any way we can load this later, when the user interacts with it, rather than including it in the bundle?
Educate yourself and your team.
If you're a manager, this is our web and this is our responsibility.
We've got to do the work, and test.
That means throttling the network, simulating devices, running Lighthouse regularly to check in, and reading through the recommended resources to see if there is anything to optimize.
So a quick recap, technology choices matter.
They have a power to shape how people experience the web.
Executing JavaScript is a CPU bound task.
Its performance is heavily dependent on device capabilities.
Be mindful of JavaScript consumption, because it doesn't come for free.
Check your bundle, what is in there.
Leverage code splitting for on-demand loading, and look for opportunities to lazily load components.
Opt into native solutions where possible.
HTML and CSS are way cooler than we know.
And aim for applications that function even when JavaScript fails.
Remember, no JS is the coolest JavaScript framework.
And develop with inclusivity in mind.
This is our web.
Thank you.