Learn How Russian Doll Caching Can Improve your SSR Web App

Hi, thank you for having me.

Today, I want to discuss a small topic, but I believe it is a very interesting one, because not only is it simple, it can also make a huge difference in your SSR application's performance.

So this technique is called “Russian Doll Caching”.

Maybe you know it already, maybe you haven't heard of it, or maybe you have already mastered it.

But I hope that in today's talk I can entertain you while also giving you some useful information that you can directly apply at work.

So let’s get started.

So I have a few agenda items today.

First, I want to tell you the story behind it; then I want to discuss N+1 queries and the importance of Russian Doll Caching, as well as how to implement it, using a very simple, quick demo.

And lastly, I want to discuss how this technique can be implemented in other frameworks and languages that I'm not showing today.

So let’s get started.

Back in 2020, I was interning at the company I work for now, called Mekari. This company is very interesting because it is a SaaS company.

And because it is SaaS, there is a lot of multitenancy; it serves a lot of organizations and businesses.

And I was interviewing there.

And then one of the interviewers, a principal engineer, asked me a very interesting question: “Do you know what N+1 queries are?”

And I said, well, I don’t know.

Maybe I have heard of it.

Maybe I know it, but under a different name, in different terms.

And he said, well, okay.

And the rest of the interview is good.

I could answer, I think, almost all of the questions correctly, and I was feeling good about it.

Maybe fortunately, maybe unfortunately, I got accepted.

Now I needed to maintain one of their products, the accounting product.

That accounting product is basically a majestic monolith; a majestic monolith is a huge monolith with a lot of functionality baked into a single application.

Like any other startup, initially you want to go fast, but eventually you want to improve your performance so that you can cater to all your customers.

Right.

And in that regard, what happened is there were a lot of slow queries, and everyone kept saying N+1: that's an N+1, I improved an N+1, it's all N+1. I felt like, boom, everything is N+1. But what is it, actually?

And is it really that bad?

It seemed that everyone was making improvements by removing N+1 queries, and they felt good about it.

Because I was new to this company, what I did was try to learn what N+1 actually is, until I understood it.

An N+1 query happens, for example, when you have a model called cars with an association called wheels, and you want to get all the wheels belonging to those cars.

What happens is you query all the cars, and then you query the wheels for each car.

That is N+1 queries: 1 query for the cars, plus N more queries, one per car, for its wheels.

To improve it, you can do two things. You can either do a join or a subquery, so you get the wheels associated with the cars directly in a single query, or you can do it in two queries: get the cars first, then get all the wheels with an IN clause, basically all the wheels whose car id is in that list.
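To make the counting concrete, here is a toy plain-Ruby sketch of the two approaches. `FakeDB` and its methods are invented purely for illustration; in a real Rails app, the batched shape is what `includes(:wheels)` produces.

```ruby
# A toy, in-memory sketch of the N+1 pattern; no database involved.
class FakeDB
  attr_reader :query_count

  def initialize
    @cars   = [{ id: 1 }, { id: 2 }, { id: 3 }]
    @wheels = @cars.flat_map { |c| Array.new(4) { { car_id: c[:id] } } }
    @query_count = 0
  end

  def all_cars
    @query_count += 1              # SELECT * FROM cars
    @cars
  end

  def wheels_for(car_id)
    @query_count += 1              # SELECT * FROM wheels WHERE car_id = ?
    @wheels.select { |w| w[:car_id] == car_id }
  end

  def wheels_for_cars(car_ids)
    @query_count += 1              # SELECT * FROM wheels WHERE car_id IN (...)
    @wheels.select { |w| car_ids.include?(w[:car_id]) }
  end
end

# N+1: one query for the cars, then one more per car.
db = FakeDB.new
db.all_cars.each { |car| db.wheels_for(car[:id]) }
puts db.query_count    # => 4 (1 + N, with N = 3 cars)

# Batched: two queries total, the shape ActiveRecord's `includes` produces.
db = FakeDB.new
cars = db.all_cars
db.wheels_for_cars(cars.map { |c| c[:id] })
puts db.query_count    # => 2
```

With 3 cars the difference is small, but the N+1 count grows linearly with the number of parent records, while the batched version stays at two queries.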

So that can happen, but I wasn't really sure that was the right way to solve it.

Because there is a dilemma, right?

If you want to make your application faster, you can reduce the number of calls by using what I call one big query.

But with one big query, if the query is large or does a lot of joins, that single query can become a bottleneck for your database, and if it's huge, it can hurt your performance.

On the other side, there is the N+1 approach, where you query every single thing separately; each individual query is very fast, but there are a lot of them.

So you need to balance between the two.

At this point, I was looking around for a solution.

And I found a great book called The Complete Guide to Rails Performance by Nate Berkopec.

This book basically helped calm my impostor syndrome, because I felt I was not that good as a tech lead, and I wanted to learn more.

Right?

But the very first thing I saw after buying the book was the bonus videos, and I watched them all.

The first video is DHH saying that N+1 is a feature in Rails.

Boom.

I felt like, boom.

Why do all my friends say N+1 is bad, when this guy, the creator of Rails himself, says N+1 is good?

I thought about it again and did a quick search, and it turns out there is a discussion on Twitter saying that the reason DHH can get two or three times the performance of the average company Nate Berkopec works with is Russian Doll Caching. And Russian Doll Caching utilizes the N+1 query, which is a feature in Rails.

That’s interesting.

So let’s take a look.

So what is Russian Doll Caching itself?

So Russian Doll Caching is a technique for maximizing cache hits.

Imagine you have nested components: the innermost component has a cache, the component wrapping it has a cache, the one wrapping that also has a cache, and so on.

The idea is to maximize cache hits by caching every layer.

Then, if only one record changes, that particular cache is invalidated, but the rest stays valid.

So how we do that?

So it turns out it is possible, and I'm going to show you today in a very simple demo in a Rails application.

So, let’s take a look.

This is a very simple implementation, a very simple app.

You can go to the GitHub link that I showed you previously.

I will also share the slides with the organizers for you to see, but the idea is: imagine you have a List model (a lists table), and a Todo model (a todos table) that belongs to the List model.

So I can have a list for my Web Directions preparation, or a list of, for example, all my work to-dos.

And then I can cache the innermost part, which is the todo, by using cache todo, caching the individual list items, and that works.

Now imagine I have all this data. On the first load, all the fragments are written and read.

When I reload a second time, only a couple of things are loaded, because everything else is already cached.

Right.

And next, I can cache not only the innermost part but also the outer part, using a very simple helper in Rails: when you render a partial, you can say that this partial is cached.

And then, when you want to invalidate a particular record, you want to make sure that the model itself touches its parent's updated_at on any update.

If I implement that, what happens is that the initial load still renders a lot of things, but the second time it becomes very, very fast, because everything is already in the cache.
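In Rails, that parent-touching is usually a one-line association option. A minimal sketch, assuming a `Todo` model that belongs to a `List` (adjust the names to your own schema):

```ruby
# app/models/todo.rb — sketch only; assumes a List model and a
# todos.list_id column.
class Todo < ApplicationRecord
  # `touch: true` bumps the parent List's `updated_at` whenever a Todo
  # is saved or destroyed, which changes the List's cache key and so
  # invalidates the outer cached fragment automatically.
  belongs_to :list, touch: true
end
```

This is a framework fragment rather than a standalone script; it relies on Rails' key-based cache expiration, where a record's cache key embeds its `updated_at`.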

And then there is a feature called multi-get in Rails, which basically reads all the cached fragments at once in a single round trip, instead of going to the cache store again and again for each fragment, because the whole page can now be represented by one outer partial.
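Rails exposes this multi-get as `Rails.cache.fetch_multi` (it is also what collection rendering with `cached: true` uses under the hood). A hedged sketch of the call shape, where `todos` and the `expensive_render_for` helper are placeholders, not real names from the demo:

```ruby
# Sketch only: reads all fragment keys from the cache store in one
# round trip; the block runs only for the keys that were missing.
keys = todos.map(&:cache_key)          # `todos` assumed to be AR records
Rails.cache.fetch_multi(*keys) do |missing_key|
  # regenerate the fragment for a key that was not in the cache
  expensive_render_for(missing_key)    # hypothetical helper
end
```

Again a framework fragment: it needs a running Rails app and a configured cache store, so treat it as the shape of the API rather than a runnable script.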

Right?

This is very interesting.

And the other thing I want to show you today is what happens when you update a model. If I update a particular record, for example the first to-do in our to-do app, and then reload, the cache has only one miss; the rest are still hits.

Okay.

And that's very good, because now we can maximize our hits and keep our cache warm, and we can make sure that our application stays fast throughout the day.
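The whole mechanism can be sketched in plain Ruby, without Rails. Everything below (`ToyCache`, the `version` counters standing in for `updated_at`, the `touch!` method) is invented for illustration; it only mimics Rails' key-based expiration.

```ruby
# A toy in-memory cache that counts hits and misses.
class ToyCache
  attr_reader :hits, :misses

  def initialize
    @store  = {}
    @hits   = 0
    @misses = 0
  end

  # Return the cached value for `key`, or run the block, store, and count a miss.
  def fetch(key)
    if @store.key?(key)
      @hits += 1
      @store[key]
    else
      @misses += 1
      @store[key] = yield
    end
  end
end

List = Struct.new(:id, :todos, :version) do
  def cache_key
    "lists/#{id}-v#{version}"   # version stands in for updated_at
  end
end

Todo = Struct.new(:id, :description, :version, :list) do
  # Like Rails' `touch: true`: editing a todo also bumps its parent list,
  # so both cache keys change and the old entries are simply never read again.
  def touch!
    self.version += 1
    list.version += 1
  end

  def cache_key
    "todos/#{id}-v#{version}"
  end
end

CACHE = ToyCache.new

def render_list(list)
  CACHE.fetch(list.cache_key) do                  # outer doll
    list.todos.map do |todo|
      CACHE.fetch(todo.cache_key) { "<li>#{todo.description}</li>" }  # inner dolls
    end.join
  end
end

list = List.new(1, [], 0)
list.todos = (1..3).map { |i| Todo.new(i, "task #{i}", 0, list) }

render_list(list)        # cold cache: 4 misses (1 list + 3 todos)
render_list(list)        # warm: a single hit on the outer fragment
list.todos.first.touch!  # edit one todo: its key and the list's key change
render_list(list)        # only 2 misses (edited todo + outer); 2 inner hits reused
```

Note how the third render pays for just the changed todo and the outer wrapper, while the untouched inner fragments are reused; that is the "only one doll is replaced" behavior from the demo.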

So that’s, that’s the interesting concept.

And the next part: this technique is not only available in Rails; it can be implemented in any application. I believe you can do it in Django, and I know somebody has done it in Laravel; it is a widely known technique in the SSR world.

And if you happen to use Rails, I think you can try this technique, and I hope it brings a huge gain to your performance.

That’s it for today.

I hope you enjoyed my talk. If you have any questions, feel free to ask; I believe there is a way to ask questions in the chat, and I'm happy to help.

Thank you.

Russian Doll Caching

When N+1 query is a feature

Overview

  1. A little story
  2. N+1 Queries
  3. Russian Doll Caching
  4. Reference

A Little Story

An Interview Question

Do you know N+1 queries?

Image of Dwight from the television program “The Office” overlaid with the question:
“Am I Ready for this Interview?”
“False. The Question is, is this Interview Ready For me?”

Getting Accepted

Maintaining Majestic Monolith

Mashup image of an Orca Whale overlaid with a cat’s head jumping out of the water with the caption:

RAILS 5 MONOLITH
SOOOOOOO MAJESTIC

This is Not JS

Optimizing UX when the app is a monolithic application.

Image of Buzz Lightyear and Sheriff Woody from the Toy Story animated film with the caption:

N+1 Queries
N+1 Queries Everywhere

N+1 Queries

Stupid Intro to N+1

SELECT * FROM Cars LIMIT 25;
SELECT * FROM Wheel WHERE CarId = ?
— OR
SELECT * FROM Wheel;

N+1 Dilemma

Stupid Intro to N+1 Cont.

IF WE QUERY A LOT OF DATA -> A SINGLE QUERY MIGHT BE VERY SLOW

IF WE SPLIT THE QUERY TO MULTI QUERIES -> THE DATABASE CALL WILL BE VERY HIGH

Image of a Chin-scratchy “hmmmm” emoji

Impostor Syndrome

This book is all I need

Image of Nate Berkopec’s book: “The Complete Guide to Rails Performance”

Screenshot of an online video with David Heinemeier Hansson in an interview with Nate Berkopec discussing Rails performance

Image of a Twitter thread between Nate Berkopec and David Heinemeier Hansson discussing Russian Doll Caching

Russian Doll Caching FTW!

Git REPO

empeje/russian-doll-caching

<% cache todo do %>
  <li><%= todo.description %></li>
<% end %>


List of “To do” items under various categories

Code demo for Russian Doll Cache




Animated code demo of the “To Do” lists from the earlier slide loading rapidly after Russian Doll Caching is applied


Not Only in Rails

– RUSSIAN DOLL IN LARAVEL
– RUSSIAN DOLL IN DJANGO
– RUSSIAN DOLL IN AWS
– YOUR NEW AWESOME FRAMEWORK USES RUSSIAN DOLL!