(upbeat electronic music) – Cool, hi, how is everyone? Good, yeah, who here has heard of the event loop before? Sweet, a few people.
And there’s a whole bunch of it sort of taking care of your HTML and the CSS and all of the rendering, the layout, the DOM and all of that kind of stuff.
Now while this bit of code is like nice and succinct, it probably raises a couple of questions.
Like, what’s a task? What’s a task queue? How do the tasks get in the task queue? And we’ll start with the first one there.
In the context of the event loop, we’ve got our parser here, it’s gonna create the task, it’s gonna put the task in the task queue and then the event loop’s gonna run it.
So there’s a couple of really important things here. First of all, the task queue, as the name suggests, is a queue.
Which means that things happen in the order that they arrive.
First in, first served.
So if we have a task and another task turns up, it has to wait for that first one to finish. So this is all well and good, but it does seem like a kind of complicated way to run code.
Because only one task ever runs at a time, there’s no concurrency locking, none of that crap, you don’t have to deal with it. So, obviously, that’s a good thing: we don’t have to deal with locking and synchronisation at all.
The second thing though, is that browsers, they’ve got heaps of threads, even though our JavaScript only runs on one of them. The task queue is how that can work. So, the way that that works is, say we’re calling a Web API, like setTimeout, and we’re passing in a callback function and a delay of three seconds. The browser can run that timer on a different thread, and when the three seconds are up, it creates a task for our callback and puts it in the task queue.
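In code, that setTimeout call looks something like this (the callback is just a made-up example):

```javascript
const log = [];

function sayHello() {
  log.push('hello');
}

// Hand the callback to the browser (or Node), which tracks the timer
// on another thread; after three seconds it puts sayHello on the task
// queue, and the event loop runs it like any other task.
setTimeout(sayHello, 3000);

log.push('end of script');
// 'end of script' always comes first: setTimeout only queues the task,
// it doesn't run the callback itself.
```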
Are we all happy with that? We’re done, cool.
Uh, no, it turns out, it’s a little bit more complicated than that. Because the task queue also operates in conjunction with the rendering pipeline. Now the rendering pipeline is responsible for everything that you see on the screen. So if you make changes to the DOM, your styling, layouts, all of that stuff, the rendering pipeline is gonna take care of it. And the rule is that a task runs, and once a task finishes, then the rendering pipeline can run.
It doesn’t always run though because browsers, they’re pretty clever, actually. And they don’t like to do any unnecessary work. And the browser knows that the screen is only refreshing, well on your average screen, probably 60 times a second.
This one is running at 50 Hertz, so 50 times a second, which works out to once every 20 milliseconds, right? The example one is gonna run at 60 Hertz, so once every 16-ish milliseconds. (laughs) So the browser’s not gonna bother re-laying out the whole page and everything, because that’s an expensive operation.
It’s only gonna do it if it’s time to repaint. So if we have a task running, that task, once that task finishes running, the rendering pipeline has the option to run but it’s gonna wait until that 16 milliseconds has passed. Obviously, sitting there doing nothing is a pretty dumb thing to do.
And like I said, browsers are smart.
So they’re actually going to run a couple of tasks, right? They’ll paint, and they’ll run through a few tasks, until the 16 milliseconds comes up.
One thing to remember though, is that tasks can’t be interrupted. So if a task is running when that 16 milliseconds hits, tough luck, the rendering has to wait: the task will finish and then the rendering pipeline will run.
If it’s a couple of milliseconds here and there, it’s no big deal.
But if you’ve got a lot of tasks that are running over that 16 milliseconds, or you’ve got a massive big task, that takes way longer than 16 milliseconds, your browser is gonna start dropping frames. And (laughs), it’s gonna start running a bit janky and it’s not gonna be a great user experience. (crowd laughing) The good news is, there’s ways to deal with this. One is to break your tasks up into smaller tasks. Which is something we’ve done here.
So this is a function that is just gonna run some kind of callback, some number of times.
So we could pass it and say, I want to run this callback 10,000 times.
Obviously that’s gonna take a really long time, and we don’t wanna block up the rendering for all of that time. So what we’re gonna do is run the callback once, and then create a new task using setTimeout, which is gonna call the same function again but say, run it 9,999 times. That one runs the callback once and creates a new task, and so on, so you end up with 10,000 different tasks. Which means that the browser has an opportunity to repaint after each of these tasks.
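A sketch of that pattern, with my own names for the function and the callback (the version on the slide works the same way):

```javascript
// Run `callback` `count` times, but do each call in its own task,
// so the rendering pipeline gets a chance to run between calls.
function repeat(callback, count) {
  if (count <= 0) return;
  callback();
  // setTimeout queues the rest of the work as a brand-new task.
  setTimeout(() => repeat(callback, count - 1), 0);
}

// e.g. repeat(drawSomething, 10000) becomes 10,000 separate tasks.
```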
This is one way to do it.
It’s kind of brittle, I mean it works in this example, assuming that your callback itself isn’t gonna take a long time.
It’s not the best way to do it though.
The best way to do it is to use a web worker. Which are available in all browsers, even like, shitty browsers like IE, you can use it. And they are absolutely guaranteed not to block up the rendering pipeline, hooray! So, how do we know that a web worker isn’t going to interfere with our rendering? It’s pretty simple.
If you’re anything like me and you spend your weekends reading specs, you’ve probably seen this quite, uh, clear bit of text here, which explains it simply, right? “Each WorkerGlobalScope object has a distinct event loop, separate from those used by units of related similar-ori…” yeah.
(laughs) what it’s saying is, each web worker has its own event loop. So, they’re essentially running, completely isolated from the browser window, so we’ve got stuff going on in the browser here, callbacks, a bunch of setTimeouts, whatever and then we’ve got stuff going on in the web worker, those are two separate event loops.
They’ve got two separate task queues and they’re just gonna run completely separately. I mean, they’re not completely isolated, they can talk to each other, ’cause otherwise it would be a bit pointless. You can send messages between them with postMessage.
But they don’t share any data structures.
So even if you pass data between them using postMessage, it’s, um, copied, not shared.
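You can see that copying with structuredClone, which (in modern browsers and Node 17+) exposes the same structured clone algorithm that postMessage uses under the hood:

```javascript
const original = { pixels: [1, 2, 3] };

// postMessage serialises data with the structured clone algorithm;
// structuredClone runs the same algorithm directly.
const received = structuredClone(original);

received.pixels.push(4);

// The copy changed, the original didn't: nothing is shared.
console.log(original.pixels); // [1, 2, 3]
console.log(received.pixels); // [1, 2, 3, 4]
```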
And web workers aren’t the only time that you might have separate event loops like this. Oh, sorry, the web worker! (laughs) The web worker event loop is a bit different to the browser event loop.
For a start, it’s got no user interactions, it’s got no rendering pipeline, and there’s no DOM, so, party! Um, yeah, so like I was saying, that’s not the only time that you would have separate event loops.
If you ever open your task manager when you’ve got a lot of tabs open in Chrome, you would see something like this.
So (laughs), that’s 40-something processes. And the reason for that is that Chrome opens every tab in its own process, right? And if it’s running in its own process, that means it has to have its own event loop, because it can’t share that data.
So that means that each tab is isolated from all the other tabs, which can be good for performance of the things running in the tabs and I guess helps with security.
They don’t have to do it this way though, if you opened all of the same tabs in Firefox you would get six processes.
So one is the main browser process, one is running extensions and up to four are running the tab contents. So if you’ve got more than five tabs open, then that means that some of those tabs are sharing a process, they’re sharing an event loop. Obviously, that can have implications for performance within the tabs but it does mean that you actually have some RAM left over for other stuff.
So generally speaking, it’s up to the browsers, and what the browser manufacturers think is gonna be the best user experience to decide whether they want to have this shared model or if they would rather keep everything isolated and separate.
I don’t know what is going on in that photo there (laughs). Um, there is an exception to that rule though. And that is things that are sharing a browsing context. So if you have something in an iframe, or you are running something in, like, a child window, those two things will share an event loop.
So, we’re gonna do that here, let me just make this a bit smaller. Right, so this button has the single purpose of making things run slowly. It’s just gonna run a while loop for five seconds, so it’s just gonna block up the rendering engine. You can see, if we click this, wait, let’s wait for the dinosaur to come back, not only does it jank up itself, it also janks up the dinosaur.
So he’s gonna stop for the five seconds while that’s running because these two things are sharing an event loop so it means that that page can’t keep rendering while this one is broken.
There is a good reason for that.
And that is that these guys share a data structure, right? If I make this a bit bigger, make it a bit bigger this way, ahh! Right, so we can go, window.opener.document, body, background, oh style, style, does anybody have a favourite CSS colour? – [Audience Member] Rebecca Purple.
– Rebecca Purple.
You can see that we can change the colour of the window, the main window.
The reason that the dinosaur animation background hasn’t changed is that it’s actually running in an iframe, but that turns out not to be a problem. (keyboard typing) Oh, what is it, document. contentDocument, document content, who can remember? No, it’s contentDocument, isn’t it? (crowd laughs) Alright, who else had a colour? – [Audience Member] Hot pink.
– Hot pink, oh I fucking used that (laughs), (crowd laughs) alright, window.opener.document, surely I have typed this before.
How about blue, did somebody say blue? Cool, blue (laughs).
(crowd laughs loudly) So you can see these two windows are obviously sharing this data structure here, so they need to be running on the same event loop. This might look like a terrible security risk; in reality, it’s not really, because these two things are running on the same origin and presumably, you’re not gonna try and hack your own site.
Up until earlier this year, there was a small security risk in that the child window could change the location of the parent window, I think all of the browsers have fixed that, though. So it’s not really a security risk.
But, it is a performance risk, right? You probably do wanna keep the windows isolated especially if you’re opening other peoples’ windows. The good news is, that there is a simple way to do that.
I’m just gonna change the background colour back (laughs). We can use this, rel="noopener". Now if we open our window again, we can see we can make things slow, and the dinosaur is still gonna happily go along. Was there somebody here from Mozilla? Yeah, can you explain why this dinosaur’s arms look like they’re broken? (crowd laughs) (audience member shouts) (loud laughing) Cool, so yeah, you can see now if we try to do that same thing as before, if we do window.opener, it’s just gonna come back null. ’Cause these two things are no longer sharing any data structures, they’re just doing their own thing.
Cool, so we make this big again.
And, here, so now our event loop looks something like this. It’s still an infinite loop, it’s gonna run forever, each turn of the loop, we’re gonna grab the first task off the task queue and we’re gonna run that task.
Then, if it’s time to repaint, we’ll repaint. Which is nice, but, it turns out a little bit more complicated than this. Because, again, if you spend your weekends reading specs, you would know, an event loop can have one or more task queues.
So at this point, I’m preparing the talk, and I thought, what I’ll do is I’ll go and have a look at one of those browsers that is open source, and I’ll have a look at how it implements the event loop, and I’ll see how many task queues it’s got and how it manages them. And, well, I have to level with you. C++, it turns out, is mostly punctuation and the word “delegate”. (crowd laughs) I had no idea what was going on in there.
So instead, we are gonna look at a theoretical browser with multiple task queues.
Uh, to be clear though, it is the example that is described in the spec, so it’s not like I just made it up. Somebody else did.
(crowd laughs) So here we have our event loop with two queues. This browser prioritises user input, so it’s got one queue here that is gonna take user input and the other queue is gonna have everything else. Oh, sweet.
And the rule is basically, it’s gonna run everything in that user input queue until it finishes and then it’ll run things in the other queue. You can see though, the rendering pipeline isn’t affected at all. It finishes a task in either queue and then it runs the rendering pipeline.
Um, yeah, I mean, it’s like the business class queue at the airport, right? Where the person will serve all the business class people, and then once all the business class people have gone, they’ll serve all of us economy-class plebs. So it’s pretty straightforward.
There are a couple of rules though, with multiple-task queues.
The first one, it’s not really a rule, it’s kind of the opposite of a rule.
But the queues can be executed in any order. So they could go, three from the first queue, four from the second queue, except on Tuesdays when they do it the other way around. It’s up to the browser how they implement that. The actual rules though, are that the task queues are still queues, right? So things that arrive in a queue still have to be executed in the order that they arrive, and tasks from the same source have to go in the same queue.
So you can have a queue for all of your timers, which is actually what Node does.
But that means that all of your setTimeout callbacks have to go in that same queue.
Cool, this probably isn’t going to affect you as a developer at all, it’s just nice to know, I guess.
But now this is what our event loop looks like. So it’s still an infinite loop, each turn of the loop we’re going to pick a queue and then we’re gonna take the first task off that queue and we’re gonna execute that task.
Then if it’s time to repaint, we’ll repaint. Sweet, turns out though, it’s a little bit more complicated than that (laughs). There’s also microtasks.
Does everybody remember The Land Before Time? – [Audience Members] Yeah. (laughs)
– It’s not on Netflix.
(crowd laughs) So microtasks are basically promise callbacks. Um, is anybody using mutation observers? Cool, so they’re also microtasks.
But yeah, most of the time when you’re talking about microtasks, it’s gonna be promises.
So, we’ve got our event loop here now.
The microtask queue there is in yellow just next to the rendering pipeline and we’ve got our script is gonna run, it’s gonna have a promise that resolves.
It will go in the microtask queue. Now the microtask queue is a bit special.
Um, it’s gonna run after each task finishes. So, if we have a task, then we have a microtask. The microtask is going to run.
Everything in the microtask queue is gonna run, so it’s gonna run all of those microtasks, and if more microtasks get added, it’s gonna run all of those too, and as you can see, the rendering pipeline has to wait. So, I’m sure you can imagine that that can cause some problems.
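A small example of that drain-the-queue behaviour (it runs the same way in Node):

```javascript
const order = [];

setTimeout(() => order.push('task'), 0);

Promise.resolve().then(() => {
  order.push('microtask 1');
  // A microtask queued from inside a microtask still runs before
  // the event loop moves on to the next task.
  Promise.resolve().then(() => order.push('microtask 2'));
});

order.push('script');

// Once the timer has fired, order is:
// ['script', 'microtask 1', 'microtask 2', 'task']
```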
So if we have a look at an example here, we start off with our tasks version.
So this is a function that’s just gonna generate tasks, infinitely. It’s essentially an infinite loop but with tasks. It’s similar to what we were talking about before about breaking the tasks up.
So we click start, it’s gonna run this function, it’s gonna find an element with an ID of count, and it’s gonna set the innerHTML of that element to the number of tasks that have run, and then it’s gonna create another task which is gonna do the same thing again.
So we do that, we can see, it starts counting up but everything is okay, even though we’re essentially running an infinite loop, we can still interact with the page, we can click stop and it’s gonna stop and everything is great.
Now, if we do that same thing, hang on, I need to make this small again.
It’s almost like I know something’s gonna go wrong. (laughing) If we do the same thing with promises, we have a function that’s gonna find that same element on the page, and it’s gonna set the innerHTML of that element to the number of microtasks that have run.
And then it’s gonna create a new microtask using Promise.resolve().then, which is gonna call itself again, in a loop. And if we do that, we can immediately see, everything’s gone to shit.
(crowd laughter) So the button has clicked but it hasn’t come back. There’s a pretty fancy button, so hey.
(crowd laughter) You can see the cursor is still the little hand from pointing over the button and I can’t select any text, I can’t click stop. This page is borked.
Um, if we leave it running long enough, Chrome will realise and it will stop it.
There we go.
Exit page, thank you Chrome.
Alright, this can grow big again. So what’s basically happened there is, we’ve had our little script run, then we’ve clicked on the button, and the callback of that button, gosh that script runs slowly, the callback of that button has created a promise callback. And that promise callback has created another microtask, and that microtask has created another microtask, and it’s just doing that forever. So you can see the rendering pipeline can’t run, and even if we interact with the page, nothing else can happen, because we’re stuck just processing that microtask queue.
So my advice to you would be, don’t set up infinite loops that make microtasks. (crowd laughing) So now we can see our event loop looks like this. It’s still an infinite loop.
Each turn of the loop, we’re gonna pick a queue. We’re gonna take the first task off the queue, we’re gonna run that task, then as long as there’s microtasks in the microtask queue, we’re gonna run all of those.
Then, if it’s time to repaint, we’ll repaint. Cool, turns out though, it’s a little bit more complicated than that (laughs). There’s also the animation frame callback queue. You can add things to this queue by calling requestAnimationFrame and passing it a callback.
But why would you want to do that? Well, say we wanted to do a really exciting animation, like have a square that moves along a path described by a sine wave. So this square, the position just depends on the time that’s passed, right? The x-position is just the time, and the y-position is just the sine of the time. Sign of the times, I… (laughs)
As a naive implementation, we might do something like this. So it’s just a loop that’s gonna run as long as the right-hand side of the box hasn’t reached the right-hand side of the screen, we’ll work out how much time has passed, and then we’ll calculate the x and y positions of the box. If we do that, we’ll get this.
Which is just basically the box at the final point of the animation.
Reason for that, is that we’ve basically just created one massive task, and that’s just gonna run and it’s gonna keep calculating the position of the box but the rendering pipeline never runs.
So the box never gets updated and so we’ve finished when the box is at the edge of the screen already.
Right, and then the pipeline will run.
Obviously, that’s not what we’re after.
So we could try a slightly improved version where we break it up into tasks.
So, once again, we’re gonna work out the amount of time that’s passed, we’ll calculate the x-position and the y-position of the box, and then we’ll create a new task that’s gonna calculate it again. Because it’s a new task, it’s giving the rendering pipeline a chance to run. So that’s gonna look something like this.
We’re gonna create a task, it’s gonna calculate the position and create another task which will calculate the position and so on and so forth.
And eventually the rendering pipeline will run and because the box hasn’t made it to the edge of the screen yet, it will keep moving along like that.
Now you can probably see the disadvantage of doing things this way, which is that you end up calculating the position four times for every time that it actually updates.
Which is, a bit wasteful.
So we can fix that by swapping a setTimeout for requestAnimationFrame.
Which, will work like this.
So we have our script running, and it calls requestAnimationFrame. That animation task will go in the animation queue, there in the green. And then when the rendering pipeline is ready to run, it will run the animation tasks first, and then it will run the rendering pipeline. Like the microtask queue, if there’s a couple of things in that animation queue, it will run all of them. But if we add a new thing, it won’t run that until the next loop around. The reason for that is that we wanna use it like we just did in our example: we wanna create a task that does this frame, and then set up the one for the next frame while we’re doing that first one, right? So we don’t want that second one to run straight away, we want the rendering pipeline to run, and then the next one to calculate, and so on and so forth, to make a really amazing animation.
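A sketch of the requestAnimationFrame version. The numbers in the sine wave and the idea of passing in a box element are my own; the position function is pure and runs anywhere, while the animate wiring is browser-only:

```javascript
// Position of the box as a pure function of elapsed time (ms):
// x moves linearly, y follows a sine wave.
function position(elapsed) {
  return { x: elapsed / 10, y: 100 + 40 * Math.sin(elapsed / 200) };
}

// Browser-only wiring: one position calculation per frame, with the
// next frame's callback queued while this one runs, so the rendering
// pipeline gets to repaint in between.
function animate(box) {
  const start = performance.now();
  function frame(now) {
    const { x, y } = position(now - start);
    box.style.transform = `translate(${x}px, ${y}px)`;
    if (x + box.offsetWidth < window.innerWidth) {
      requestAnimationFrame(frame);
    }
  }
  requestAnimationFrame(frame);
}
```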
So we can see here, the top box is animated using requestAnimationFrame, and the bottom box is animated using setTimeout, and they pretty much look identical except for the, you know, colour.
The big difference though, is you can see requestAnimationFrame has run 687 times and setTimeout has run 2,750 times, so that’s a bit over four times as many tasks. The reason it’s only four times as many is because browsers actually throttle calls to setTimeout, so you can’t run one more than once every four milliseconds. If we created our tasks using something else, like postMessage, it would’ve been a much, much higher number.
The other thing here is that these have worked the same because there’s nothing else going on on this page. If there was other stuff going on on the page, there could be any number of other tasks in the task queue ahead of the setTimeout, and that could push out the rendering again and start making it look janky, like the dinosaur with its jaw falling off (laughs).
So yeah, if you want animations, or even if you’ve just got stuff that’s updating the DOM, it’s a good idea to batch it using requestAnimationFrame for performance. So now, our event loop looks like this.
It’s still an infinite loop, each turn of the loop, we’re gonna pick a queue, we’re gonna take the first task off that queue and we’re gonna run that task.
Then we’re gonna run all of the microtasks in the microtask queue then if it’s time to repaint, we will run all of the animation tasks that are currently in the queue but not any that get added later and then we’ll repaint.
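Putting all of that together, the browser event loop we’ve built up is roughly this pseudocode:

```text
loop forever:
  queue = pick a task queue        (browser's choice; tasks from the
                                    same source stay in the same queue)
  task = first task in queue
  run task

  while the microtask queue is not empty:
    run next microtask             (including any queued along the way)

  if it's time to repaint:
    run the animation frame callbacks queued right now
                                   (not ones added during this repaint)
    repaint
```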
That is, actually, about as complicated as it gets. Which is nice, but, what about Node? Well, it turns out, some good news, Node works much like the browser.
It’s very similar but a bit simpler.
For a start, it doesn’t have Web APIs, obviously, because that would be weird.
But it does have the unicorn velociraptor library, or libuv, which provides basically the same functionality as the Web APIs, but has a way better logo.
(crowd laughs) Like I said, Node is a bit simpler than the browser. There’s obviously no DOM, so we don’t have to worry about any of that crap. There’s very limited user interaction, so you don’t have to worry about users clicking on stuff, jumping in, interacting with shit when you don’t want them to. And there’s none of this sharing windows, so you don’t have to worry about any of that, which is pretty good.
Node also, the Node event loop also isn’t an infinite loop like the browser.
So the browser’s infinite loop is gonna keep running forever and ever, Node, though, is just gonna run through the loop, if there’s more stuff to do, it’ll run through again, and then it’s done, the process will end.
Who remembers the thunderbolt rollercoaster from Dreamworld, up in Queensland anyone, yes (laughs)! You saw that home video, it’s from 1999.
So it was a while ago.
So, this is our Node event loop.
It’s got three main queues, well, they’re actually called phases in Node, just because of the way that they execute. So there’s one for event callbacks, so disk I/O, network requests, that kind of thing. There’s one that’s called the check phase, which I will get into in a minute. And there’s one for the timeouts, like I said before, so setTimeout and setInterval callbacks will go in that one. The way that it runs is, Node really likes events. So, it’s gonna hang around for a bit and see if any events turn up, and if event callbacks turn up, it’s gonna run them. And it’s gonna run everything in the queue. It’s just gonna keep going until that event callback queue is empty.
Which will be soon.
Then it’s gonna run the check phase and again, it will run everything in the check phase.
Once that’s done, it will go on to the next phase, which is the timers phase, so it will run everything in that timers phase. And then it’ll go back to the beginning.
So then, if more events have turned up, it’ll go back and run through those. And if nothing else is going on, it will just end.
The check phase, I promised I would explain. The check phase, you can add stuff to the check phase queue by calling setImmediate and passing it a callback. It essentially does the same thing as setTimeout with a timeout of zero.
Except it’s not throttled the way that I said before that setTimeout is, so you can call setImmediate as much as you like. Also, if you call setTimeout with a zero delay and you also call setImmediate, like in the same I/O callback, the setImmediate callback will always run first, just because of the order that the phases go in. Cool, so those are the main queues.
It also has microtasks, so if we have a promise, then its callback is gonna go in the promise microtask queue, and that will work just like in the browser. Obviously it’s not gonna block up any rendering, because that’s not a thing.
But um, yeah, just works the same as in the browser. There’s also this other queue which is also a microtask queue, which is the nextTick queue.
You can add things to the nextTick queue by calling process.nextTick and passing in a callback. Um, why does it have two microtask queues? I don’t actually know.
The process.nextTick queue predates promises in Node, so it was around first. It’s intended to be used as kind of a way of doing error handling, I guess: if you’ve got a process that you want to finish, and you wanna wait for that to finish and then do some error handling afterwards, then you can do that within the same task, alright. I’m not a Node developer, I don’t know what Node developers do.
But it’s there.
Um (laughs), cool, so Node, like I said, much like the browser, a bit simpler though. And it’s got those two new things, the check queue and the process.nextTick queue, if you are having trouble remembering how those work, it turns out, it’s very simple.
setImmediate means do something on the next tick and process.nextTick means do something immediately. (crowd laughing) Why are they named like that? Naming things, right, is one of the two hard problems in computer science.
You guys know the two hard problems in computer science? Naming things, cache invalidation and off-by-one errors? (crowd laughs) Cool, so, this is our Node event loop and I promise this is the whole thing, it doesn’t get any more complicated than this. Uh, so as long as there are tasks ready to run it’s going to keep running.
Each turn of the event loop, it’s gonna grab a queue, then as long as that queue has tasks in it, it’s gonna grab the first task off that queue and run it, then it’s gonna do everything in the nextTick queue, then it’s gonna do everything in the promise microtask queue. And that, is pretty much event loops.
So, take home messages.
Don’t block rendering.
Uh, just don’t have long-running tasks running in your main thread.
Use web workers, like I said, even IE 11 supports them, so. Always use rel="noopener" when you’re opening up a child window, just to keep that separation between the two windows. Promises beat tasks: if you’ve got some weird timing issue going on with your promises and your other things, it’s because your promise callbacks are gonna run before any other tasks, so they’re going to win.
And if you’re animating things, absolutely use requestAnimationFrame because that is what it’s for.
Um, also, and I hope this is something that people can go back and look at: if you have code with this kind of thing in it, that breaks if you take out the setTimeout, or if you’re an old-school Angular developer and you’ve got this, hopefully now you understand what that’s doing, and you can maybe have a look at that code and it’s not magic anymore.
Perhaps you can work out if that is really the most appropriate thing to be doing. But most of all, I hope that now, you’ve got a much better understanding of the event loop and you all feel a bit like clever girls.
(crowd laughs) Thanks! (audience applauds) (upbeat electronic music)