Hello, and welcome to my presentation about multicore JavaScript.

The past, present and future. Today, during the course of this presentation, I will try to give you some insight into JavaScript.

What I mean is that, like many other beautiful things, JavaScript is constantly evolving, and it's quite normal to feel lost.

It's quite normal to look at all these things that are happening within the world of JavaScript in complete isolation and feel intrigued about why things are happening a certain way.

But at the same time, I feel that it's quite easy to lose the bigger picture: why are these things happening, and what are the different trends and directions that the language is evolving in?

I cannot, of course, go through all of it.

But I would like to explore, through the course of this presentation, one of these trends, which, as I said, is multicore JavaScript.

Before we begin, however, I would like to take a moment to congratulate myself.

I am your presenter.

My name is Ujjwal Sharma, but I'm commonly known on the internet as @ryzokuken, and you can find me on Twitter or GitHub or whatever social media platform you prefer.

I'll probably be there by that username, so feel free to berate me or, you know, send me questions.

I'm a compiler hacker at Igalia.

If you don't know about Igalia, we are a free software consultancy.

So over there I work on compilers and programming languages.

And, as you might have guessed, mostly JavaScript. I am also a co-chairperson of TC39.

That's at the top of my list of some of the most descriptive names ever.

But if you don't know what TC39 is: TC39 is a technical committee, the 39th technical committee of the European Computer Manufacturers Association (Ecma).

Now, that doesn't tell you much either. It is a committee of people who are invested in JavaScript, like yours truly and, hopefully, you, and who like to evolve the programming language and work on the standards around it.

So we work on new features for the language, on testing of different engines and browsers, and so on.

I am also one of the co-editors of ECMA-402.

This is the ECMAScript Internationalization API.

So if you've used that, then you've seen some of my work.

I am also a core collaborator in the Node.js project.

But apart from that, I am from New Delhi in India and I now live in A Coruña in Spain.

It's quite sunny today, as it mostly always is.

There's been a heat wave.

But I love dogs. I know this is an online conference, but if you have a dog, then I'm gonna pay you a visit.

But one thing that I want to mention before I start is that this is in no way something that I've tackled or am working on by myself.

This is the collaborative effort of a number of the most skilled engineers that I know.

Different organizations, including Igalia, Bloomberg and Google, have come together to work on this next iteration of JavaScript that I'm gonna talk about in just a bit.

So, diving in: one fact that I might wanna rub in your face a little bit (sorry in advance) is that JavaScript is kind of ancient.

And if you think about it, you start to realize it, right?

I mean, a lot of these programming languages that we compare ourselves with, a lot of our contemporaries, like Golang or Rust and Zig, and now there's Carbon, all these programming languages are quite new if you compare them to JavaScript.

For example, here's one of the first few users of JavaScript.

But jokes apart, JavaScript was conceived (which is a word in itself) in December of 1995.

A little bit of context.

I was born after this fact.

You might or might not have been, but you will have to agree with me that 1995 was years ago, right?

Back then, computers kind of looked like this and worked like this.

The top-of-the-class Intel Pentium Pro that was released that year had one core and maximum clock rates up to 200 MHz, and the AMD equivalent had slightly lower clock rates, but also a single core.

However, these were top-of-the-line processors being released right then. The reality is that a lot of us probably didn't use exactly these processors.

I know I didn't; even years later we would use consumer processors. The Intel Celeron is what I used in my home computer.

When I was little, years after 1995, it was also a single-core processor.

Here's a tiny picture of me playing on that Celeron computer with my older brother.

I couldn't beat him.

But hey, I can build video games now.

Today's computers, by the way, not to get sidetracked, are very different.

So, for example, the new, is it even new anymore?

But yeah, the newer Samsung Galaxy S23 has eight cores.

The CPU in that phone has eight cores, and we could go into the details of what each of these cores is, but that's belaboring the point.

And well, if you use one of those newer MacBooks, it might have like 10 cores.

So all of these new ARM processors, and even non-ARM processors, have a number of cores.

And we're starting to get, you know, Intel x86 CPUs, like the 12th gen.

I think it is, right?

The newest Intel processors have a number of cores.

So the computing industry has evolved into quite a different corner, I feel, than where we started off, right?

CPUs used to be single-core, or maybe more, but they used to mostly scale by frequency.

And I remember when I was younger, people would care a lot about how many GHz of juice your CPU had.

Now we don't care about that so much anymore, because CPUs don't scale by frequency today.

But let's talk about concurrency.


Now that we have discussed, like the, sort of shifts in CPU design, how does this affect concurrency?

How does this affect parallel programming?

So, a quick primer. I'm really sorry if you know all of this, but just to get everybody on the same page: imagine that I wanted to get a bunch of tasks done, like these four tasks, where each task takes a different amount of time to perform, right?

So A, B, C and D.

And I could do them all, as you'd expect, one after another, in order.

And that would take me a certain amount of time, which is A plus B plus C plus D. Or, if they were not dependent on each other, if they were completely independent, I could try something more interesting.

I could try something else.

If I had the ability to perform multiple tasks simultaneously, that would make things faster.

Of course I could.

I could theoretically cut down the time in half. Of course, as you can see from this diagram, it's not actually exactly half; it's the maximum of A plus C, two of the tasks that I club together, and B plus D, the other two. The problem might be quite clear to you at this point.

But basically, what's happening here is what, during the course of this presentation, we will refer to as the thread-like model of computation.

If you have a little background on what threads are and what threaded programming is, this might be quite familiar to you, but essentially the idea is that we have two threads of execution and they're both working simultaneously.

As you can see, one of the most common pitfalls of thread-like programming is this, right?

If I organize tasks just a little differently, it's still not completely half and half.

Of course, that's theoretical; it's not so easy practically.

Although, over a long enough course of time, you could try to do that.

That said, the idea is that just a different configuration of what goes where changes things, like this. And now, as you might see, there's a certain amount of time that I'm gaining just because I did things more efficiently.
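The timing math from these diagrams can be sketched in a few lines. The durations here are made-up numbers, purely for illustration:

```javascript
// Hypothetical task durations (the labels a-d are from the slides; the
// numbers are invented for illustration).
const t = { a: 2, b: 6, c: 3, d: 5 };

// One after another, in order: a + b + c + d
const sequential = t.a + t.b + t.c + t.d;

// Two threads, first arrangement: max(a + c, b + d)
const firstArrangement = Math.max(t.a + t.c, t.b + t.d);

// Two threads, rearranged: max(a + b, c + d)
const rearranged = Math.max(t.a + t.b, t.c + t.d);

console.log(sequential, firstArrangement, rearranged); // 16 11 8
```

Same tasks, same two threads; only which task goes where changed, and the total dropped from 11 to 8.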

This is complicated.

Of course.

If you are not super familiar with this, or even if you are, sometimes you just don't know how long these tasks are gonna take, you know, and you make the wrong call; sometimes you just have to trust your gut.

So this is not a complete science; it's a lot of finicky guesswork, and it might go wrong.

So this is not my favorite model of computation.

I'm sorry to all the thread-like fans out there.

But let me introduce you to another one.

So let's take a deeper look into our tasks, right?

We are back to the basics.

We have A, B, C and D all sort of operating sequentially.

And if you look deep down... what is that?

Now, all of our tasks are composed of subtasks, right?

And they're not contiguously composed of subtasks.

So let me give you a sort of example that makes sense.

Let's say that we have an async function called 'do' that takes some arguments and then fetches something.

So it waits for the network to respond and then it does some operation f1.

Then it fetches something else and then does another operation then fetches something else and then returns that value.

If you think about it, this function is basically wait, do, wait, do and wait.

And then of course do at the end, which is returning, but it's not a whole lot of computation.

(Those should have been await fetch, by the way, but you get the point.)

The idea is that during the life cycle of our function 'do', it's not always continuously working.

It's not always continuously using the CPU juice; a lot of the time it's just lazing around, waiting for things to happen, like the network or the disk to respond.

Right.
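For reference, a corrected, runnable version of that function might look like this. Since `do` is a reserved word, it's renamed here, and `fetch`, `f1` and `f2` are replaced by made-up stand-ins so the sketch runs on its own:

```javascript
// Hypothetical stand-ins so the sketch runs offline: fakeFetch simulates a
// network wait, f1/f2 simulate CPU work. None of these names come from a
// real API.
const fakeFetch = (x) =>
  new Promise((resolve) => setTimeout(() => resolve(x), 10));
const f1 = (a, y) => a + y;
const f2 = (c, z) => c * z;

// 'do' is a reserved word in JavaScript, so the function is renamed.
// Each await is a wait (⌚); each call in between is actual work (🐎).
async function doTask(x, y, z) {
  const a = await fakeFetch(x); // ⌚
  const b = f1(a, y);           // 🐎
  const c = await fakeFetch(b); // ⌚
  const d = f2(c, z);           // 🐎
  return fakeFetch(d);          // ⌚
}

doTask(1, 2, 3).then(console.log); // f2(f1(1, 2), 3) = 9
```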

So, as I just mentioned, network calls are not the only sources of waiting time.

However, since we are JavaScript developers, a lot of us are writing Web applications.

So network calls are certainly the most common source of waiting that you'll encounter, purely statistically, I suppose.

What if I did this weird thing?

Okay.

Let's say that I can only execute one task at a time.

But I just kicked off all the tasks at once and then executed whichever was ready. If more than one were ready, of course, I could do them one at a time.

But if some were waiting, I wouldn't have to do anything.

So what if I did that?

So let's stack them all up on top of each other, and let's say we start at the beginning. And then, okay:

We get the first subtask; let's do that.

And then we move on, and then the second subtask, of a totally different task; let's do that.

And so on; we keep moving, and whatever's ready, we focus on that.

So this is basically a task or event loop, which you might be familiar with.

It's an ingredient that is present in virtually every modern JavaScript runtime.

And this is what, during the course of this presentation, we're gonna call the Web-like model of computation, so look out for that term.
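That run-to-completion behaviour is easy to see in a few lines; the task labels here are invented for illustration:

```javascript
// Sketch: kick off all "tasks" at once; the event loop runs each
// continuation only after the current synchronous turn runs to completion.
async function demo() {
  const log = [];
  const task = (label) => Promise.resolve().then(() => log.push(label));
  const all = Promise.all([task("a"), task("b"), task("c")]);
  log.push("current turn"); // runs first, despite being written last
  await all;
  return log;
}

demo().then(console.log); // logs "current turn" first, then a, b, c
```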

Moving on: as we've just discussed, there are two models of concurrency, right?

There's the Web-like, and there's the thread-like.

Well, what's the difference?

Web-like is mostly composed of run-to-completion kinds of mechanisms.

So, things like workers, right?

What happens when you create a worker? You start the worker, you give it a certain amount of code to run, and then it runs to completion, right?

There's not a lot of cross mingling going on.

You just let the worker do stuff on its own more or less.

So, so that's sort of simpler to wrap your head around, I hope.

There's sometimes message passing happening between these different workers, something that you might achieve with postMessage in JavaScript.

There's async APIs.

So certain calls are asynchronous, like, you know, calls that return promises in JavaScript.

And the most important part, at least for me, because I like my programming to be easy, is that by design there are no data races and there's complete data isolation, right?

If you're just returning promises, if you're just using workers, there's no shared memory.

There's no need for synchronization, which means that you don't run into some of the classical problems of concurrent programming like the thread-like model does.

Moving on, and speaking of the devil: in the thread-like model, you have synchronous APIs and manual synchronization, right?

So there's a lot of these different synchronous tasks that are being executed all in parallel.

And then you have to synchronize between them manually. For this, JavaScript has Atomics, which allows you to perform atomic operations.

Then we have the concept of data races, because there's the idea of shared memory: different threads can access shared memory through things like SharedArrayBuffers.
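A minimal sketch of those two primitives, shown in a single thread just to keep it short (in real code, the buffer would be posted to a worker):

```javascript
// A SharedArrayBuffer is the shared memory; Atomics gives you atomic
// operations on it. In real code you'd postMessage(sab) to a worker and
// both sides would operate on the same bytes.
const sab = new SharedArrayBuffer(4); // room for one Int32
const shared = new Int32Array(sab);

Atomics.store(shared, 0, 40); // atomic write
Atomics.add(shared, 0, 2);    // atomic read-modify-write

console.log(Atomics.load(shared, 0)); // 42
```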

And, more or less, this is a rudimentary diagram that I made, which tries to show what a Web-like system model looks like, right?

So there's these different event loops that might be working all together.

So, you know, you don't have to do only one thing at a time.

If you can do more things at a time, you can still use an event loop.

And each running instance has its own memory, and they can sort of interact with each other somewhat.

And then there's the thread-like model, where you have these different parallel-running execution threads, each having their own memory, of course, and then some blocks of shared memory that they're each accessing, but not freely: they have some locks, hence those tiny emojis that I used.

But you get what I mean.

The reality of JavaScript, unfortunately, is quite a bit more complicated.

So we have, as I mentioned, a little bit of both, and they're sort of just working together.

So how do we make sense of this, and how do we move forward?

Talking about the Web-like model.

It has its own goods and bads, of course. The goods are that it's easy to use and easy to reason about. I think virtually everybody who writes JavaScript uses this every single day, and while promises can sometimes be complicated to wrap your head around, especially when you're starting out, I think they're certainly easier than something like threads, which, in my case, took two semesters' worth of classes at university, and that's not ideal, is it?

So yeah, one of the best features of Web-like, for me at least, is that it's easy to comprehend and it's easy to work with.

At the same time, some of the reasons why that is, is that it's causal.

Things happen in a certain way that you can make sense of.

And, as I mentioned just a little bit ago, it is data-race-free by construction, which means that unless you put in some contraptions, things should be fine.

And at the same time, there's isolation, which is more or less the same thing.

Another thing is that, with the class of applications that we usually build, with a lot of wait times and so on, asynchronicity generally corresponds to smoother applications and smoother experiences. And a lot of us build interfaces.

Asynchronous interfaces, I can assure you, are smoother than interfaces that rely on other concurrency models, because of the nature of asynchronous programming.

It's also, I guess this could be wrapped into the first point, less focused on manual synchronization mechanics.

So you don't need to learn a lot about locks and queues and semaphores and mutexes, which are also locks.

But you get the point: there's a lot of this complicated stuff that you don't need to think about.

You can focus on building your application.

There are bads, however, the main one being that it leaves performance on the table.

It lets the event loop implementation, in most cases, decide what's best.

And that's that.

And now, of course, if you're an expert who has spent years learning how to optimize code and write parallel programs, you might be able to minmax your way out of this and essentially build an application that is more performant than something that uses the basic event loop.

That's a possibility, but it's, very difficult.

So, more or less.

On the thread-like side there's the goods.

So, of course, one of the biggest goods, for me at least, is the WebAssembly interop.

Now, if you're not familiar, TL;DR: WebAssembly has support for lightweight threads, which is great for WebAssembly, and certainly makes more sense for them than it does for us, for sure.

But the important part is that, you know, when you write threaded code in WebAssembly and you're trying to interoperate on the JavaScript side, it makes sense, right?

Conceptually, it makes sense to write threaded code, and it all interops together.

Not only does it interop well with WebAssembly, but more specifically it interoperates well with WasmGC.

Now, WasmGC, again TL;DR, is a new sort of proposal; I think it's beyond just a proposal now in WebAssembly, it's becoming a reality. It exposes a lot of very interesting things for garbage collection, and that is also assisted by the thread-like model.

So I think that those are its two biggest benefits, right?

Another one, since, as I mentioned, the pros of one might be the cons of the other, is good performance.

With thread-like, if you try hard, it can be quite performant, and you can minmax your way out of every single optimization problem if you work hard enough.

And you can believe that in some of the biggest engineering powerhouses, people are doing exactly that.

Bads, however, as I said, sort of translate from the other side: it's hard to reason about and use, which doesn't apply to the Web-like model.

It also relies on manual synchronization, so every single operation that you do in the shared space outside of your own isolate needs to be synchronized manually using tools like locks and so on.

There's also the possibility of data races, which you need to avoid; it's a science in and of itself.

It's something that people take long courses on.

It's not so easy.

And I speak from experience.

At the same time, there's, from time to time, acausal astonishment.

So things can sort of happen out of order.

All of these combined create an effect that I like to call "must be this tall", which I'll show you in just a bit: people feel that they're not quite ready to work with this model yet.

And it also exposes more timing channels, although with Spectre and Meltdown and all of those, that's more or less water under the bridge at this point, but yeah, just FYI.

Yeah, this is what I meant when I said it creates a "must be this tall" kind of effect.

So yeah, it certainly makes you feel that way.

For sure.

Now.

All of this said TC39 has an interesting problem.

Right?

Should we focus on the Web-like side, which totally exists?

Or should we focus on the thread-like side, which also exists in JavaScript?

And we have an interesting solution to this, which is let's focus on both of them.

So let's break this initiative down into phases.

What are we gonna do?

And when are we gonna do it?

So, on the Web-like side, what do we need for the initial phase?

What are the basic minimum things that we ought to do? First, we need language support for asynchronous communication.

And, secondly, we need the ability to spawn arbitrary units of computation.

And hold that thought right there.

So I'll reveal a bit more about these in, the next slide.

But yeah, these are the minimum set of items that we need in order to build asynchronous applications.

On the thread-like side, we need something to have shared memory.

We need a basic synchronization primitive to do synchronization on top of that, and we need the ability to spawn threads.

Of course, there cannot be a thread-like model without threads.

Right.

Now if you paid attention to the last slide, you'll realize we're basically done here.

Everything that I talked about in the first phase is done.

We have promises, and we have async/await, which makes everything perfectly ergonomic.

We have workers, which can be threads of execution.

We have SharedArrayBuffer, which is shared memory, and Atomics, which can help you do synchronization.

So let's move on.

Let's not spend too much time in the past; let's move on to phase two.

Now, in phase two, on the Web-like side, we are trying to solve the problem of ergonomic and performant data transfer and the problem of ergonomic and performant code transfer.

Well, you know, we'll get deeper into that, but on the thread-like side, we would require higher-level objects that allow concurrent access.

Because right now everything's a mess.

We have normal objects that are a mess.

And we need higher-level synchronization mechanisms.

So all the pain points that we have right now are sort of composed into this one slide, if you think about it.

Yeah.

As I said, it's designed to address the biggest observed pain points, this time on the Web-like side.

So transferring data in JavaScript when you're utilizing this Web-like model is expensive, right?

Transferables, the kinds of things that you can transfer across different contexts, are very limited.

There's always the weird reparenting of the prototype when things do get transferred.

So if your project relies a lot on prototypes, it's not so easy to transfer things around.

I've seen this happen over and over again.

And I don't blame developers, but often they would copy, like deep-copy, objects from one side to the other.

And this, of course, is probably one of the big reasons why there's this weird reparenting of prototypes happening.

Transferring data is also unergonomic.

I mean, yeah, it is expensive, but it's also not so easy to do.

It often requires you to serialize your entire object, to JSON or similar, and then deserialize it on the other end, which is not the most ideal way to do things.

This results in identity discontinuity, which means that if you take a thing from one realm, throw it into another and take it back, it's a completely different thing now.

So the identity of objects no longer holds.

Transferring code, however, is the biggest elephant in the room. I talked a lot about data, but transferring code is basically impossible.

We transfer strings and we throw them into eval, and that's basically it.

That's not great, is it?

We shouldn't call it a day there; we should focus in more on that.

So, to fix that, a proposal that we're working on right now is module blocks.

It aims to solve the problem of ergonomic sharing of code.

It is spearheaded by Surma, who works at Shopify.

And it looks something like this.

So, if you notice, now you have a module block: you literally say module and then you start a block.

And all the code inside of that is a module in itself.

And you can assign that to a variable, and you can then throw that variable around.

You're essentially transferring code between different parts of your program.

Well, you can import that block and run it.

You can call it inside of a Realm; if you don't know what that means, follow up on the ShadowRealm proposal.

It's awesome.

But yeah, so module blocks.

It fixes one of the problems.

Another upcoming proposal that we have in the pipeline is called shared disjoint heaps.

It's sort of more of a concept until now, but it aims to solve the problem of both ergonomic and performant sharing of data as well as code.

The idea is that it allows you to... so, right now in JavaScript, you're not mindful when you create variables and so on; you assume that there's one heap where these things are happening.

Right.

And of course, if you run a different process running JavaScript code, or, like, open another tab, maybe that's on another heap, but these are not working together.

Thanks to this proposal, you should be able to separate your heap into different composable heaps, and the agent-local heap, the main heap that's basically the only one you have access to right now, can point into these distinct shareable heaps.

And the shareable heaps cannot point into the agent-local heaps, of course, because they cannot depend on them.

But the unit of sharing then would be these transferable heaps.

Right?

You can have one heap that runs some code and contains some data, and you can throw that around.

And that would work.

On the threading side, let's see what we have.

Well, surprise, surprise: it's also designed to address the biggest observed pain points.

So, first of all, addressing the elephant in the room: nobody knows how to use SharedArrayBuffers and Atomics well.

Well, I've been giving this talk at in-person conferences, where I can ask for a show of hands; I cannot do that here.

But tell me on Twitter if you know how to use either of these well, and we'll have a chat.

But I feel that people don't know how to use them well.

At least I don't.

And some of the folks who designed them are equally as clueless as I am.

And it's not their fault.

I mean, these are difficult-to-use features, because the impedance mismatch is quite high, and they're not exactly the simplest to wrap your head around.

Well, that's something that we need to fix.

And a proposal to deal with some of those problems is 'structs'.

So it aims to solve the problem of higher level objects that allow concurrent access.

As I mentioned, right now we just have raw objects, which are a mess when you use them with threaded access.

This is spearheaded by Shu-yu Guo, and it looks something like this.

So you can start your classes with the struct keyword, and this will create a struct instead of a normal object, which means that it's sealed and you cannot extend the prototype, and so on.

Okay.

So it's been a lot, but bear with me here.

With all that done, what does the future phase look like, after we're done doing what we're doing right now?

What do we have planned for the future?

What can we do that can help things further?

Well, on the Web-like side, I think that we can have lighter-weight actors.

I feel that the cost of spinning up workers is quite high, which means that you cannot have the paradigm where you can have a lot of really tiny tasks and you can just spin up a new worker, do that, and then spin up another and do that.

The cost is too high.

Now, of course, people are working on fixing that in many ways, with some success, but it's not a given.

So maybe we can have lighter-weight actors, which would make people's lives easier.

We can also provide more integration with scheduling APIs.

Maybe you can have some insight into how the event loop is doing.

And so on.

This is a more advanced feature, and one that some people feel is not quite JavaScript-y and is too much, but let's say that it's certainly a possibility.

We can also have a concurrent standard library.

We could have a standard library built around promises and so on.

On the thread-like side, we could have better tooling that helps people develop these kinds of applications with more confidence, I think.

We could have tighter integration with WasmGC, although that's mostly a given at this point; I think we could always do better.

Right.

And again, a concurrent standard library.

Maybe that's the need of the hour, I don't know.

You tell me. But yeah, before I finish off, I'd like to give special thanks to Daniel Ehrenberg and Shu-yu Guo, both of whom have been very helpful through the course of my work as well as this presentation, and also to the organizers.

I know I'm quite difficult, but thanks for bearing with me and thanks for inviting me.

It's been an absolute pleasure to be here.

And with that, thank you very much.

Multicore JavaScript 🚀

Past, Present and Future WebDirections Global Scope

@ryzokuken

Photoshopped image of Ujjwal bestowing himself a medal.

Ujjwal Sharma (@ryzokuken)

  • Compilers Hacker at Igalia
  • TC39 Co-chairperson
  • ECMA-402 Co-editor
  • Node.js Core Collaborator
  • From New Delhi, India
  • Live in A Coruña, Spain
  • I love dogs 🐶

IGALIA 💖 BLOOMBERG 💖 GOOGLE

JavaScript is ancient

Photoshopped image of a dinosaur at a 1980s-style PC with the cover of "JavaScript: The Good Parts" on the screen.

Photoshopped image of Moses holding aloft a stone tablet on which reads "ship decorators please"

JavaScript was conceived in December 1995

  • Intel Pentium Pro
    • Max clock rates up to 200 MHz
    • 1 core
  • AMD Am5x86
    • Max clock rates up to 160 MHz
    • 1 core

photos of old computer chips

Aged photo of two young boys sitting at a computer.

Today’s computers are very different

  • Samsung Galaxy S23
  • 8 cores
    • 1 x Kryo Prime
    • 3 x Kryo Gold
    • 4 x Kryo Silver
  • Newer MacBooks
  • 10 cores
    • 8 x Firestorm
    • 2 x Icestorm

"Retro Wave" style text reads "Let's talk about cryptocurrency"

"Retro Wave" style text reads "Let's talk about concurrency"

Each task takes some time to perform

Same chart, now the bars are labelled a, b, c, and d

I could do them all as you’d expect

Bars are now rotated 90 degrees clockwise, and placed end to end. Beneath is the text "= a + b + c + d"

But if they were independent of each other, I could try something else...

Same as previous slide but text below is now a question mark.

If I could perform multiple tasks simultaneously, that would make things faster

The longer two bars, labelled b and d, are now laid end to end, and below them, bars a and c. Below is the text "= max(a + c, b + d)".

If I could perform multiple tasks simultaneously, that would make things faster

🧵

The same illustration from the previous slide, with a vertical line extending from the right edge of the top row. Text below reads "= max(a + c, b + d)"

I could try to make things more efficient if I tried

🧵

The bars are now arranged with c and d in the top row and a and b below, so the difference in widths is less, and overall the process takes less time. Text below reads "= max(a + b, c + d)"

This is complicated!

Let’s take a deeper look into our tasks...

Repeat of the 4 bars rotated and laid horizontally end to end, Text below reads "= a + b + c + d".

Let’s take a deeper look into our tasks...

The 4 bars from the previous slides are now unfilled. Inside are thin rectangles the height of the bar, representing where the work of the task occurs. Text below reads "WHAT"

async function do(x, y, z)
{
    const a = fetch(x);
    const b = f1(a, y);
    const c = fetch(b);
    const d = f2(c, z);
    return fetch(d);
}
async function do(x, y, z) {    
	const a = fetch(x); // ⌚
	const b = f1(a, y); // 🐎 
	const c = fetch(b); // ⌚
	const d = f2(c, z); // 🐎
	return fetch(d); //⌚
}

Network calls aren’t the only sources of ⌚ But they’re certainly the most common

What if I kicked off all tasks at once and then executed whichever was ready?

The four bars, rotated 90 degrees and largely unfilled, are now stacked on top of each other. Again, smaller rectangles inside show where the work of the task actually occurs. An arrow points to the start of the first task, then the start of the second task, then the start of the third.

This is a task/event loop!

Two models of concurrency ♻

Web-like 🕸

  • Run-to-completion (Workers)
  • Message-passing (postMessage)
  • Async APIs (Promises)
  • No data races, data isolation

Thread-like 🧵

  • Sync APIs, manual synchronization (Atomics)
  • Data races, shared memory (SABs)

Web-like 🕸

The word "memory" appears inside a stylised cloud three times. Below each is an icon, of two arrows pointing in the same direction forming a circle.

Thread-like 🧵

A much more complex image. The word "memory" again appears inside a stylised cloud three times. At the bottom are two stylised clouds labelled "shared memory". Arrows point from the left memory cloud to the left shared memory, from the right memory cloud to the right shared memory, and from the middle memory cloud to both shared memory clouds. Between the memory and shared memory clouds are 3 triangles labelled "execution". A lock icon appears above each of the shared memory clouds.

Reality

An illustration combining both the web like and thread like images.

Web-like 🕸

Goods

  • Ease of reasoning + using
    • Causal
    • Data race free by construction
    • Isolation
  • Asynchronous = smoother
  • Less focused on manual synchronization mechanics (locks, queues, etc)

Bads

  • Leaving performance on the table

Thread-like 🧵

Goods

  • WebAssembly interop
  • WasmGC interop
  • Good performance

Bads

  • Hard to reason & use
  • Manual synchronization
  • Data races
  • Acausal astonishments
  • "Must be this tall"
  • Exposes more timing channels

@ryzokuken

Photo of a balding man standing at a computer from behind. On the wall far above his head a sign reads "must be this tall to write multi-threaded code"

Two-button meme image. Top panel shows a button on the left labelled "focus on web-like" and on the right "focus on thread-like". Bottom panel of a sweating man is labelled "TC39".

Top panel now shows pushing both buttons at the same time. Bottom panel shows the man now smiling with a thumbs up.

Phase 1

Web-like 🕸

  • Language support for asynchronous communication
  • Ability to spawn units of computation

Thread-like 🧵

  • Shared memory
  • Basic synchronization primitive
  • Ability to spawn threads

Phase 1

Actually, we’re done here ✅

  • Promises
  • async/await
  • Workers
  • SharedArrayBuffer
  • Atomics

Phase 2

Web-like 🕸

  • Ergonomic and performant data transfer
  • Ergonomic and performant code transfer

Thread-like 🧵

  • Higher level objects that allow concurrent access
  • Higher level synchronization mechanisms

Phase 2 🕸

Designed to address biggest observed pain points

  • Transferring data is expensive:
    • Transferables very limited
    • Weird reparenting of [[Prototype]] even when transferred
    • Often copied
  • Transferring data is unergonomic:
    • Often requires serialization/deserialization
    • Identity discontinuity
  • Transferring code is basically not possible, we transfer strings

Proposal: Module Blocks

  • Aims to solve: Ergonomic sharing of code
  • Spearheaded by Surma
let moduleBlock = module {
	export let y = 1;
};

let moduleExports = await import(moduleBlock);
assert(moduleExports.y === 1);

assert(await import(moduleBlock) === moduleExports);

Upcoming Proposal: Shared Disjoint Heaps

  • Aims to solve: Ergonomic and performant sharing of data and code
  • Let developers separate their heap
  • Agent-local heaps can point into shareable heaps
  • Shareable heaps cannot point into agent-local heaps
  • Unit of sharing is transferable heap

Phase 2 🧵

Also designed to address biggest observed pain points

  • Nobody knows how to use SABs and Atomics well
  • Impedance mismatch too high

Proposal: Structs

  • Aims to solve: Higher-level objects that allow concurrent access
  • Spearheaded by Shu-yu Guo
struct class Box {
	constructor(x) { this.x = x; }
	x; // x is declared
}

let box = new Box();
box.x = 42;

assertThrows(() => { box.y = 8.8 });        // structs are sealed
assertThrows(() => { box.__proto__ = {} }); // structs are sealed

Future Phase

Web-like 🕸

  • Lighter-weight actors?
  • Integration with scheduling APIs
  • Concurrent std lib?

Thread-like 🧵

  • Tooling?
  • Integration with WasmGC
  • Concurrent std lib?

Special Thanks 🙇
  • Dan Ehrenberg
  • Shu-yu Guo
  • Organizers

Ta!🙏

@ryzokuken