WebAssembly – To the browser and beyond!

– Thank you, Tim.

Hello, welcome.

Hello, Amsterdam.

Firstly, thank you for that introduction, Tim. And thank you to the organizers for inviting me to be here in this wonderful city and share this stage with so many other wonderful humans. I’m honored and privileged to be here.

My name’s Patrick Hamann.

As Tim said, I’m a principal engineer at Fastly. And you can catch me on the Twitters there, @patrickhamann. Although, I might stress I’ve actually taken a sabbatical off Twitter for the last year.

I encourage you all to do it.

It’s amazing.

It’s very healthy.

But I’m back, and I’m gonna start jumping back in on the conversation there.

But please come and talk to me in the break or at the party later, about anything I talk about today, or just tell me what you’re up to.

I really love to hear what people are doing in this space. I work at Fastly.

We’re an edge cloud platform that the world’s largest brands use to deliver their content as fast as possible. And at Fastly I work in a team called the office of the CTO. It’s an emerging technologies group.

We’re inside the engineering organization.

And I want to talk to you about one of my newly found passions as part of this team that we’ve been looking at for the last two years now. And that’s WebAssembly.

I’m gonna discuss why we even need WebAssembly, what it is, and how you can use it today.

We’ll look at some practical use cases that people are already shipping, and the speed and performance benefits you’ll get from this. And finally we’ll look at how WebAssembly can be used outside of the browser, which is an amazing thing, and what the future holds for it, especially with the new standards and specifications that are coming.

Before we discuss what WebAssembly is, we need to take a step back, look at the history, and look at why it even exists.

And so, let’s talk about the Web platform that you all know and love.

For the last 20 years, the Web platform has largely consisted of these three technologies.

HTML, CSS, and JavaScript.

Initially we had HTML, which was created by researchers at CERN as a way to share information and documents by way of hyperlinks.

Then we realized that we needed to separate the style from the documents themselves, and we had CSS. And then finally, as web adoption increased, and the interactivity and dynamic nature of the Web became more demanding, we introduced JavaScript. We realized that we needed a glue language, as such, that was easy to use by programmers, whose runtime could be embedded within a web browser, and whose scripts could even be embedded within the markup, inside the HTML. And I think it’s a true testament to the design of these languages that they have lasted so long and held up against the test of time.

Occasionally we’ve had romantic flings with Adobe Flash and Java applets that have tried to change how we program on the Web, but they didn’t last, and there are reasons why that happened.

But these three technologies have remained the same. It’s truly amazing.

Later, with the birth of Ajax we could request fragments of the documents, even just data payloads, JSON payloads.

And our clients could stitch these together into the DOM, dynamically, and interactivity just dramatically increased. And so, with the likes of React and JavaScript frameworks and single-page web applications, most of the heavy lifting is now happening on the client. We’ve moved from that model of thick server, thin client, to thick client, thin server.

We’re doing more and more compute now on the client. Some of you may think that’s a good thing.

Some of you may think that’s a bad thing.

But we’re definitely seeing this paradigm change, this shift in paradigm.

But is that really the best model at all? Or rather, is sending megabytes of JavaScript to the client really the most efficient use of those resources, especially when mobile devices vary dramatically in their compute performance? Is the client the best place to run that logic? Or are there some things that we do on the server today that should actually be run on the client to reduce latency, if the client has that compute power?

JavaScript was designed to be an easy-to-use, safe, high-level programming language.

What I mean by a high-level programming language is one that can’t directly access memory, whereas a low-level language is one that can directly manipulate the underlying hardware and the physical memory that we write to.

But it also means that, because it’s a high-level language, we can’t optimize that code to be efficient on the hardware that we’re running it on. And again, this is really important when we have the low-end devices that the next billion users are coming online with. And JavaScript is written and distributed in plain text. That means it’s interpreted in the browser, with just-in-time compilation, so every time we load a new JavaScript file we have to compile it on the fly. And it’s only a single ecosystem.

Now, don’t get me wrong.

It’s an extremely large ecosystem.

In fact, it’s the largest programming language ecosystem in the world. But we can’t benefit from tools that have been written to solve problems in other languages that are well hardened and have been around for many, many years.

And so, that slide seemed like I was hating on JavaScript a bit too much, which I’m not.

I traditionally am a JavaScript developer.

It is the most ubiquitous programming language in the world. And the Web has succeeded because of JavaScript, and JavaScript has succeeded because of the Web. But the problem is that, on the Web, on the Web platform, all we have is JavaScript.

You don’t have a choice to drop down to a lower level language if you want to.

So if I’m programming on the server I could just, say, use a language like Ruby. It’s a high-level language, but it’s productive. It allows me and my developers to get things done quickly.

But when the problem is performance-critical and needs to run as fast as possible, we can drop down to a lower level language like C or C++.

We have that freedom on the server.

But on the Web platform, we don’t have that freedom to drop down to a lower level language, if and when we need to, for certain performance-critical operations. So what can we do about this? We could implement the C++ runtime or the Go runtime in web browsers.

But this would be, one, a security nightmare. Those languages have access to memory.

I don’t want to allow any third party that I download inside my browser, or my customers download, to access my memory directly.

That would be a security nightmare.

It’d be a maintenance nightmare.

I’d have to remember that, oh, Chrome 74 ships with the Go 1.4 runtime. Again, that matrix of possibilities would just be too much. It just doesn’t scale.

And so, this is where a team at Mozilla started to think about how we can solve this problem. And they created a tool called Emscripten.

Many of you might have heard of it.

It’s a C and C++ to JavaScript compiler.

Or at least initially it compiled to JavaScript.

And this allows anyone to compile a C program to JavaScript, to just raw JavaScript.

The problem with this is that it shipped with a really bloated runtime, because they had to recreate all the system calls, like the file system or making network requests, as JavaScript wrappers.

The bundles that this produced were tens of megabytes, especially if you were compiling a large C program. But they started to notice patterns in the compiled JavaScript output, especially when it came to managing numbers, and arrays, and bytes in memory.

And they created a subset of JavaScript called asm.js, which you might have heard of.

And what this allowed them to do is optimize their JavaScript engine to run that code extremely efficiently, if it was only managing number types and bytes in memory.

And the really interesting thing here is that when they were compiling the C program, for the memory space that it used they said, okay, we’re gonna allocate memory as a JavaScript array buffer, and that’s it. It’s gonna be a fixed size.

And that meant that the C program couldn’t access memory outside of that. It literally was just a typed array buffer in JavaScript. And so other browser vendors started noticing asm.js, and they started to optimize their own JavaScript engines to be able to run this code extremely, extremely efficiently.
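To get a feel for what that looked like, here’s a hand-written sketch in the asm.js style. In practice this code was machine-generated by Emscripten; the function here is just my own illustrative example, not code from the talk.

```javascript
// An asm.js-style module: the "use asm" pragma plus type-coercion
// annotations (x | 0 means "treat x as a 32-bit integer") are the
// patterns that let an engine compile this ahead of time instead of
// interpreting it.
function AsmAdd(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0; // coerce the argument to an int32
    b = b | 0;
    return (a + b) | 0; // the result is also an int32
  }
  return { add: add };
}

const math = AsmAdd();
console.log(math.add(2, 3)); // 5
```

Because asm.js is a strict subset of JavaScript, this runs in any engine at all; engines that recognized the pattern simply ran it much faster.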

And this is where WebAssembly comes in.

All we had was JavaScript.

And even though asm.js was a subset of JavaScript, we could only optimize it so much.

It still had to be delivered as plain text to the browser and parsed and interpreted, and that compute still takes a lot of time. So in 2015, the WebAssembly Working Group was formed and the whitepaper was released.

And it came out of this work that Mozilla did with asm.js, and the work Google did on a project called NaCl, which is Native Client. And it has all the things that you’d expect from a standard on the Web platform now.

It has a working group, it has standards.

You can go and look at them on W3C, et cetera. Hopefully now we’ve got an understanding of how WebAssembly came to exist.

Let’s look at what it actually is.

And many people just think it’s C++ for the browser, and it definitely isn’t that, and it’s so much more. Let’s talk about what it isn’t first.

It’s not a programming language.

You don’t actually write it by hand.

It’s a compilation target.

So you write other host languages that compile to that. But I like to think of it as just another tool in our toolbox that we can use to solve certain problems on the Web platform and extend the Web platform beyond the capabilities that it had before. So it’s a compilation target.

It’s for other languages to compile to.

You don’t write it yourself.

You write a host language, say Rust, or Go, or C++, and then you compile that language to it.

If we were to look at the WebAssembly website this is literally what it says.

It says, “WebAssembly, abbreviated Wasm,” which you might hear me say throughout the rest of this talk, “is a binary instruction format for a stack-based virtual machine.” Right now you’re probably thinking, or you might be thinking, “W-T-F, Patrick, what?” And I definitely did the first time I started learning about it.

Let’s pick that apart.

A binary instruction format.

This means it is a set of instructions for a machine to process that has been encoded in a binary format.

Unlike JavaScript, it’s not plain text.

It’s encoded in binary.

And because of this binary format, it means that we can natively decode it in the browser’s runtime much faster than we can decode JavaScript.

The initial experiments that Mozilla did for the WebAssembly whitepaper proved that WebAssembly can be decoded up to 20 times faster than JavaScript can in a browser environment. And especially when we’re shipping megabytes down to a mobile client, where sometimes we’re seeing JavaScript take up to 20 seconds to parse and execute on a low-end Android device, by being able to speed some of that up by up to 20 times you can see the performance benefits that we’re gonna get from this.

Especially when combined with techniques like streaming compilation, so we can compile it as the bytes come down the wire, and better-than-gzip compression, you’ve now got a binary instruction format that is designed to run extremely fast within web browsers.

It’s a stack-based virtual machine.

There are a couple of different ways that you can represent a machine like a CPU. One of those is stack-based, and the other is register-based.

The CPU in your device right now is register-based. But you don’t really have to worry about this. What’s more important is that it’s a virtual machine. It’s a processor that doesn’t actually exist. We’re not writing the WebAssembly to run on a specific processor, an x86 or ARM, some specific hardware.

We just need to compile it for this virtual machine, much like the JVM, the Java Virtual Machine, if you know about that.

And the cool thing about this is that the compilers, the languages, Rust or C++, don’t have to know about how the hardware works. They all can target this one virtual machine. And that makes it really portable, which is one of its greatest benefits.

WebAssembly’s spec is pretty simple.

It’s like this.

This is on the W3C website.

It only knows about numbers, integers and floats, and bytes in memory.

That is the type system.

It is that small.

And it also has instructions that operate on those types, like add, subtract, multiply.

But the really interesting thing is that actually a lot of very complex programs written in these low-level languages can be reduced down to just this set of numbers in memory, and then moving those numbers around in memory. Most software that we write actually boils down to that. So when you write some code in a low-level language, normally the compiler will compile it to something that’s known as an IR, an intermediate representation.

And that’s the same regardless of the system architecture that you’re compiling to. But then all these compilers have to have backends for each target that you’re supporting.

So ARM for mobile devices, or x86, which is the assembly instruction set that many of the laptops you’ve got here in the audience are gonna be using. And so, as a developer in one of those languages, you’ve gotta compile all your source code for each of those different instruction sets.

When you go and download a binary, you have to go, “Ah, I need to find the Darwin x86 one for my MacBook Pro that I’m running here.” But with WebAssembly you don’t need to worry about that. The code just gets compiled to the WebAssembly virtual machine’s instructions, which are normally shipped around in a .wasm binary file. And the runtime then takes care of how that is converted to assembly on the machine that you’re running on.

And because the WebAssembly instruction set has got that really small API that we just saw, it’s really easy for host languages to target WebAssembly. For some compilers it only took months to integrate WebAssembly support. And we’re gonna see that adoption increase dramatically over the next five years.

For it to succeed, WebAssembly had a few guiding principles, to ensure that it solved the problems it was set out to solve, the ones I outlined in that first section.

Firstly, it’s gotta be a compilation target, and we’ve just discussed what that means, so other languages’ toolchains can target it. It has to be fast to execute, and compact.

And this is really, really important when we’re talking about the Web platform.

When we’re shipping native code, the binaries are normally quite large files.

In fact, a lot of programs come preinstalled on your computers. But on the Web we don’t have that benefit.

We can’t benefit from any precompiled programs, and so we’ve gotta recompile them every time in the browser. So it has to be as compact and as fast to execute as possible if we’re gonna be able to ship native applications to the browser.

And it’s got a linear memory model.

And this is what I discussed about how Emscripten works: you supply the program with a specific slice of memory, and that’s the only thing it’s allowed to operate on. And this is one of the most fundamental points about WebAssembly: it has a memory sandbox, so when you instantiate a program it’s only allowed to use this one array of memory, and it starts here, and it ends here.

And so it’s physically impossible for the program to go outside of that.

And this makes it really, really powerful, and it also makes it possible for untrusted native code to be compiled and run safely in your browser, which is the most important thing.
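You can see that sandbox directly from JavaScript: a module’s linear memory is just a fixed-size buffer. A minimal sketch, where the page counts are arbitrary values I’ve picked for illustration:

```javascript
// WebAssembly memory is allocated in 64 KiB pages and exposed to
// JavaScript as a plain ArrayBuffer. The module can only ever read
// and write bytes inside this buffer.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 2 });
const view = new Uint8Array(memory.buffer);

console.log(view.length); // 65536, exactly one 64 KiB page

// The memory can grow, but never past `maximum`, and never into
// anyone else's address space.
memory.grow(1);
console.log(memory.buffer.byteLength); // 131072
```

Note that after `grow`, the old buffer is detached and `memory.buffer` points at a new, larger one; the module still can’t address anything outside it.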

To understand what’s happening here, I think it’s actually better to just visualize the relationship between the source language, WebAssembly, and the instructions that it actually gets converted to by the browser’s runtime.

Here we’ve got a simple example.

This is a tool called WasmExplorer, made by Michael Bebenita.

He’s on the WebAssembly team at Mozilla.

You can check it out there.

This is a really simple C program.

It adds A to B.

A and B are integers.

You can see here, this is the stack machine that WebAssembly has created for it.

We’ve got an int, i32, and we’re calling the add instruction on it. And then you can see directly how this relates to x86 assembly in the browser. You can see that add operation.

I’ve also lied to you.

I said that WebAssembly is a binary format, and this is text.

How is that even possible? This is called WAT, which is the WebAssembly text format. It’s the text representation of the binary instruction format, just like assembly is the text representation of binary machine code.

And so you can see, for the first time ever, we now, if we want to, can have direct control over the operations that a program in our browser is running on the hardware. Which, for me, is amazing.
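To make that concrete, here’s that same add function as raw WebAssembly bytes, instantiated from JavaScript. I’ve hand-encoded the binary purely for illustration; in practice a compiler emits the .wasm file for you.

```javascript
// The complete binary for a module exporting add(a, b) -> a + b.
// Section by section: magic number and version, the type signature
// (i32, i32) -> i32, the export "add", then the function body:
// local.get 0, local.get 1, i32.add.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// The synchronous API is fine for a module this tiny.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(19, 23)); // 42
```

Forty-odd bytes for a complete, runnable native module, which says a lot about how compact the format is.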

You might be asking what the support is like for all of this.

The most exciting thing about WebAssembly is that it’s supported in all major browsers; it’s shipped. This is the first new language to ship for the Web platform since JavaScript, 20 years ago now.

For me, that is amazing.

It’s across the entire Web platform now, so you can use it. Global support is over 88%, as we can see over there.

88.39%, to be exact.

To summarize, WebAssembly is a new language for the Web, compiled from other languages, that gives us native speed, consistency, and reliability in the browser.

And I’m gonna talk about that reliability a bit later. And it’s the first time ever that we have a portable representation of native programs that we can ship anywhere and run safely in the browser. And for me, this is amazing.

We now have a better understanding of why WebAssembly came to exist, and maybe an understanding of what it actually is. Let’s look at how we can use it.

We’ve got a new collection of JavaScript APIs that hang off the global object, all under the WebAssembly namespace. You can invoke the functions that your WebAssembly module exports.

Here the module exports a function called fibonacci, and I’m passing 42 into it.

This is the preferred way of doing it now.

Before, you had to load it synchronously, but here we can use instantiateStreaming. It uses the fetch API, which returns a promise. And the cool thing about instantiateStreaming is that as the bytes are coming off the wire we can start to compile the program.

Which means by the time the full Wasm file is loaded we’ve actually already compiled it, and we can run it instantly.

And we don’t get this benefit with JavaScript. In fact, Firefox can now compile your WebAssembly binaries faster than the bytes come down the wire, which is incredible.

And if we combine that with things like implicit HTTP caching…

Because the difference between a JavaScript file and a WebAssembly file is that the WebAssembly will always compile to the same machine code, regardless of the input data.

Whereas with JavaScript that’s tricky. JavaScript compilers are extremely intelligent things, but they optimize based on how your code actually runs, and so they can’t just cache the compilation.

But with WebAssembly, the first time it’s been compiled the browser can store that compiled asset in the HTTP cache, so the next time that file is requested it runs instantly. No downloading, no compilation.

Instant, very fast, reliable performance.
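Putting that together, a small loader might look like this. The URL and the fallback path are my own assumptions, a sketch rather than code from the talk:

```javascript
// Fetch, compile, and instantiate a module, compiling the bytes as
// they stream in where the engine supports it.
async function loadWasm(url, imports = {}) {
  if (WebAssembly.instantiateStreaming) {
    // Compilation overlaps with the download itself.
    return WebAssembly.instantiateStreaming(fetch(url), imports);
  }
  // Fallback: buffer the whole response first, then compile.
  const bytes = await (await fetch(url)).arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}

// Hypothetical usage with the fibonacci module from earlier:
// const { instance } = await loadWasm('/fibonacci.wasm');
// instance.exports.fibonacci(42);
```

One practical note: instantiateStreaming requires the server to send the `application/wasm` content type, which is worth checking if the streaming path mysteriously fails.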

Let’s actually look at a genuine use case.

Let’s imagine that I am GitHub, or a news organization, and I want to accept comments in the form of Markdown on my website.

I’ve loaded a Markdown parser from npm, I npm installed it, and it’s doing the job for me well.

But I profiled my application, and I saw that the hot path of my application was when this JavaScript was compiling the Markdown into HTML.

Because JavaScript wasn’t really designed for writing tokenizers and compilers, and we’re having to write a lot of code to do that. But I know that there’s a Rust library that is very efficient and solves this problem in a type-safe way, and I wanna take advantage of that. To do this, we’ve made a little demo called markdown.fastlylabs.com.

Let’s have a look at the code.

I’ve chosen this example because it shows that you don’t even need to know Rust that much to take advantage of preexisting solutions to problems that you may have.

Firstly, I’m importing a library called pulldown-cmark, which is our Markdown parser, and the HTML and parser functions from that. I’m exporting a public function called render that accepts a string and returns a string. It accepts our input string, which is the Markdown, and it’s gonna return the HTML.

I then invoke the parser.

I create a new HTML output string, and I write the HTML output to that string and then I return it.

It’s eight lines of Rust.

What you might have noticed here is that I’ve also imported something else, called wasm-bindgen, and I’ve used a Rust macro to decorate my function. Why am I doing this? It tells the compiler that it needs to also generate the JavaScript bindings to be able to invoke this WebAssembly module. Why do we even need to do that? If we remember from the previous section, WebAssembly only supports numbers and byte array types. It doesn’t actually have a string type.

You can’t just pass it the Markdown string. And so this is where we need some JS glue code: when you give it a string, it converts that string into a byte array and then passes that byte array, or rather the pointer to it in memory, to the WebAssembly module.
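To illustrate, the glue code’s job looks roughly like this. The `malloc` argument stands in for whatever allocator function the module exports, and the whole thing is my simplification of what wasm-bindgen generates:

```javascript
// Copy a JavaScript string into a module's linear memory as UTF-8
// bytes, returning the pointer and length the Wasm function wants.
function passStringToWasm(str, memory, malloc) {
  const bytes = new TextEncoder().encode(str);
  const ptr = malloc(bytes.length); // ask the module for space
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
  return [ptr, bytes.length];
}

// And the reverse: read UTF-8 bytes back out of linear memory.
function getStringFromWasm(memory, ptr, len) {
  return new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
}
```

Every string crossing the boundary needs this dance, in both directions, which is exactly the boilerplate you don’t want to write by hand.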

But that’s a really annoying developer experience for me. I have to remember how to convert a JavaScript string every time I need to use it.

And this is where wasm-bindgen comes in.

Just by doing this, it’s gonna automate writing that JS glue code for me and expose my Wasm module, and the loading of it in the browser, as just a normal JavaScript function that I can import.

And so I can continue to program at this higher level, and just pass strings and accept strings and not have to worry about how that data is getting converted to numbers and memory under the hood. This is actually gonna be replaced by a proposal called interface types, which I’m gonna discuss later.

The next tool we’re gonna use is something called wasm-pack. This is much like webpack but for Wasm.

It’s made by Ashley Williams and the Rust WebAssembly working group.

They’re doing some amazing work in this space to try and make Rust and WebAssembly more accessible to all of us as developers.

And so, when I run this wasm-pack build on that Rust source code that we just saw it will spit out a Wasm file.

So it’s compiled it for me.

I don’t even need to know how to compile Rust to Wasm. And a JavaScript file, which is that glue code that we mentioned.

It’s even nice enough to generate some TypeScript type definition files for us as well.

Back in our JavaScript application, all we need to do now is import that file that wasm-pack has exposed, get our input element, which is called markdown, and our output element, and bind an event listener to the input.

Every time someone types, we pass that value to our render function, which is exposed by the JS glue code, and that’s gonna call the Wasm parser.

Then we write it to our output element.

Don’t worry too much about this code.

It’s in my slides, you can link to it.
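For reference, the wiring boils down to something like this. It’s reconstructed from my description rather than copied from the slides, and the module path is a hypothetical wasm-pack output location:

```javascript
// Wire a live preview: every input event passes the value through
// `render` (the function the wasm-bindgen glue exposes over our Rust
// parser) and writes the resulting HTML to the output element.
function bindMarkdownPreview(input, output, render) {
  const update = () => { output.innerHTML = render(input.value); };
  input.addEventListener('input', update);
  update(); // render whatever is already in the box once
}

// In the real app, something like:
// import { render } from './pkg/markdown_parser.js'; // hypothetical path
// bindMarkdownPreview(
//   document.getElementById('markdown'),
//   document.getElementById('output'),
//   render
// );
```

The JavaScript side never knows it’s calling into Rust; it just sees a function that takes a string and returns a string.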

But I think it’s easier if we look at the relationship side by side here: I’m exporting a public function in Rust, and then I can just call render here and pass the string. This is how that JavaScript is calling into Rust via WebAssembly, but it’s gonna be compiled down to binary code, shipped, and run efficiently in the browser.

With eight lines of Rust and eight lines of JavaScript we’ve been able to replace this really hot path in my JavaScript application.

Let’s have a look at what that looks like.

The interesting thing here is that I haven’t done any video trickery.

I haven’t sped up the frames in this to make it look like it’s as fast as it is. This is literally how fast it is.

And I didn’t need to use a debounce or throttle function. I probably should have anyway, but I didn’t need to because it can accept the input that fast and return it to HTML that fast.

For me, this is mind blowing.

I think it’s pretty cool.

We know what WebAssembly is.

We’ve looked at one example of how we can use it in the browser. Let’s have a look at a couple of practical examples that people are using it for in the wild today, and some of the research that’s been coming out around this in the last year or two.

PSPDFKit is one of the most prolific PDF renderers on the Web.

Whenever you load a PDF inside your Dropbox account, or you sign a PDF on DocuSign, or you read a PDF that’s embedded within a news article in The Guardian, they’re all using this library called PSPDFKit to do that. Prior to 2016, they only offered a native version that you could embed in your native applications. But then they realized they needed a web UI. When they first shipped it, whenever you loaded up that embedded component on the web page it had to make a call back to the server.

The server had to render the PDF as an image and send that image back.

And so for a large PDF with 20 pages, that’s extremely inefficient and causes a lot of latency in the experience. But their core PDF renderer is actually written in C++. So what they could do is compile that core renderer, ship it as WebAssembly to the browser, and have the PDF rendered directly in the browser, completely eliminating all of those requests and that latency.

So I think it’s really cool.

Squoosh.app some of you may have seen.

It was released by the Chrome Developer Relations team this year at Google I/O.

It’s an image compression tool, especially relevant for the performance-minded among us in this audience. It’s actually a really cool online tool.

Think of it like Adobe Photoshop but in the browser: you can tweak the settings of an image to see how well it compresses before exporting it again. To do this, they compiled various native libraries, such as mozjpeg or Google’s libwebp, from C++. So tools that weren’t even written to run in the browser can now run in the browser via Emscripten.

Then they just allow you to change the settings. And all of this is happening in realtime, which I think is amazing.

But the point here is that WebAssembly allowed them to polyfill the Web platform for features that don’t exist.

Yes, some browsers have support for WebP, for mozjpeg, but some of them don’t.

And so I can polyfill things that don’t exist in the Web platform using WebAssembly, using native libraries that already exist out there. This is one of the key points that I want you to take away from this talk: WebAssembly’s not here to kill JavaScript anytime soon. It’s here to augment the holes and the weaknesses, the things that JavaScript wasn’t ever designed to do, or things that we’ve not got round to standardizing in the Web platform.

We can use WebAssembly to polyfill the Web, and do that securely and fast.

Part of that project, they did some research. This is for one of the functions, which was the image rotate function that you just saw in that previous video.

This is actually quite easy to do in JavaScript because with the Canvas API all you’re doing is flipping pixels in that array. They wrote that in JavaScript.

But then they wrote that same implementation in three other languages: C, AssemblyScript, and Rust. And they compiled those three to WebAssembly. And then they benchmarked these across four browsers. And the results are really, really interesting. Point number one is that some JavaScript engines, and I’ll let you decide which browsers those might be, are very, very good.

Modern JavaScript engines are amazing, and if you’re only working on memory and numbers they can optimize that code very, very well.

But some engines are much slower.

But then the really interesting thing is that, regardless of the browser, the WebAssembly versions are consistently fast across the board. What this shows is that WebAssembly can be extremely useful when you need consistent performance. Think about those hot, critical points in your applications. I’m not telling you not to use JavaScript, not at all. I’m saying think about the parts of your application that could be solved in another language much faster, and you’ll get consistent performance regardless of the browser.

eBay.

In the eBay mobile app, when you are a seller and you’re getting your listing ready, they have this feature in the native app that allows you to take a picture of a barcode. And the barcode scanner will work out the product you’re trying to sell and prepopulate all of the fields.

I think that’s a cool user experience.

I’m much more likely, as a seller, to sell something if I don’t have to type out all of the product details.

But they didn’t have this on the Web because the Web doesn’t have a native barcode scanner. Yes, you can do it with the Canvas API, taking a picture then reading all the pixels, but that’s gonna be a lot of work.

And so, what they did is…

Well, in fact, they did do that.

They have a JavaScript implementation.

Then they have this custom C++ one that they’re using in their mobile app already. And then they had an off-the-shelf open source tool that has been hardened and written for years called ZBar. They did two experiments.

The first one is that they A/B tested not having a scanner at all on the Web against having one.

And we can see that just by having the scanner, seller completion, going from a draft to a listed item, increased by 30%, which really shows that it’s a really good user experience. What’s even cooler is that they then A/B tested the JavaScript version of that barcode scanner on the Web, their own implementation, against this open source implementation.

And the open source implementation, which has obviously been contributed to by many people over the years and is very optimized, contributed to over 50% of completions.

And to me, that shows two things.

One, obviously the WebAssembly version way outperformed the JavaScript one.

But two, it shows that just using existing tools, open source ones, you don’t have to write them yourself, to solve your problems is a really, really good thing.

And WebAssembly allows us to do that now.

That was a few examples and genuine use cases in the wild. And hopefully it’s shown you a bit of the power of WebAssembly and given you some ideas of what you could do today. But I also hope that I showed you that WebAssembly isn’t gonna replace JavaScript anytime soon, but it’s here to augment its flaws.

Any parts of your application that are causing hot paths, or that you’re currently running on the server, think about them and see if you can port them to WebAssembly.

Such as encoding file formats, parallelizing of data, data visualization.

A lot of data visualization that we’re doing on the Web now is really intensive and taking a lot of data. This is a perfect use case for WebAssembly. And the list is endless.

So far we’ve looked at WebAssembly in the browser, but what about beyond the browser? It’s the title of my talk, after all.

What do I mean by this? We actually already have JavaScript beyond the browser. Node has got support for WebAssembly now because Node is based on Chromium’s open source JavaScript engine called V8, and so when V8 got support for WebAssembly so did Node, which is awesome.

That means that I can now use WebAssembly modules inside my server-side applications, and Node gets all those same benefits that the browser does, efficiently and safely.
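
To make that concrete, here’s a minimal sketch of loading a WebAssembly module in Node. The module bytes below are a hand-assembled Wasm binary exporting a single `add` function, purely for illustration; in a real project you’d load a `.wasm` file produced by a compiler rather than hard-coding bytes.

```javascript
// A minimal, hand-assembled WebAssembly module exporting
// one function: add(a: i32, b: i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate; the exact same API exists in browsers.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

Note the boundary: only numbers go in and come out, which is exactly the limitation the talk comes back to later.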

And it’s this safety factor that I wanna talk a lot about. One of the problems with running native software, especially on our servers and in our browser, is the trust guarantees of it.

It’s capable of reading and writing to memory, and that’s extremely dangerous.

Which is why we go to great efforts in our production software on our servers to make sure that our software is isolated from other users on that same server.

Browsers handle this efficiently with their sandboxing model already, because of things like tabs.

JavaScript that’s running in one tab can’t access the memory of the JavaScript running in another tab.

And that’s an extremely, extremely useful thing. This is why certain CDN vendors have started to use V8 as their engine to process billions of requests a second using that isolation model, so you get the trust guarantees that V8 and the browser sandbox model have had built in forever. You can spin up a single instance of V8 and get that isolation already, compared to, say, the virtual machines that you’d use in AWS or GCP.

The memory and CPU overhead of those means you just couldn’t do that on every request inside a CDN. I actually commend this use of the technology.

It’s an amazing, amazing use of technology. What’s interesting here is that because it’s V8 it means we can also run JavaScript and WebAssembly on the edge as well, which is awesome.

We learnt that WebAssembly’s design means that we get that memory isolation and trust encapsulation for free.

So maybe, actually, WebAssembly is really useful outside of the browser.

This is why Fastly’s taking that idea a bit further, and we’re not even using V8 as our WebAssembly runtime. What if we could just run the WebAssembly directly on the server processing millions of requests a second? And so we’ve been working on exactly this.

We’ve written a WebAssembly compiler that allows you to take code compiled to WebAssembly and run it on the edge, extremely fast. We can spin up a new instance of WebAssembly in under 35 microseconds.

And I need to always remind myself this is not milliseconds. This is microseconds.

Compare that to V8, which takes around five milliseconds to spin up and tens of megabytes of memory for each isolate.

We can do this in 34 microseconds with only kilobytes overhead of memory.

And this is possible with Lucet, which is our open source sandboxing WebAssembly compiler. It works by compiling the WebAssembly module ahead of time, so not using just-in-time compilation. We can do that once, distribute it everywhere on the network, and then instantiate the modules incredibly quickly, which is why we get those trust and speed guarantees. You can go and read about it here.

We’ve open sourced it.

Please go and use it in your own technology. What type of thing does this enable, having WebAssembly outside of the browser now? It means that, for instance, I could run a GraphQL server on the edge, taking advantage of HTTP caching within CDNs and talking to my origins.

But my origins, that same code, I could have a GraphQL server running over there. Or I could have part of the GraphQL server running on the client here, all from the same code because it’s WebAssembly everywhere.

But to do this one of the problems that you might have thought about is that WebAssembly can only operate on numbers and bytes in memory, so it can’t actually talk to the system.

It can’t make a network request.

It can’t read from the file system.

I’ve just said that I can run WebAssembly on the server. How can I make a network request inside my GraphQL application if WebAssembly can’t do that? What we need is an interface for WebAssembly to be able to talk to the hardware that it’s running on, or the runtime, to actually make network requests.

This is why Mozilla started to define what’s called the WebAssembly System Interface, or WASI. It’s a specification of a standardized interface between a WebAssembly module and the runtime it’s talking to.

That allows the compiler to say, okay, when I wanna make a network request, I’m gonna compile it down to that same interface function. And regardless of the runtime that’s running it, it can then fulfill that.

This allows some really cool things.

Imagine if I’ve got a Rust program that prints hello world.

The browser WebAssembly runtime, via the WebAssembly System Interface, can decide to print that to console.log.

But if it’s running on a CDN they could print that to a log stream.

They could stream it to your customer’s S3 bucket. But if that’s running just locally via something like Wasmtime, which is Mozilla’s open source server WebAssembly runtime, you could just print that to stdout.
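
You can see the shape of this idea with plain WebAssembly imports. What follows is a simplified analogy I’ve sketched, not the real WASI ABI: the hand-assembled module below imports a single `log` function, and it’s entirely up to the host which implementation it supplies.

```javascript
// Hand-assembled module (illustrative only): imports env.log(i32),
// exports run(), which calls log(42).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   ."log" as func type 0
  0x03, 0x02, 0x01, 0x01,                                     // run() uses type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run"
  0x0a, 0x08, 0x01, 0x06, 0x00,                               // code section
  0x41, 0x2a, 0x10, 0x00, 0x0b,                               // i32.const 42, call 0, end
]);
const module = new WebAssembly.Module(bytes);

// The same module, two different "runtimes": each host decides
// what the imported log capability actually does.
const consoleHost = new WebAssembly.Instance(module, {
  env: { log: (n) => console.log('to stdout:', n) },
});

const captured = [];
const capturingHost = new WebAssembly.Instance(module, {
  env: { log: (n) => captured.push(n) }, // e.g. a CDN streaming to a log pipeline
});

consoleHost.exports.run();   // prints "to stdout: 42"
capturingHost.exports.run(); // captured now holds [42]
```

The module bytes never change; only the host-supplied import does. WASI standardizes exactly that boundary so every runtime agrees on the interface.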

But because we’ve got this interface now, the compiler doesn’t need to worry about the runtime or know about the runtime.

And the runtime can do really cool things for this. Think about a file system.

If the WebAssembly module needs to open a file, the runtime could just create a fake file system. Again, having that security sandboxing model. So the browser could actually give a fake file system rather than giving access to the user’s real file system. The security benefits from this are incredible. We’re so excited about tools like Lucet and what standards like WASI mean for the future of the Web. Which is why at Fastly we’ve teamed up with Mozilla, Red Hat, and Intel, to form the Bytecode Alliance.

That Lucet project we’ve actually given to the alliance. You know, it’s no longer our project, because we think that WebAssembly is way more than just what Fastly’s doing, or what the browser vendors are doing.

Go and check out the Bytecode Alliance to find out more about these teams and some of the standards.

At the beginning of the talk, we discussed the problems of running untrusted native code in the browser. But what we didn’t realize at the time, when we were creating WebAssembly, is that we’ve actually created a universal format that we can use to run native code safely anywhere, and extremely fast.

It is the new portable representation of any program. And for me, this is mind blowing, whether or not that’s on the browser, the edge, or the server.

It’s really starting to blur the lines between where we think an application can run.

What’s really cool is it means that we can run the logic for our application in the place that it should be run.

Some logic could run on the client efficiently. Some of it should be done on the edge.

But some of it should be done on the server. And with WebAssembly, I have this portability now. I think this is greatly summarized by Solomon Hykes, who’s the cofounder of Docker, in which he stated that, “If Wasm and WASI existed in 2008, we wouldn’t have needed to have created Docker.” That is how important it is.

WebAssembly on the server is gonna be the future of computing.

I know that’s a bold statement, but I am starting to think that he might be right. All we needed was this standardized systems interface to do that, and that’s what specs like WASI are doing.

Literally, at this point I could just walk off the stage now because this tweet summarizes it for me.

And I hope that I’m gonna leave you with that kind of idea in your mind.

To finish, I just want to take a look at what’s left for the future of WebAssembly. When it first shipped in browsers, a lot of us, including myself, thought that that was it. I can only compile C programs that only operate on streams of data and send streams of data out, because I couldn’t talk to the world.

So it was only useful for certain use cases. But that couldn’t be further from the truth.

We’re only at the MVP stage of WebAssembly. This is why I’m so excited.

Tim was saying when he introduced me that we’re looking at the future of the Web here, like, this may take five years to get here, so I don’t think that I’m just talking fluffy nonsense. It’s gonna take a while, but we’re gonna get there, and this is how amazing it could be.

The first one is interface types. In the Markdown example, we saw that one of WebAssembly’s biggest pain points is that you can only call its exported functions and only pass data to them in the form of a byte array. But every programming language has got a different type system.

So, a JavaScript string is different to how Rust represents a string, which is different to how Go represents a string. And WebAssembly has a different understanding of how a string should be represented in memory. But it should be possible to ship a WebAssembly module and have it run anywhere, without making life harder for either the person that’s written that module or the consumer of it.

So we needed this common translation layer between say the type system in JavaScript and the type system in WebAssembly.

And this is where the interface types proposal comes from. They’re still just at proposal stage, but vendors are starting to implement it.

This cartoon, I have to point out, is made by Lin Clark. If you haven’t seen Lin’s drawings or talks, please go and watch them.

She’s an amazing communicator, and boils down some really complex problems into quite simple cartoons.

And this wouldn’t be a WebAssembly talk without one of Lin’s cartoons.

This means that we would no longer need that glue code that wasm-bindgen was creating for me, where I have to import my module like this.
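
To make that glue concrete, here’s a sketch of what you do by hand today. The hand-assembled module below is illustrative only: it exports its linear memory plus a `first_byte` function, and the JS side has to encode the string into raw bytes and pass a pointer across the boundary, which is exactly the boilerplate interface types would make unnecessary.

```javascript
// Hand-assembled module (illustrative): exports memory "mem" and
// first_byte(ptr: i32) -> i32, which reads one byte from memory.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type: (i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x05, 0x03, 0x01, 0x00, 0x01,                         // memory: min 1 page
  0x07, 0x14, 0x02, 0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00, // export "mem" (memory)
  0x0a, 0x66, 0x69, 0x72, 0x73, 0x74, 0x5f,             // export "first_byte"
  0x62, 0x79, 0x74, 0x65, 0x00, 0x00,                   //   (function 0)
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x2d, 0x00, 0x00, 0x0b,                   // local.get 0, i32.load8_u, end
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// Manual glue: encode the JS string into the module's linear memory...
const text = 'Wasm';
const view = new Uint8Array(instance.exports.mem.buffer);
new TextEncoder().encodeInto(text, view); // write the bytes at offset 0

// ...then pass a pointer (just a number) across the boundary.
console.log(instance.exports.first_byte(0)); // 87, the char code of 'W'
```

Every string, array, or object crossing the boundary needs this kind of encode-and-pointer dance today; wasm-bindgen just generates it for you.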

And with interface types we’re just gonna be able to import Wasm modules directly, as if it was an ES6 module, because the runtime knows about the interface types and how to map that string to the WebAssembly one. (mumbling) And for me this is like (mimics explosion) This opens up so many possibilities.

The moment for me is when you realize that we can have dependency trees like this: my application imports a Wasm module that I got from npm, but that itself calls out to another JavaScript or TypeScript file. And another one of my dependencies imports a Wasm module that just so happens to have been written in Rust, while the other one was written in C++, but as the consumer, I didn’t even need to know that.

It’s just happening, and it’s just importing as if it was JavaScript. But people, those developers of those specific modules have said that, okay, for this one bit of my code I need it to run efficiently and take advantage of that optimization so that’s what I’m gonna do.

But you as the consumer don’t even need to know that that’s happening. And this could actually be happening in some of your imports already and you don’t even know that.

In fact, it’s happening for a bad reason on the Web: people building cryptominers into adverts that are running on your devices now, all in WebAssembly.

But even those negative uses show you how powerful this is.

Runtimes like Node or Python’s CPython have actually been doing something like this for a while with native extensions. Who here uses node-sass? Right, not that many, there’s like four. (chuckles) Is there any more? Node-sass is written in C, and it’s got C bindings, so every time you npm install it you might have seen a program called node-gyp spin up. What that’s doing is actually compiling the C for your instruction set, your architecture, your x86.

But we wouldn’t have to do this with WebAssembly anymore because all we’d do as a developer is just ship the WebAssembly module and that could just run anywhere regardless of what’s consuming it.

It also means that, say, a data science visualization tool that’s written for Python could now be reused in Ruby or Go, because those runtimes can just import these modules. So it is this new standard, portable way of running fast code.

Node’s taken the first steps towards this.

This is literally about two weeks ago.

They’re shipping WASI support in Node.

This, combined with V8 getting interface types, means we’re gonna have that mind-blowing eureka moment where we can just start interoperating between any WebAssembly module and JavaScript. Lastly, I don’t have enough time to go into all of the other upcoming proposals, but here are some of the things that are really exciting me. First one is threads.

Modern CPUs have threads.

They can run multiple operations across cores at once. And so, for us to take true advantage of the speed benefits of WebAssembly in the browser we need to have threading support.

The problem with threading support, as it’s implemented in browser runtimes anyway, not the server runtimes, is that it needs shared array buffers, which we had to turn off because of Spectre. Chrome has fortunately been the first to turn them back on recently, and at the Chrome Developer Summit last week they gave a talk about the new experimental support for WebAssembly threads. One of the amazing stats that came out of that is that the Google Earth team compiled Google Earth using threaded WebAssembly, and they were able to increase their frame rate by 2x just by turning on thread support. Which meant that their dropped frames fell by 50%, which is amazing.

Next we need SIMD.

Modern architectures have in their cores something called SIMD, single instruction, multiple data, which again allows us to parallelize certain operations. It’s really useful for image or video transformations, where you’re doing the same operation on lots of numbers and you wanna do it many times.

But these two things combined, threads and SIMD, will essentially allow us to go to speeds that were previously impossible within a browser, and that really excites me.

The most important one is garbage collection. We need to be able to integrate with the browser’s garbage collector, because when that happens we can compile languages like JavaScript to WebAssembly itself, because JavaScript is a garbage-collected language. The same with Go, the same with Ruby.

And that’s gonna open up a world of possibilities that I’ll talk about in just a second.

And lastly, we need better debugging support. Again, Chrome, last week, announced that you can now have much richer debugging support in the browser for WebAssembly, so you can get stack traces out of your Rust code that you’ve shipped to WebAssembly.

Lastly, I just wanted to leave you with this idea of what, to me, is the exciting moment: when we have things like threads and garbage collection, we can take bits of React, say the virtual DOM diffing algorithm, rewrite just that bit in a language that takes advantage of multithreading, such as Rust, compile it to a WebAssembly module, and load it inside a web worker so the computation is happening off the main thread. But you as a consumer just import React.js as if it was your JavaScript module.

And to me, when we can do this with frameworks we’re gonna have a dramatic increase of performance for the Web globally without any of us really even noticing.

And that, for me, is why I’m so, so excited about the possibilities that WebAssembly in the browser holds.

Let’s just recap before Tim kicks me off the stage. Here are some takeaways from all of this. Hopefully you’ve learnt from this talk that WebAssembly isn’t gonna kill JS.

It’s just here to augment certain bits of the Web platform. It allows us to extend the Web platform using solutions that have been battle hardened across multiple languages.

We’re no longer gonna have that single-language ecosystem, which is amazing.

And it’s not just for the browser.

We can run it on the edge, we can run it on the server. It is the new portability.

And it allows us to move this logic as close as possible to where the user needs it.

What you can do today is please go and try it out. If you try out languages like Rust, as a JavaScript developer you’ll actually find them quite ergonomic and friendly.

Profile your applications.

Identify those hot paths where you need that optimization, and consider porting that bit of your application to WebAssembly, or consider finding an off-the-shelf tool that’s already solved that problem for you in a different language.

And please, just share your findings.

Tell us what you’re doing.

In fact, can I just have, raise a hand, who here is actually using WebAssembly today in the browser? Wow, okay.

You people I really wanna talk to.

The rest of you, please go and try it out.

WebAssembly is the new standard for portability for the Web and beyond.

And I think the future is really bright, and I’m really excited by it.

Thank you.

(audience applauding) My slides, transcripts of this talk, slides, further reading, everything I’ve referenced, is up there if Notist has done its thing and magically loaded it.

Go and check it out.

– [Tim] We do have time for a couple questions, so if you’re cool…

There is a massively long list, by the way, of questions. So, for folks, we will not get to ask them all, so find Patrick and make him tell you the answers later. You’re good for that, right? – Yeah, please. – I think we all understand that WebAssembly is gonna kill JavaScript.

That was clear. – (chuckles) No.

(both chuckling) – No, but I think, like…

This is gonna be interesting to me, is figuring out where to use WebAssembly versus JavaScript. Should people be…

You mentioned hot parts and stuff like that. Is it basically anything that’s not DOM manipulation? – Well, yeah, no.

The other thing, I didn’t talk enough about what JavaScript’s really good for, and you’ve just alluded to one of those, which is DOM manipulation and talking to the DOM. There are certain things, and just the language. Programming in such a high-level language is really cool, and it’s really productive, which is why all of us do that.

So I don’t think that you should be rewriting all of your applications in another language. But just, I can’t stress this enough of profiling your applications and finding those hot paths.

And then thinking, is there another tool that I can do this more efficiently in, or could that benefit from parallelization, so could I run it with multiple threads, or SIMD? Yes, we can do that with web workers today, but not enough people are using…

Who here is using web workers in their application today? Cool, so like, not enough.

And again, this is something…

In fact, the talk that I was going to give instead of this was also a talk about how we can take advantage of web workers.

The more you push off the main thread, the better. I wanna call out, again, to Das Surma.

He did an amazing talk last week at Chrome Developer Summit all about moving logic off the main thread, and he goes into further depth about these kind of things of what should I parallelize.

JavaScript’s really good for some things.

Some things, Rust or other languages are better for. – [Tim] Does Wasm run off the main thread, or does it run on the main thread in browsers today? How does that, the execution actual bit…

– [Patrick] Either or, depending on how you instantiate it. – Okay.

How much does WebAssembly protect you from inefficient code? If I were to write something in JavaScript just terribly, horrible performance, let’s just assume, hypothetically.

And then I…

(laughing) I compile it into WebAssembly.

How does that process work? Does WebAssembly protect, to some extent, from the inefficiencies in the JavaScript, or is it possible to create a Wasm that is just bad? – It’s still possible to very much write bad code. I could write while true and leave an infinite loop that’s running, and even on the main thread, doing this. And that’s obviously gonna be bad.

But what it means is it’s much less likely that you’ll get what’s known as a deoptimization, or a deop. It’s really easy to move off the fast path very quickly with JavaScript by accident, and you might not even know this.

I was actually talking to Stoyan at Facebook last night about this, that he’s trying…

I don’t even know if I’m allowed to talk about this. Am I allowed to talk about your tool? – Yeah. – Yes?

He’s writing some tooling at Facebook that analyzes the JavaScript in CI and then tells their developers, via CI, that you have managed to cause a JavaScript deop with the commit that you’ve just done.

And we need more tooling like that.

Because in JavaScript it’s so easy to go off the fast path. To answer your question, it’s not as easy to do that with other languages or with WebAssembly because we can do that compilation ahead of time. – A little bit of protection at least, okay. And then last question before we go to lunch then. On the Wasm side of things, is there…

How do we monitor the performance of the actual WebAssembly script itself? Is there any tooling that’s being built up around that? Is there anything within the standard that exposes a performance API through the WebAssembly stuff? – No, and that, I didn’t go into enough.

That’s that debugging point in the last section, one of the proposals that are coming.

It’s something that’s really, it’s immature. It’s really immature, and we need to create more tooling around it. Firefox and Chrome are doing some amazing work to expose that.

There are things like source maps that are embedded inside WebAssembly via this thing called DWARF, and we’re gonna be able to visualize that if you’ve compiled the module with that type of data. But no, we need to improve the…

– Like you said, early days still, so that’s all… – But please, go and use it, please.

– Thank you, that was fantastic.

– Thanks, Tim.

(audience applauding)