- Thank you, Tim.
Firstly, thank you for that introduction, Tim. And thank you to the organizers for inviting me to be here in this wonderful city and share this stage with so many other wonderful humans. I'm honored and privileged to be here.
My name's Patrick Hamann.
As Tim said, I'm a principal engineer at Fastly. And you can catch me on the Twitters there, @patrickhamann. Although, I might stress I've actually taken a sabbatical off Twitter for the last year.
I encourage you all to do it.
It's very healthy.
But I'm back, and I'm gonna start jumping back in on the conversation there.
But please come and talk to me in the break or at the party later, about anything I talk about today, or just tell me what you're up to.
I really love to hear what people are doing in this space. I work at Fastly.
We're an edge cloud platform that the world's largest brands use to deliver their content as fast as possible. And at Fastly I work in a team called the office of the CTO. It's an emerging technologies group.
We're inside the engineering organization.
And I want to talk to you about one of my newly found passions as part of this team that we've been looking at for the last two years now. And that's WebAssembly.
I'm gonna discuss why we even need WebAssembly, what it is, and how you can use it today.
We'll look at some practical use cases that people are already doing and the benefits, the speed, and the performance benefits you'll get from this. And finally we'll look at how WebAssembly can be used actually outside of the browser, which is an amazing thing, and what the future holds for it, especially with the new standards and specifications that are coming.
Before we discuss what WebAssembly is, we need to take a step back, look at the history, and see why it exists.
And so, let's talk about the Web platform that you all know and love.
For the last 20 years, the Web platform has largely consisted of these three technologies.
Initially we had HTML that was created by researchers at CERN as a way to share information and documents by way of hyperlink.
Occasionally we've had romantic flings with Adobe Flash and Java applets, which tried to change how we program on the Web, but they didn't last, and there are reasons why that happened.
But these three technologies have remained the same. It's truly amazing.
Later, with the birth of Ajax, we could request fragments of the documents, even just data payloads, JSON payloads, instead of our servers stitching together whole pages.
We're doing more and more compute now on the client. Some of you may think that's a good thing.
Some of you may think that's a bad thing.
But we're definitely seeing this paradigm change, this shift in paradigm.
What I mean by a high-level programming language is one that can't directly access memory, whereas a low-level language is one that can directly affect the underlying hardware and the physical memory that we write to.
Now, don't get me wrong.
It's an extremely large ecosystem.
In fact, it's the largest programming language ecosystem in the world. But we can't benefit from tools that have been written to solve problems in other languages that are well hardened and have been around for many, many years.
On the Web, you don't have the choice to drop down to a lower-level language, even if you want to.
So if I'm programming on the server, I could just, say, use a language like Ruby. It's a high-level language, but it's productive. It allows me and my developers to get things done quickly.
But when we need to drop down to a lower-level language like C or C++, when the problem is performance-critical and needs to run as fast as possible, we can.
We have that freedom on the server.
But on the Web platform, we don't have that freedom to drop down to a lower level language, if and when we need to, for certain performance-critical operations. So what can we do about this? We could implement the C++ runtime or the Go runtime in web browsers.
But this would be, one, a security nightmare. Those languages have access to memory.
I don't want to allow any third party that I download inside my browser, or my customers download, to access my memory directly.
That would be a security nightmare.
It'd be a maintenance nightmare.
I'd have to remember that, oh, Chrome 74 ships with the Go 1.4 runtime. Again, that matrix of possibilities would just be too much. It just doesn't scale.
And so, this is where a team at Mozilla started to think about how we can solve this problem. And they created a tool called Emscripten.
Many of you might have heard of it.
And this is where WebAssembly comes in.
But that output still had to be delivered as plain text to the browser, then parsed and interpreted, and that compute still takes a lot of time. So in 2015, the WebAssembly Working Group was formed and the whitepaper was released.
And it came out of this work that Mozilla did, and asm.js did, and Google did, on a program called NaCl, which is Native Client. And it has all the things that you'd expect from a standard on the Web platform now.
It has a working group, it has standards.
You can go and look at them on W3C, et cetera. Hopefully now we've got an understanding of how WebAssembly came to exist.
Let's look at what it actually is.
And many people just think it's C++ for the browser, and it definitely isn't just that; it's so much more. Let's talk about what it isn't first.
It's not a programming language.
You don't actually write it by hand.
It's a compilation target.
So you write other host languages that compile to that. But I like to think of it as just another tool in our toolbox that we can use to solve certain problems on the Web platform and extend the Web platform beyond the capabilities that it had before. So it's a compilation target.
It's for other languages to compile to.
You don't write it yourself.
You write a host language, say Rust, or Go, or C++, and then you compile that language to it.
If we were to look at the WebAssembly website this is literally what it says.
It says, "WebAssembly, abbreviated Wasm," which you might hear me say throughout the rest of this talk, "is a binary instruction format "for a stack-based virtual machine." Right now you're probably thinking, or you might be thinking, "W-T-F, Patrick, what?" And I definitely did the first time I started learning about it.
Let's pick that apart.
A binary instruction format.
This means it is a set of instructions for a machine to process that has been encoded in a binary format.
It's encoded in binary.
Especially when combined with techniques like streaming compilation, where we can compile it as the bytes come down the wire, and better-than-gzip compression, you've now got a binary instruction format that is designed to run extremely fast within web browsers.
It's a stack-based virtual machine.
There are a couple of different ways you can represent a machine like a CPU. One of those is stack-based, and the other is register-based.
The CPU in your device right now is register-based. But you don't really have to worry about this. What's more important is that it's a virtual machine. It's a processor that doesn't actually exist. We're not writing the WebAssembly to run on a specific processor, an x86 or ARM, some specific hardware.
We just need to compile it for this virtual machine, much like the JVM, the Java Virtual Machine, if you know about that.
And the cool thing about this is that the compilers, the languages, Rust or C++, don't have to know about how the hardware works. They all can target this one virtual machine. And that makes it really portable, which is one of its greatest benefits.
WebAssembly's spec is pretty simple.
It's like this.
This is on the W3C website.
It only knows about numbers and bytes: integers, floats, and numbers in memory.
That is the type system.
It is that small.
And it also has operations and instructions that you can operate on those types, so like add, subtract, multiply.
But the really interesting thing is that actually a lot of very complex programs written in these low-level languages can be reduced down to just this set of numbers in memory, and then moving those numbers around in memory. Most software that we write actually boils down to that. So when you write some code in a low-level language, normally the compiler will compile it to something that's known as an IR, an intermediate representation.
And then that's the same regardless of the system architecture that you're compiling to. But then all these compilers then have to have backends for each target that you're targeting.
So ARM for mobile devices, x86, which is Intel's assembly instruction set that many of the laptops here in the audience are gonna be using. And so, when you do that, as a developer in one of those languages, you've gotta compile all your source code to six different instruction sets.
When you go and download a binary, you have to go, "Ah, I need to find the Darwin x86 one "for my MacBook Pro that I'm running here." But with WebAssembly you don't need to worry about that. The code just gets compiled to the WebAssembly's virtual machine instructions, which is normally shipped around in a .wasm binary file. And so the runtime then takes care of how that is converted to assembly on the machines that you're running.
And because the WebAssembly instruction set has got that really small API that we just saw, it's really easy for host languages to target WebAssembly. So some of the compilers, it only took them months to integrate WebAssembly. And we're gonna see that adoption increase dramatically over the next five years.
For it to succeed, WebAssembly had a few guiding principles to ensure that we solved the problems that it was set out to do that I outlined in that first section.
Firstly, it's gotta be a compilation target, we've just discussed what that means, so other languages' toolchains can target it. It has to be fast to execute, and compact.
And this is really, really important when we're talking about the Web platform.
When we're shipping native code they're normally quite large files.
In fact, a lot of programs are already preinstalled on your computers. But on the Web we don't have that benefit.
We can't benefit from any precompiled programs, and so we've gotta recompile them every time in the browser. So it has to be as compact and as fast to execute as possible if we're gonna be able to ship native applications to the browser.
And it's got a linear memory model.
And this is what I've discussed about how Emscripten works: you supply the program with a specific slice of memory, and that's the only thing it's allowed to operate on. And this is one of the most fundamental points about WebAssembly: it has a memory sandbox, so the program, when you instantiate it, is only allowed to use this array of memory, and it starts here, and it ends here.
And so it's physically impossible for the program to go outside of that.
And this makes it really, really powerful, and also makes it really easy now for programs to be compiled to run safely.
And that's the most important thing when you're running untrusted native code in your browser.
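To make that sandbox concrete, here's a minimal sketch (not from the talk) using the WebAssembly JavaScript API. A module's linear memory is just a bounded, growable buffer; the 64 KiB page size comes from the spec:

```javascript
// The linear memory sandbox: a WebAssembly.Memory is a fixed-size, growable
// ArrayBuffer. A module can only read and write inside this buffer; there is
// no way for it to address anything outside.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 2 }); // 64 KiB pages

console.log(memory.buffer.byteLength); // 65536 (1 page)

// The host (our JS) can view and edit that memory as plain bytes.
const view = new Uint8Array(memory.buffer);
view[0] = 42;

// Growing is explicit, preserves contents, and is capped at the declared maximum.
memory.grow(1);
console.log(memory.buffer.byteLength); // 131072 (2 pages)
```

Note that after `grow()` the old `ArrayBuffer` is detached, so views like `view` must be recreated from the new `memory.buffer`.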
To understand what's happening here I think it's actually better to just visualize it, the relationship between the source language, WebAssembly, and then the instructions that they then actually get converted to by the browser's runtime.
Here we've got a simple example in a tool called WasmExplorer, made by Michael Bebenita, who's on the WebAssembly team at Mozilla. You can check it out there.
This is a really simple C program.
It adds A to B.
A and B are integers.
You can see here, this is the stack machine that WebAssembly has created for it.
We've got an int, i32, and we're calling the add instruction on it. And then you can see directly how this relates to x86 assembly in the browser. You can see that add operation.
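To make that concrete, here's a sketch (not from the talk) of loading exactly that kind of two-integer add module from JavaScript. The byte array below is the binary encoding of a tiny Wasm module that exports a single `add` function built from those same `local.get` and `i32.add` instructions:

```javascript
// The binary encoding of a minimal Wasm module exporting add(a, b) -> a + b.
// The sections are: magic number + version, the type section declaring
// (i32, i32) -> i32, the function and export sections, and the code section
// containing local.get 0, local.get 1, i32.add.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = function 0
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // the add body
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```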
I've also lied to you.
I said that WebAssembly is a binary format, and this is text.
How is that even possible? This is called WAT, the WebAssembly Text format. It's the text representation of the binary machine code, just like assembly is for native code.
And so you can see, for the first time ever, we now, if we want to, can have direct control on the operations that a program in our browser is running on the hardware. Which, for me, is amazing.
You might be asking what the support is like for all of this.
It's across the entire Web platform now, so you can use it today. It's at over 88%, as we can see over there.
To summarize, WebAssembly is a new language for the Web, compiled from other languages, that gives us native speed, consistency, and reliability in the browser.
And I'm gonna talk about that reliability a bit later. And it's the first time ever that we have a portable representation of native programs that we can ship anywhere and run safely in the browser. And for me, this is amazing.
We now have a better understanding of why WebAssembly came to exist, and maybe an understanding of what it actually is. Let's look at how we can use it.
Here we're loading a module that exports a function called fibonacci, and I'm passing 42 into it.
This is the preferred way of doing it now.
Before, you had to synchronously load it, but here we can do the instantiateStreaming. It uses the fetch API, which returns a promise. And the cool thing about this, with instantiateStreaming it means that as the bytes are coming off the wire we can start to compile that program.
Which means by the time the full Wasm file is loaded we've actually already compiled it and we can then load it instantly.
And we can combine that with things like implicit HTTP caching. With WebAssembly, the first time I've compiled it, the browser can store that compiled asset in the HTTP cache, so the next time that file is requested it runs instantly. No downloading, no compilation.
Instant, very fast, reliable performance.
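The pattern described above can be sketched like this; note that `/fibonacci.wasm` and the `fibonacci` export are hypothetical names standing in for whatever module you've compiled:

```javascript
// Streaming vs. buffered loading of a Wasm module.
async function instantiateWasm(url, imports = {}) {
  if (typeof WebAssembly.instantiateStreaming === 'function') {
    // Streaming path: compilation overlaps with the download. The server must
    // send the application/wasm content type for this to work.
    return WebAssembly.instantiateStreaming(fetch(url), imports);
  }
  // Fallback for runtimes without streaming support: buffer, then compile.
  const response = await fetch(url);
  const bytes = await response.arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}

// Usage would look like:
//   const { instance } = await instantiateWasm('/fibonacci.wasm');
//   console.log(instance.exports.fibonacci(42));
```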
Let's actually look at a genuine use case.
Let's imagine that I am GitHub, or a news organization, and I want to accept comments in the form of Markdown on my website.
I've loaded a Markdown parser from npm, I npm installed it, and it's doing the job for me well.
Let's have a look at the code.
I've chosen this example because it shows that you don't even need to know Rust that much to take advantage of preexisting solutions to problems that you may have.
Firstly, I'm importing a library called pulldown-cmark, which is our Markdown parser, and the HTML and parser functions from that. I'm exporting a public function called render that accepts a string and returns a string. It accepts our input string, which is the Markdown, and it's gonna return the HTML.
I then invoke the parser.
I create a new HTML output string, and I write the HTML output to that string and then I return it.
It's eight lines of Rust.
But WebAssembly only understands numbers and bytes in memory, so you can't just pass it the Markdown string. And so this is where we need to create some JS glue code that, when you give it a string, converts that string into a byte array and then passes that byte array, or rather a pointer to it in memory, to the WebAssembly module.
And this is where wasm-bindgen comes in.
And so I can continue to program at this higher level, and just pass strings and accept strings and not have to worry about how that data is getting converted to numbers and memory under the hood. This is actually gonna be replaced by a proposal called interface types, which I'm gonna discuss later.
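To see what that glue code is doing under the hood, here's a hand-rolled sketch. This is not wasm-bindgen's actual generated output, and the function names are mine; it just shows how a string crosses the boundary as a pointer-and-length pair in linear memory:

```javascript
// Strings can't cross the Wasm boundary directly; glue code copies their
// UTF-8 bytes into linear memory and hands the module a (pointer, length)
// pair instead.
const wasmMemory = new WebAssembly.Memory({ initial: 1 });

// Host -> module: encode the JS string and copy it into linear memory.
function passStringToWasm(str, offset = 0) {
  const bytes = new TextEncoder().encode(str);
  new Uint8Array(wasmMemory.buffer).set(bytes, offset);
  return { ptr: offset, len: bytes.length }; // what an exported function would receive
}

// Module -> host: read (ptr, len) back out of memory and decode it.
function getStringFromWasm(ptr, len) {
  return new TextDecoder().decode(new Uint8Array(wasmMemory.buffer, ptr, len));
}

const { ptr, len } = passStringToWasm('# Hello');
console.log(getStringFromWasm(ptr, len)); // "# Hello"
```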
The next tool we're gonna use is something called wasm-pack. This is much like webpack but for Wasm.
It's made by Ashley Williams and the Rust WebAssembly working group.
They're doing some amazing work in this space to try and make Rust and WebAssembly more accessible to all of us as developers.
And so, when I run wasm-pack build on that Rust source code that we just saw, it will spit out a Wasm file.
So it's compiled it for me.
It's even nice enough to export some TypeScript type definition files for us as well.
Every time someone types, we pass that value to our render function, which is exposed by the JS glue code, and that's gonna call the Wasm parser.
Then we write it to our output element.
Don't worry too much about this code.
It's in my slides, you can link to it.
Let's have a look at what that looks like.
The interesting thing here is that I haven't done any video trickery.
I haven't sped up the frames in this to make it look like it's as fast as it is. This is literally how fast it is.
And I didn't need to use a debounce or throttle function. I probably should have anyway, but I didn't need to because it can accept the input that fast and return it to HTML that fast.
For me, this is mind blowing.
I think it's pretty cool.
We know what WebAssembly is.
We look at one example of how we can use it in the browser. Let's have a look at a couple of practical examples that people are using it for in the wild today, and some of the research that's been coming out around this in the last year or two.
PSPDFKit is one of the most prolific PDF renderers on the Web.
Whenever you load a PDF inside your Dropbox account, or you sign a PDF on DocuSign, or you read a PDF that's embedded within a news article in The Guardian, they're all using this library called PSPDFKit to do that. Previously, prior to 2016, they only offered a native version that you could embed in your native applications to do this. But then they realized they needed a web UI. But when they first shipped it, what they did is that when you loaded up that embedded component on the web page it had to make a call back to the server.
The server had to render the PDF as an image, send that image back.
And so for a large PDF with 20 pages, that's extremely inefficient and causes a lot of latency for the experience. But their core PDF renderer is actually written in C++. So what they could do is compile that core renderer, ship it as WebAssembly to the browser, and render the PDF directly in the browser, completely eliminating all of those requests and that latency.
So I think it's really cool.
Squoosh.app some of you may have seen.
It was released by the Chrome Developer Relations team this year at Google I/O.
It's an image compression tool, especially relevant for the performance-minded among us in this audience. It's actually a really cool online tool.
Think of it like Adobe Photoshop but in the browser, and you can tweak settings of an image to see how well it compresses before exporting it again. To do this, they compiled various native libraries such as mozjpeg or Google's libwebp, from C++. So tools that weren't even written to run in the browser now can run in the browser via Emscripten.
Then they just allow you to change the settings. And all of this is happening in realtime, which I think is amazing.
But the point here is that WebAssembly allowed them to polyfill the Web platform for features that don't exist.
Yes, some browsers have support for WebP, for mozjpeg, but some of them don't.
We can use WebAssembly to polyfill the Web, but do it securely and fast.
As part of that project, they did some research. This is for one of the functions, the image rotate function that you just saw in that previous video.
But some engines are much slower.
In the eBay mobile app, when you are a seller and you're getting your listing ready, they have this feature in the native app that allows you to take a picture of a barcode. And the barcode scanner will work out the product you're trying to sell and prepopulate all of the fields.
I think that's a cool user experience.
I'm much more likely, as a seller, to sell something if I don't have to type out all of the product details.
But they didn't have this on the Web because the Web doesn't have a native barcode scanner. Yes, you can do it with the Canvas API, taking a picture then reading all the pixels, but that's gonna be a lot of work.
And so, what they did is, in fact, exactly that.
They also had their custom C++ implementation that they're using in their mobile app already, and an off-the-shelf open source tool called ZBar, which had been hardened over years of development. They did two experiments.
The first one is that they A/B tested not having a scanner at all on the Web, versus having a scanner.
And the open source implementation that has obviously been contributed to by many people over the years, and is very optimized, contributed to over 50% of completions.
And to me, that shows two things.
And two, it shows that using existing open source tools to solve your problems, rather than writing them yourself, is a really, really good thing.
And WebAssembly allows us to do that now.
Any part of your application that you're currently running on the server that has hot paths, think about it and see if you can port it to WebAssembly.
Such as encoding file formats, parallelizing of data, data visualization.
A lot of data visualization that we're doing on the Web now is really intensive and taking a lot of data. This is a perfect use case for WebAssembly. And the list is endless.
So far we've looked at WebAssembly in the browser, but what about beyond the browser? It's the title of my talk, after all.
That means that I can now use WebAssembly modules inside my server-side applications, and Node can get all of those same benefits that the browser was getting, efficiently and safely.
And it's this safety factor that I wanna talk a lot about. One of the problems with running native software, especially on our servers and in our browser, is the trust guarantees of it.
It's capable of reading and writing to memory, and that's extremely dangerous.
Which is why we go to great efforts in our production software on our servers to make sure that our software is isolated from other users on that same server.
Browsers handle this efficiently with their sandboxing model already, because of things like tabs.
And that's an extremely, extremely useful thing. This is why certain CDN vendors have started to use V8 as their engine to process billions of requests a second using that isolation model, so you get the trust guarantees that V8 and the browser sandbox model have always had. You can spin up a single instance of V8 and have that isolation model already, compared to, say, the virtual machines that you use on AWS or GCP.
The memory and CPU overhead of those means you just couldn't do it on every request inside a CDN. I actually commend this use of the technology.
We learnt that WebAssembly's design means that we get that memory isolation and trust encapsulation for free.
So maybe, actually, WebAssembly is really useful outside of the browser.
This is why Fastly's taking that idea a bit further, and we're not even using V8 as our WebAssembly runtime. What if we could just run the WebAssembly directly on the server processing millions of requests a second? And so we've been working on exactly this.
We've written a WebAssembly compiler that allows you to compile any code to WebAssembly and run that on the edge, but extremely fast. We can spin up a new instance of WebAssembly in under 35 microseconds.
And I need to always remind myself this is not milliseconds. This is microseconds.
Compared to that, V8 takes around five milliseconds to spin up, and tens of megabytes of memory for each isolate.
We can do this in 34 microseconds with only kilobytes of memory overhead.
And this is possible with Lucet, which is our open source sandboxing WebAssembly compiler. It works by compiling the WebAssembly module ahead of time, not using just-in-time compilation, so we can do that once, distribute it everywhere on the network, and then instantiate the modules incredibly quickly, which is why we get those trust and speed guarantees. You can go and read about it here.
We've open sourced it.
Please go and use it in your own technology. What type of thing does this enable, having WebAssembly outside of the browser now? It means that, for instance, I could run a GraphQL server on the edge, taking advantage of HTTP caching within CDNs and talking to my origins.
But my origins, that same code, I could have a GraphQL server running over there. Or I could have part of the GraphQL server running on the client here, all from the same code because it's WebAssembly everywhere.
But to do this one of the problems that you might have thought about is that WebAssembly can only operate on numbers and bytes in memory, so it can't actually talk to the system.
It can't make a network request.
It can't read from the file system.
I've just said that I can run WebAssembly on the server. How can I make a network request inside my GraphQL application if WebAssembly can't do that? What we need is an interface for WebAssembly to be able to talk to the hardware that it's running on, or the runtime, to actually make network requests.
This is why Mozilla started to define what's called the WebAssembly System Interface. It's a specification of a standardized interface between the WebAssembly module and the runtime it's talking to.
That allows the compiler to say: okay, when I wanna make a network request, compile it down to that same interface function. And regardless of the runtime that's running it, it can handle it.
This allows some really cool things.
Imagine if I've got a Rust program that prints hello world.
The browser WebAssembly runtime, via the WebAssembly System Interface, can decide to print that to console.log.
But if it's running on a CDN they could print that to a log stream.
They could stream it to your customer's S3 bucket. But if that's running just locally via something like Wasmtime, which is Mozilla's open source server WebAssembly runtime, you could just print that to stdout.
But because we've got this interface now, the compiler doesn't need to worry about the runtime or know about the runtime.
And the runtime can do really cool things for this. Think about a file system.
If the WebAssembly module needs to open a file, the runtime could just create a fake file system. Again, having that security sandboxing model. So the browser could actually give a fake file system rather than giving access to the user's real file system. The security benefits from this are incredible. We're so excited about tools like Lucet and what standards like WASI mean for the future of the Web. Which is why at Fastly we've teamed up with Mozilla, Red Hat, and Intel, to form the Bytecode Alliance.
We've actually given that Lucet project to the alliance. It's no longer just our project, because we think WebAssembly is way more than just what Fastly's doing, or what the browser vendors are doing.
Go and check out the Bytecode Alliance to find out about more of these teams and some of the standards.
At the beginning of the talk, we discussed the problems of running untrusted native code in the browser. But what we didn't realize at the time when we were creating WebAssembly is that we've actually created a universal format that we can use to run native code safely anywhere, and extremely fast.
It is the new portable representation of any program. And for me, this is mind blowing, whether or not that's on the browser, the edge, or the server.
It's really starting to blur the lines of where we think an application can run.
And what's really cool is it means that we can run the logic for our application in the place that it should be run.
Some logic could run on the client efficiently. Some of it should be done on the edge.
But some of it should be done on the server. And with WebAssembly, I have this portability now. I think this is greatly summarized by Solomon Hykes, the cofounder of Docker, who stated: "If Wasm and WASI existed in 2008, we wouldn't have needed to create Docker." That is how important it is.
WebAssembly on the server is gonna be the future of computing.
I know that's a bold statement, but I am starting to think that he might be right. All we needed was this standardized systems interface to do that, and that's what specs like WASI are doing.
Literally, at this point I could just walk off the stage now because this tweet summarizes it for me.
And I hope that I'm gonna leave you with that kind of idea in your mind.
To finish, I just want to take a look at what's left for the future of WebAssembly. When it first shipped in browsers, a lot of us, including myself, thought that was it: you could only compile C programs that take streams of data in and send streams of data out, because they couldn't talk to the world.
So it was only useful for certain use cases. But that couldn't be further from the truth.
We're only at the MVP stage of WebAssembly. This is why I'm so excited.
Tim was saying when he introduced me that we're looking at the future of the Web here, and this may take five years to arrive, so I don't think that I'm just talking fluffy nonsense. It's gonna take a while, but we're gonna get there, and this is how amazing it could be.
The first one is interface types. In the Markdown example, we saw that one of WebAssembly's biggest pain points is that you can only call its exported functions and only pass data to them in the form of a byte array. But every programming language has got a different type system.
And this is where the interface types proposal comes from. They're still just at proposal stage, but vendors are starting to implement it.
This cartoon, I have to point out, is made by Lin Clark. If you haven't seen Lin's drawings or talks, please go and watch them.
She's an amazing communicator, and boils down some really complex problems into quite simple cartoons.
And this wouldn't be a WebAssembly talk without one of Lin's cartoons.
This means that we would no longer need that glue code that wasm-bindgen was creating for me that I have to import my module like this.
And with interface types, we're just gonna be able to import Wasm modules directly, as if they were ES6 modules, because the runtime knows from the interface types how to map that string into WebAssembly memory. And for me, this opens up so many possibilities.
But you as the consumer don't even need to know that that's happening. And this could actually be happening in some of your imports already and you don't even know that.
In fact, it's happening for a bad reason on the Web of people trying to build cryptominers into adverts that are running on your devices now, all in WebAssembly.
But it shows how capable the technology is, even when it's put to bad use.
Runtimes like Node or Python's CPython have actually been doing this for a while already. Who here uses node-sass? Right, not that many, there's like four. (chuckles) Is there any more? Node-sass is written in C, with C bindings, and every time you npm install it you might have seen a program called node-gyp run. What that's doing is compiling the C for your instruction set, your architecture, your x86.
But we wouldn't have to do this with WebAssembly anymore because all we'd do as a developer is just ship the WebAssembly module and that could just run anywhere regardless of what's consuming it.
It also means, like, a data science visualization tool that's using Python, that same tool could now be reused in Ruby, or Go, and these runtimes can just import these modules. So it is this new standard portability way of running fast code.
Node's taken the first steps towards this.
This is literally about two weeks ago.
They're shipping WASI support in Node.
Modern CPUs have threads.
They can run multiple operations across cores at once. And so, for us to take true advantage of the speed benefits of WebAssembly in the browser we need to have threading support.
The problem with threading support, at least how it's implemented in browser runtimes, is that it needs SharedArrayBuffer, which we had to turn off because of Spectre. Chrome has fortunately been the first to turn it back on recently, and at the Chrome Developer Summit last week they gave a talk showing the new experimental support for WebAssembly threads. One of the amazing stats that came out of that is that the Google Earth team compiled Google Earth using threaded WebAssembly and were able to increase their frame rate by 2x, just by turning on thread support. Which meant that their dropped frame rate dropped by 50%, which is amazing.
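The primitive underneath Wasm threads can be sketched in a few lines (this is an illustration, not the Google Earth code): a memory created with `shared: true` is backed by a SharedArrayBuffer, which is what lets workers and Wasm instances see the same bytes, with Atomics providing safe concurrent access:

```javascript
// Shared linear memory: the building block of WebAssembly threads.
// shared: true requires a declared maximum, and makes the backing buffer a
// SharedArrayBuffer that can be posted to Web Workers.
const shared = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
console.log(shared.buffer instanceof SharedArrayBuffer); // true

// Atomics gives us race-free operations on that shared memory.
const counter = new Int32Array(shared.buffer, 0, 1);
Atomics.add(counter, 0, 1); // safe even if workers touch the same slot
console.log(Atomics.load(counter, 0)); // 1
```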
Next we needed SIMD.
This is another feature of modern architectures: their cores have something called SIMD, single instruction, multiple data, which again allows us to parallelize certain operations. It's really useful for image or video transformation, where you're applying the same operation to lots of numbers and you wanna do it many times over.
But these two things combined, threads and SIMD, will essentially allow us to go to speeds that were previously impossible within a browser, and that really excites me.
And that's gonna open up a world of possibilities that I'll talk about in just a second.
And lastly, we need better debugging support. Again, Chrome, last week, announced much richer debugging support in the browser for WebAssembly, so you can get stack traces out of the Rust code that you've compiled to WebAssembly.
And to me, when we can do this with frameworks we're gonna have a dramatic increase in performance for the Web globally without any of us really even noticing.
And that, for me, is why I'm so, so excited about the possibilities that WebAssembly in the browser holds.
Let's just recap before Tim kicks me off the stage. Here are some takeaways from all of this. Hopefully you've learnt from this talk that WebAssembly isn't gonna kill JS.
It's just here to augment certain bits of the Web platform. It allows us to extend the Web platform using solutions that have been battle hardened across multiple languages.
We're no longer gonna be stuck with that single ecosystem, which is amazing.
And it's not just for the browser.
We can run it on the edge, we can run it on the server. It is the new portability.
And it allows us to move this logic as close as possible to where the user needs it.
Profile your applications.
Identify those hot paths where you need that optimization, and consider porting that bit of your application to WebAssembly, or finding an off-the-shelf tool that's already solved that problem for you in a different language.
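A minimal sketch of that workflow, using the standard high-resolution performance API; `hotPath` here is a hypothetical stand-in for whatever function your profiler actually flags:

```javascript
// Hypothetical stand-in for whatever your profiler flags as hot.
function hotPath(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.sqrt(i);
  return sum;
}

// Time it with the standard high-resolution clock (performance is
// global in browsers and in modern Node).
const t0 = performance.now();
const result = hotPath(1_000_000);
const elapsed = performance.now() - t0;

console.log(`hotPath took ${elapsed.toFixed(2)} ms`);
// If a function like this dominates your profile, it's a candidate
// for porting to WebAssembly, or for replacing with an existing
// library that already solved the problem in another language.
```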
And please, just share your findings.
Tell us what you're doing.
In fact, can I just get a show of hands: who here is actually using WebAssembly today in the browser? Wow, okay.
You people I really wanna talk to.
The rest of you, please go and try it out.
WebAssembly is the new standard for portability for the Web and beyond.
And I think the future is really bright, and I'm really excited by it.
(audience applauding) My slides, a transcript of this talk, further reading, everything I've referenced, is up there if Notist has done its thing and magically loaded it.
Go and check it out.
- [Tim] We do have time for a couple questions, so if you're cool...
That was clear. - (chuckles) No.
(both chuckling) - No, but I think, like...
You mentioned hot paths and stuff like that. Is it basically anything that's not DOM manipulation? - Well, yeah, no.
So I don't think that you should be rewriting all of your applications in another language. But I can't stress this enough: profile your applications and find those hot paths.
And then ask: is there another tool I could do this more efficiently in? Could it benefit from parallelization, so could I run it with multiple threads, or SIMD? Yes, we can do that with web workers today, but not enough people are using...
Who here is using web workers in their application today? Cool, so like, not enough.
And again, this is something...
In fact, the talk that I was going to give instead of this was also a talk about how we can take advantage of web workers.
The more you push off the main thread, the better. I wanna call out, again, to Das Surma.
He did an amazing talk last week at Chrome Developer Summit all about moving logic off the main thread, and he goes into further depth about these kind of things of what should I parallelize.
Some things, Rust or other languages are better for. - [Tim] Does Wasm run off the main thread, or does it run on the main thread in browsers today? How does that, the execution actual bit...
- [Patrick] Either or, depending on how you instantiate it. - Okay.
And then I...
(laughing) I compile it into WebAssembly.
I was actually talking to Stoyan at Facebook last night about this, that he's trying...
I don't even know if I'm allowed to talk about this. Am I allowed to talk about your tool? - Yeah. - Yes?
And we need more tooling like that.
How do we monitor the performance of the actual WebAssembly script itself? Is there any tooling that's being built up around that? Is there anything within the standard that exposes a performance API through the WebAssembly stuff? - No, and that, I didn't go into enough.
That's that debugging point I mentioned at the end, one of the proposals that are coming.
It's really immature, and we need to create more tooling around it. Firefox and Chrome are doing some amazing work to expose that.
There are things like source maps that are embedded inside WebAssembly via this thing called DWARF, and we're gonna be able to visualize that if you've compiled the module with that type of data. But no, we need to improve the...
- Like you said, early days still, so that's all... - But please, go and use it, please.
- Thank you, that was fantastic.
- Thanks, Tim.