Global Scope is streaming to you from a place now called Sydney.

And I would like to begin by acknowledging the Gadigal people of the Eora nation, the traditional custodians of the land from which we are streaming.

In the spirit of reconciliation we acknowledge the traditional custodians of country throughout Australia and their connections to land, sea, and community.

We pay our respect to their Elders past and present and extend that respect to all Torres Strait Islander and Aboriginal peoples and all First Nations peoples today.

Welcome back to day two of Global Scope.

We've got another half a dozen presentations on the most popular, or at least most widely used, programming language in the world, JavaScript, for you today.

Hi, I'm Rosemary from Web Directions.

At Web Directions we've long worked hard to create inclusive, welcoming, respectful environments for everyone involved: speakers, attendees, and partners.

That's why we adopted a code of conduct many years ago, and why we ask that everyone involved with the event adheres to it: ourselves, our speakers, our partners, and our attendees.

If you have any concerns about behavior during the event, or any questions at all, don't hesitate to contact me at the front desk, available from the sidebar.

I'll be there the entire time, and we'll get back to you straight away.

The functional programming community treats function composition with a great deal of respect, but what's the big deal?

James Sinclair is a senior developer with Atlassian and today sheds some light on precisely that.

All right.

Hi there.

My name's James.

I work for a little company called Atlassian and in my spare time, I like to write about JavaScript, but today I'm here to talk to you about function composition.

And that might not sound like the most exciting subject in the world.

I mean, function composition is not going to be the next viral TikTok sensation.

But if you talk to functional programmers, well, then it's a different story.

The way they go on, you'd be forgiven for thinking that composition was some kind of divine truth etched on stone tablets, by the very hand of God.

Or perhaps some sort of AI powered tool that writes code for you.

But it's not that.

Which raises the question.

What is it, then?

What's this wonderful thing called "composition".

And what's so special about it?

Well, to answer that we are gonna have to do something that's a little bit scary.

So brace yourselves, we're going to dive into the dark arts and do some mathematics.

So here's the equation for composition.

And this one little equation explains everything there is to know.

It reads h of X equals f of g of x.

And that one tiny equation captures all there is to know about function composition.

If you get this, everything else is just implications.

So I'll break it down piece by piece.

On the right hand side of that equation, we have two functions, f and g. We're talking about mathematical functions right now, but for JavaScript functions, we can kind of, sort of think of them as equivalent.

Now when we compose two functions together, we get a new function and we're calling this one h, and if we give h some value x, then we get back a new value.

Now, to work out what that value is, we first pass our value x into g, because we work from the inside out; then we take the value returned from that and pass it into f.

And that gives us our final result: h of x.

And I get it.

All of this is kind of abstract.

But the key point here is that we can create a new function h by combining two other functions: f and g.

And once we have that, we can treat it like any other function.

It's no different from f or g; it doesn't need any special treatment.

It's a plain function, just like any other function.

We don't need any special tools to do this.

It's kind of built into how functions work.

And mathematicians, they have a special notation for describing composition.

They use this thing called the dot operator, which looks a little bit like a bullet.

So we'd write the composition of f and g as f dot g.

And that's the mathematical explanation.

And by now you're probably thinking, well, well done, James, you've done a great job of demonstrating that composition is both boring and kind of obvious.

I mean, most of you will probably be intuitively familiar with how functions work.

None of this is particularly new or mind blowing.

And what does any of this have to do with JavaScript?

Well, like mathematics, composition is built into JavaScript functions too.

So for example, if we have two functions, f and g and they both return values, then we can compose them together using a similar syntax.

So here we have const h equals a function that takes some value x and returns the composition of f and g.

Now this here, what I've put on the screen, is not valid JavaScript, though I wish it was.

JavaScript won't let us define a bullet operator for composition, but what we can do is make a function that does roughly the same thing as what that bullet would do.

You see, one of the neat things about JavaScript is that it lets us pass functions around as values.

So we can create a function that does composition for us.

We'll call it c2 short for compose two functions.
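The slide itself isn't in the transcript, but a minimal sketch of c2, assuming the arrow-function style used throughout the talk, might look like this:

```javascript
// c2: compose two functions. Given f and g, return a new function
// that runs g first, then passes the result to f (inside out).
const c2 = (f, g) => x => f(g(x));

// A quick numeric example to show the order of operations.
const double = x => x * 2;
const increment = x => x + 1;
const doubleThenIncrement = c2(increment, double); // double runs first
```

Calling `doubleThenIncrement(3)` runs `double` first, then `increment`, exactly like writing `increment(double(3))` by hand.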

And at first this might seem like a pointless kind of function and you'd be right.

It is a little bit pointless, but bear with me.

Let's see if we can do something with this composition function.

So.

What we're gonna do is do something real.

Let's imagine for a moment that we are writing some kind of comment system. Like, maybe, say, hypothetically, just at random, you work for a company that makes some kind of project management software, and maybe people can create tasks to track what they're doing, and they can leave comments on those tasks. Just picking something totally out of the blue.

So with our comment system, what we wanna do is we wanna allow people to include images and links in their comments, but we're also concerned about security.

So we don't wanna allow people to just write any old HTML.

So to make this happen, what we're gonna do is we're gonna support a cut down version of markdown.

So in our cut down version of markdown, we allow people to write an image that looks like this.

So we've got an exclamation point followed by some square brackets, then the alt text, and then some round brackets to describe the path to the image.

And a link looks fairly similar, but there's no exclamation point.

So we've got the square brackets followed by the link text and then round brackets for the URL.

And we'll write a couple of functions that convert this kind of syntax into HTML.

So.

We've got two functions here, one for images, one for links.

And each one looks through the whole comment, finds the syntax we've specified and replaces each occurrence with the relevant HTML.

Now I know regular expressions look a bit scary and these ones are particularly indecipherable because of the fancy font I've chosen, but don't worry too much about the implementation details.

See, regular expressions aren't the point; they're just for demonstration. The only thing you really need to understand is that they both take in a string and give us a string back with some adjustments made.

That's, that's all.
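The regular expressions on the slide aren't reproduced in the transcript, so these are stand-in implementations; the function names match the talk, but the exact patterns are my assumption:

```javascript
// imagify: ![alt text](path) becomes an <img> tag.
const imagify = str =>
  str.replace(/!\[([^\]]*)\]\(([^)\s]*)\)/g, '<img src="$2" alt="$1"/>');

// linkify: [link text](url) becomes an <a> tag. Same shape, no exclamation point.
const linkify = str =>
  str.replace(/\[([^\]]*)\]\(([^)\s]*)\)/g, '<a href="$2">$1</a>');
```

Both take in a string and give back a string with some adjustments made, which is the only detail that matters for what follows.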

So getting back into our c2 function, we can combine these two functions like this.

So we pass linkify and imagify into c2 and it smushes them together so that we get one new function, which we've called linkifyAndImagify.

Now, if you were to write that same thing out by hand, it would look like this, and truth be told our c2 function isn't actually saving us many characters here.

And it gets worse if we try to add in more functions.

Like suppose we wanted to add in support for emphasizing with underscores.

We could write another function like this one, it's nothing fancy, just another regular expression replacement.

It looks for an underscore, followed by a bunch of stuff that's not an underscore followed by an actual underscore.

And then it wraps whatever's between the underscores in em tags.
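A guessed implementation of that emphasize function, matching the description (an underscore, a bunch of non-underscores, then another underscore):

```javascript
// emphasize: _text_ becomes <em>text</em>.
const emphasize = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');
```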

Now we could take that emphasize function and we could add it in with another c2.

But what we have to do is create a new function using c2 on the inner part.

And that gives us a new function back, which we pass straight into another c2, which we combine with linkify.

And that gives us our single processComment function.

Lots of smooshing going on.

Now, if we compare that with writing the composition out by hand, the hand compose version is still longer, but not by much.
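Sketched out with stand-in implementations (the regexes are assumptions), the nested-c2 version and the hand-written version side by side might look like this:

```javascript
const c2 = (f, g) => x => f(g(x));

// Stand-in implementations of the three processing functions.
const imagify = str =>
  str.replace(/!\[([^\]]*)\]\(([^)\s]*)\)/g, '<img src="$2" alt="$1"/>');
const linkify = str =>
  str.replace(/\[([^\]]*)\]\(([^)\s]*)\)/g, '<a href="$2">$1</a>');
const emphasize = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');

// Lots of smooshing: the inner c2 result goes straight into another c2.
const processComment = c2(linkify, c2(imagify, emphasize));

// The same composition written out by hand: still longer, but not by much.
const processCommentByHand = comment => linkify(imagify(emphasize(comment)));
```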

And going back to c2 for a moment, you can see how, if we were to keep adding functions, we're gonna have a lot of c2s and a lot of brackets all over the place.

So, what would be nice is if there was some way we could just chain a bunch of functions together without all those annoying brackets and commas everywhere.

Kind of like that imaginary bullet syntax I showed earlier: if we wanted to compose more than two functions, we'd just add them to the end with another bullet.

And then it would work just a little bit like doing addition or multiplication.

But alas JavaScript doesn't let us.

There is however, a TC39 proposal that might let us do something similar in future.

It's called the operator overloading proposal.

And it's very interesting.

I suggest you check it out.

In the meantime, however, we don't have operator overloading, so we're gonna have to make do with what we have, and what we can do is multivariate composition.

We can create a function that does composition for us.

Now "multivariate" is just a fancy word that means involving multiple quantities.

So a multivariate function is a function that takes a varying number of parameters.

And in JavaScript, we can create a multivariate function using rest parameter syntax.

So have a look at this function definition here.

So we're creating a function called compose and instead of a list of parameters, we've written three dots followed by funcs.

Now, if you're not familiar with the way rest parameters work, all that's going on is those three dots tell the interpreter, just shove however many arguments we get into an array and call that array 'funcs'.

So JavaScript gives us the ability to create a function that takes different numbers of parameters and that's useful.

But before we go any further, let's stop and have a think about what we're doing.

We wanna take a list of functions and smush them together so that we get one new function and that new function is gonna take a single value as its input.

And it's gonna pass it to the first function in our list.

Oh, actually the last. Then it's gonna take the result of that, pass it to the next function, take the result of that, pass it to the next function, and so on.

And in other words, we're gonna loop through our list of functions and we're gonna carry a little bit of state with us as we go.

Which sounds a lot like a 'reduce' operation to me.

So if we put that all together, we can get something like this.

We've got this new function called 'compose' and it's using our fancy rest parameter syntax, and it returns a new function, which I've creatively called 'newFunctionToReturn'.

And that function, it takes a single parameter x0, and we call reduceRight on funcs to loop through each of those functions.

Passing it x0 as the initial value.

Now you might be wondering why we're using reduceRight rather than reduce. Well, that's because composition works from the inside out.

We wanna call those innermost functions first and we'll come back to that in a moment.

But for now, the main thing to notice is that we've created a function that returns another function.

And with a bit of tidying, we can get rid of that funny variable name and we reduce it down to this.
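Reconstructed from the description, the tidied compose function would be something like:

```javascript
// compose: gather any number of functions with rest parameters, then
// return a new function that reduceRights over them, so the last-listed
// (innermost) function runs first.
const compose = (...funcs) => x0 =>
  funcs.reduceRight((acc, fn) => fn(acc), x0);
```

Note that with no functions at all, `compose()` just hands back its input unchanged, which is a nice sanity check that the reduceRight is wired up correctly.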

It's doing the same thing, just without that newFunctionToReturn variable.

So let's try out this shiny new function, but first let's increase the difficulty level by adding a new requirement to our comment system.

So what we wanna do is allow level-three headings, and we mark a level-three heading by putting three hashes at the start of a line.

So here's a function that'll do that for us.

And once again, it's not super important how it works.

It's a regular expression that looks for three hashes at the start of a line followed by some whitespace.

If it finds that it grabs the rest of the line and makes that a heading.
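headalize might look something like this (the exact regex is an assumption):

```javascript
// headalize: '### Heading text' at the start of a line becomes an <h3>.
const headalize = str =>
  str.replace(/^###\s+(.*)$/gm, '<h3>$1</h3>');
```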

So we've got four functions to run in our composition.

And we use our shiny new 'compose' function like this.

And to me, this looks rather neat.

We've got a new function 'processComment' that's composed of four small single responsibility functions joined together.
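With stand-in implementations for the four functions (the regexes are my assumptions), that composition reads like this, remembering that the last-listed function runs first:

```javascript
const compose = (...funcs) => x0 =>
  funcs.reduceRight((acc, fn) => fn(acc), x0);

// Stand-in single-responsibility functions.
const headalize = str => str.replace(/^###\s+(.*)$/gm, '<h3>$1</h3>');
const emphasize = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');
const imagify = str =>
  str.replace(/!\[([^\]]*)\]\(([^)\s]*)\)/g, '<img src="$2" alt="$1"/>');
const linkify = str =>
  str.replace(/\[([^\]]*)\]\(([^)\s]*)\)/g, '<a href="$2">$1</a>');

// headalize is written last, but it's the first function that runs.
const processComment = compose(
  linkify,
  imagify,
  emphasize,
  headalize
);
```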

But there is a small difficulty with this function.

And that is, we end up writing our functions in the reverse order of how they execute.

That is we've written 'headalize' last, but it's actually the first function that runs.

And if we map it out, the data flows through this processComment function like this.

So we pass the first value into headalize, then the result of that into emphasize, and then into imagify, and so on.

So the data's flowing from right to left or bottom to top as the case may be.

And as I've been saying, it works that way, because if we wrote that composition out by hand, it would be the innermost function headalize that we want to call first.

And that's why we're using reduceRight.

We wanna preserve that order.

But if we're gonna write out our compose functions in a vertical list, like this one, there's no reason why we can't create a composition function that composes in the opposite direction.

That way we could have the data flowing from left to right or top to bottom as the case may be.

And it would flow more naturally.

And so we'll call that function "flow".

So to create flow, all we do is we replace that reduceRight with reduce, and it looks like this.

As you can see, the only difference is that reduce method call.
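Sketched the same way, flow is just compose with reduce in place of reduceRight:

```javascript
// flow: same shape as compose, but reduce means the first-listed
// function runs first, so the data flows top to bottom.
const flow = (...funcs) => x0 =>
  funcs.reduce((acc, fn) => fn(acc), x0);
```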

Now, to show it off in action.

What we're gonna do is we're gonna add yet another requirement to our comment processing system.

We wanna allow for text between back ticks to be formatted as code.

Now once again, we've got a function that does that for us, again with regular expressions (don't try this at home, kids), but all our regular expression is doing is looking for a back tick, followed by a bunch of characters that aren't back ticks, followed by another back tick.

Then it grabs whatever is between those two back ticks and wraps them in code tags.

It's a lot like the emphasize function from earlier.
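An assumed implementation of codify, following that description:

```javascript
// codify: `text` between back ticks becomes <code>text</code>.
const codify = str => str.replace(/`([^`]*)`/g, '<code>$1</code>');
```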

Now, if we try that out with flow, our new process comment function looks like this.

This time, the data flows from top to bottom or left to right, depending on how you write it.

We start with headalize, then move to emphasize, then imagify, and so on.

So things still happen in the same order.

The only difference is the way we write them.
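Put together with stand-ins for the five functions (the regexes are assumptions), the flow version reads top to bottom in execution order:

```javascript
const flow = (...funcs) => x0 => funcs.reduce((acc, fn) => fn(acc), x0);

// Stand-in implementations.
const headalize = str => str.replace(/^###\s+(.*)$/gm, '<h3>$1</h3>');
const emphasize = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');
const imagify = str =>
  str.replace(/!\[([^\]]*)\]\(([^)\s]*)\)/g, '<img src="$2" alt="$1"/>');
const linkify = str =>
  str.replace(/\[([^\]]*)\]\(([^)\s]*)\)/g, '<a href="$2">$1</a>');
const codify = str => str.replace(/`([^`]*)`/g, '<code>$1</code>');

// headalize runs first and codify last, in exactly the order listed.
const processComment = flow(
  headalize,
  emphasize,
  imagify,
  linkify,
  codify
);
```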

But if we were to write this out by hand, this is starting to look a lot cleaner and simpler than doing all that composition with all those brackets.

So indeed I think flow is rather neat.

You can see how concise it is.

We've composed a function from functions, and there's not a fat arrow in sight.

We're treating functions as values, which is something a lot of people find hard to get used to, but it opens up a lot of possibilities.

And we're gonna talk about that more in a moment, but for now you can see that some people might find this rather pleasant to use.

And because it's pleasant to use we might find ourselves using it to build functions all over the place.

But if we only use some of these functions once, we might get a little bit lazy and start to invoke those functions immediately.

So for example, we might end up writing something like this.

If we just want a single processed comment, we build the function and then we call it straight away with some value.

And there's nothing terribly wrong with this, but it does look a little awkward.

The largest problem though, is that seeing immediately invoked functions makes some JavaScript developers a little bit nervous.

You see, we've already taken away those comforting fat arrow functions, and then taken it a step further by having functions that return functions, which we call immediately.

It all just gets a bit too much.

So maybe there's something we could do to help these folks out.

We do that with yet another composition function and it's called 'pipe'.

Now pipe works a little bit like flow, but we treat our rest parameters just slightly differently.

So here's how pipe looks in code.

I'm gonna flip back to flow for a moment so you can see the difference: here's flow, and here's pipe, and back to flow again.

Notice how flow always returns a function.

You see, that's why we've got two fat arrows there, but in pipe there's just one fat arrow and pipe takes x0 as its first argument.

It's like we've shifted it, and because of that, we don't have to wait for the returned function to be called with an initial value.

Instead, pipe gets started straight away.

It passes x0 through the list of functions.
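Side by side, assuming the same reduce-based shape as before:

```javascript
// flow: two fat arrows, so it returns a reusable function.
const flow = (...funcs) => x0 => funcs.reduce((acc, fn) => fn(acc), x0);

// pipe: one fat arrow. x0 is shifted into the parameter list, so pipe
// starts passing the value through the functions straight away.
const pipe = (x0, ...funcs) => funcs.reduce((acc, fn) => fn(acc), x0);
```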

And this means we don't have a function we can reuse, but we don't always need one of those.

So now to illustrate how pipe works, let's go onto the next step.

So we've got a pretty good function for processing an individual comment, but what if we want to process lots of comments?

Like suppose we have a big list of comment strings sitting inside an array.

Let's put together some code with pipe that will help process them.

But first, just to set things up, we're gonna introduce a few utility functions.

And the first lot of utility functions are for processing arrays.

They're fairly simple.

All they do is call array methods.

But notice that they all have two fat arrows.

That means that when we call them with the first parameter, we get another function back and that's important.

But they're all reasonably straightforward.

The map utility calls array map, the filter utility calls array filter, and so on.

The only one that's really any different is 'take', and that's where we call array slice with a fixed starting point of zero.
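These utilities aren't shown in the transcript; based on the description, they'd be along these lines (note the two fat arrows on each):

```javascript
// Each utility takes its configuration first and returns a function
// that takes the array, so they slot straight into a pipeline.
const map = fn => arr => arr.map(fn);
const filter = pred => arr => arr.filter(pred);
const take = n => arr => arr.slice(0, n); // slice with a fixed start of 0
const join = sep => arr => arr.join(sep);
```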

Now while we're at it, let's introduce a few utility functions for strings too.

We have itemize, which puts a list item wrapper around something.

We've got orderedListify for making an ordered list and chaoticListify for making an unordered list.

And finally, we've got a couple of functions to check that we are not mentioning any super secret company information in a public comment.
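The string helpers would be something like the following; the names come from the talk, but the bodies, and the exact check performed by the secret-information functions, are guesses:

```javascript
const itemize = str => `<li>${str}</li>`;
const orderedListify = str => `<ol>${str}</ol>`;
const chaoticListify = str => `<ul>${str}</ul>`;

// Hypothetical checks that a comment doesn't mention anything it shouldn't.
const mentionsNazi = str => /nazi/i.test(str);
const noNazi = str => !mentionsNazi(str);
```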

Now imagine we have an array of comment strings.

We wanna filter out any unwanted comments, grab the first 10, then we wanna run our processComment function from earlier on each of those 10.

Then we wanna format each comment as a list item and finally join everything together as a single string.

Doing that with pipe looks like this.
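With the utilities stubbed in, and processComment simplified to a single replacement for brevity, the pipe version might read:

```javascript
const pipe = (x0, ...funcs) => funcs.reduce((acc, fn) => fn(acc), x0);
const filter = pred => arr => arr.filter(pred);
const take = n => arr => arr.slice(0, n);
const map = fn => arr => arr.map(fn);
const join = sep => arr => arr.join(sep);
const noNazi = str => !/nazi/i.test(str);
const itemize = str => `<li>${str}</li>`;
// Simplified stand-in for the full processComment composition.
const processComment = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');

const comments = ['_First!_', 'Nice work', 'nazi spam'];

// Filter, take ten, process, itemize, join: one expression.
const commentList = pipe(
  comments,
  filter(noNazi),
  take(10),
  map(processComment),
  map(itemize),
  join('\n')
);
```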

Now, if we squint a little bit, this sort of, kind of looks like chained method calls like these here.

And in fact, the way I formatted pipe, they look eerily similar.

Right?

Have a look again.

Here's the pipe version.

And here's the array method version.

Not much difference.
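For comparison, the array-method version of the same processing, with the same stand-in helpers:

```javascript
const noNazi = str => !/nazi/i.test(str);
const itemize = str => `<li>${str}</li>`;
// Simplified stand-in for the full processComment composition.
const processComment = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');

const comments = ['_First!_', 'Nice work', 'nazi spam'];

// The same steps as chained method calls.
const commentList = comments
  .filter(noNazi)
  .slice(0, 10)
  .map(processComment)
  .map(itemize)
  .join('\n');
```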

And at this point, some of you clever people are probably wanting to scream at me, because I said earlier, wouldn't it be really nice if we could have some kind of operator that would let us do composition?

And there is in fact, a TC39 proposal that looks kind of similar to our pipe function and it's even called the 'pipeline' operator.

So we could rewrite our code from above using the pipeline operator, like so.

There's a lot I could say about this, but for now, I'm just gonna point out that the pipeline syntax requires that extra hash symbol, which is a little ugly, but not the worst thing in the world.

And sadly it hasn't made it into browsers yet.

So I'm gonna focus on what we can do right now, which is use our pipe function.

And we were comparing it with the array method version.

And after comparing those two, you might be thinking to yourself, well, why bother with pipe?

Why not just use those array methods?

And that's a fair question.

Cause after all, with the array method chaining we don't need to add all these utility functions.

There's no extra overhead of trying to figure out what the pipe function actually does.

The array methods are familiar and they're built in.

But there's something pipe can do that method chaining can't.

Pipe can keep going even when there's no methods to call.

For example, we can add our chaoticListify function from earlier to our pipeline.

If we wanted to, we could keep on adding functions to our pipeline.

It's possible to build up entire applications this way.

Now, compose, flow, and pipe.

They're neat.

They make for some concise code, but someone still might be thinking "well, that's nice, but so what, like, so you can write stuff in a pipeline, big deal.

What difference does that make to the code I'm writing day to day?" And that's a reasonable question.

After all, we don't need pipe, we can achieve things in other ways.

So for example, we can write equivalent code using variable assignments.

Here's one that does the same comment processing job, no trouble, without any of that pipe business.

And for most people, this version is going to be familiar and easy to read.
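That variable-assignment version, again with stand-in helpers, would be a series of six statements along these lines (whether chaoticListify was the sixth step is my guess):

```javascript
const noNazi = str => !/nazi/i.test(str);
const itemize = str => `<li>${str}</li>`;
const chaoticListify = str => `<ul>${str}</ul>`;
// Simplified stand-in for the full processComment composition.
const processComment = str => str.replace(/_([^_]*)_/g, '<em>$1</em>');

const comments = ['_First!_', 'Nice work', 'nazi spam'];

// Six statements, six semicolons, a named variable at every step.
const acceptableComments = comments.filter(noNazi);
const topTen = acceptableComments.slice(0, 10);
const processedComments = topTen.map(processComment);
const itemizedComments = processedComments.map(itemize);
const joinedList = itemizedComments.join('\n');
const commentList = chaoticListify(joinedList);
```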

So by that measure, the pipeline version is objectively worse.

So why would we bother with pipe?

Now to answer that, I want us to compare this version here with pipe, and I want you to notice, first of all, the number of semicolons.

Second of all, we didn't need any array utility functions here.

Now, if we look at those semicolons, we can see that there's six of them in this version.

And the pipe version has just one.

So what does that mean?

Well, it means that the variable assignment version is made up of six individual statements, but the pipe version is a little bit different.

It has one semicolon because there's only one statement.

It's a variable assignment and this is a key difference.

Now it might seem like I'm splitting hairs, after all the compose version still has a bunch of commas, keeping things separate, but there's a subtle, yet important difference here.

In the variable assignment version, we created six statements.

In the pipe version, we composed the entire thing as an expression, which we happen to assign to a variable.

And again, you might ask "well, so what?

Who cares?" And in one sense there's no difference.

The two pieces of code still do the same thing.

They produce the same results.

The performance is roughly the same, but something that does change is the meaning of what we're doing.

Now to make this clearer.

What I'm gonna do is I'm gonna convert this code back to producing a function rather than a value.

So if we switch back to using flow, we now have a function called 'processComments'.

And if we wanna update the variable assignment version, it's a function as well.

We just wrap it in a fat arrow and some curly braces.

And it's now a function.

But what changes now that we've done this? The difference between the two is what that equals sign means.

So let's have a look.

In the variable assignment version, the equals sign says, processComments means run this set of steps in order and at each step of the way store the result in a named variable.

In the flow version though, the equals sign says that processComments is the composition of this list of functions.

So we're defining 'processComments' as a relationship between functions, not a series of steps.

And that difference there, that's, that's a big deal.

Now it doesn't change the instructions we send to the CPU, much.

These two pieces of code still do essentially the same thing.

But this idea, this idea of writing code as a set of relationships between expressions, that is a big deal, because it's not about what changes in the code, it's about what changes in us.

Writing code this way changes the way we think about the code.

Composition encourages us to think about code as relationships between expressions, and this in turn encourages focus on our desired outcome, rather than thinking about each detailed step.

And as a result, our code becomes more declarative.

But based on what we've seen so far, that might not be so obvious.

So we've written the same piece of code two ways.

You might be thinking, well, potato, potahto, but I can prove that the flow version is more declarative, because we can make it more efficient without changing a single character of that function.

What we're gonna do instead is we're gonna change some of those array helper functions.

So first we'll redefine map to use generators.

And if you're not too familiar with generators, don't worry too much about how it works.

Just notice that it's still a simple single line function and it's just yielding instead of returning a value.

And we'll redefine 'filter' very much the same way.

Again, we're using generators, it's a little longer, but not much longer.

And we'll give 'take' the generator treatment.

Now this one does get a little bit more complex, but all it's doing is keeping a count, and it stops yielding values once it reaches the specified limit.

And finally we redefine 'join', which again is a single line function.
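Reconstructed from the description, the generator-based helpers might look like this; the API (configuration first, then the collection) stays the same, which is the whole point:

```javascript
// map: yield each transformed value instead of building an array.
const map = fn => function* (iterable) {
  for (const x of iterable) yield fn(x);
};

// filter: yield only the values that pass the predicate.
const filter = pred => function* (iterable) {
  for (const x of iterable) {
    if (pred(x)) yield x;
  }
};

// take: keep a count and stop yielding once we reach the limit.
const take = n => function* (iterable) {
  let count = 0;
  for (const x of iterable) {
    if (count >= n) return;
    count += 1;
    yield x;
  }
};

// join: drain whatever iterable arrives and join it into a string.
const join = sep => iterable => [...iterable].join(sep);
```

Because each value is pulled through the whole chain one at a time, nothing past the first ten comments ever gets processed.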

Now we don't change anything in our flow based processComments function, but we've changed the way this function works.

So suppose we had 1000 comments to process.

In our original version we would've had to run that `noNazi` function on every single comment.

But with this generator version, if there's no comments that mention Nazis, it will only run on the first 10 comments.

So what's going on is that by using generators, we can now run each comment through all the functions, without creating any interstitial arrays.

And we stop once we've processed 10 of them.

Which means with generators, we're now using less memory, spending less time allocating and deallocating, which in theory will make the whole thing more efficient.

But the point isn't the performance gain.

So, no, I haven't benchmarked it to see how fast or slow it is, 'cause I don't care.

The point here is that we can do this and it might be significantly faster.

Now.

Sure.

I'll admit there's no reason we couldn't write a version of this function that uses generators with variable assignments, like the one we did before, and get the same results.

That's true.

The point is we are less likely to do so because writing code as a series of statements doesn't really encourage that way of thinking.

Remember that I asked you to take note of where we used utility functions in this variable assignment version versus the composition version?

We can switch to using generators without changing the code in the pipeline version because using composition encourages us to use a bunch of utility functions.

And those utility functions allowed us to switch out the implementation without changing the API.

We defined our pipeline as a set of relationships between functions and to do that, we needed those reusable utility functions.

In domain-driven design terms, those functions created a natural anti-corruption layer.

This let us change the implementation details without altering the high level intent.

We've separated what we want to do from how the computer does it.

And this, this is why function composition is kind of a big deal.

At its core, function composition isn't all that complicated.

Combining functions with composition is straightforward and fairly easy to understand.

We can take that core idea and we can extend it out, such that we can compose whole lists of functions all at once.

That's how we get 'compose', 'flow' and 'pipe'.

And we can use these functions to create concise, elegant code.

The real beauty of composition isn't how it changes the code.

It's about how it changes us.

It changes how we think about the code, allowing us to think about code as a set of relationships between expressions, rather than a set of steps that must be executed in a given order.

And that opens up a whole new way of organizing and optimizing our code.

With that, I'll stop rabbiting on and let you get on with the rest of the conference.

Developers spend up to half their time debugging software.

JavaScript frameworks, where many of us live these days, add to the debugging complexity, but many developers don't have an effective process or an understanding of debugging tools.

Today, Cecelia Martinez, community lead at Replay, outlines how to approach debugging apps built with JavaScript frameworks like React, Angular and Vue.

Hello, I'm here today to talk to you about debugging applications built with JavaScript frameworks, and I'm here to talk to you about debugging because like probably most of you, I wasn't always very good at it.

You know, this was very much me.

Most people are not taught how to debug.

Nobody ever sat me down and said, oh, this is how you use debugging tools, or this is how you should approach a problem whenever you encounter an issue with your software.

And so I ended up doing what a lot of developers do and used what I like to call the debugging dartboard.

Right?

So you may kind of throw spaghetti at the wall.

See what sticks, do a lot of trial and error where maybe you change one line of code at a time, or you just kind of comment things out in order to see what happens.

And then eventually hope that you stumble upon an actual solution.

And this isn't great because developers actually spend up to half their time debugging and maintaining software.

We talk about how developers are more like mechanics just kind of trying to keep the engine running versus architects who can build out new features and functionality for our users.

So we spend all this time debugging, but we aren't taught how to do it, especially not how to do it effectively.

And the issue here is that debugging is complex because our applications are complex.

So if you think about your application and you think about clicking a button.

A lot of things happen whenever your user clicks that button.

So you may have an event listener, then an event handler, you may set a loading state in your store, then you have an API request that can go out to your backend, which has a request handler, which may then do some business logic, update the database to say, you know, whether or not an item is now sold.

And then you need to send the response back up to the front end, which then handles the response.

Maybe it updates your front end store again, sets that loading state back to false, and then finally updates the DOM with whatever new information was received.

So.

There's a lot of things that happen from a single button click.

So if your user clicks that button and something goes wrong, or it doesn't work the way that it's expected, then there's all these potential steps that could have the problem.

You have to investigate potentially all of these steps in order to debug.

So that's an application.

Now frameworks add even more complexity and whether you're building with React, Angular, Vue or something else, you know, frameworks make our life a lot easier because they give us these tools to develop applications, but they also do add complexity, which can make it more difficult to debug.

So again, if you think about all of those steps that your application goes through, in reality, it looks more like this because you have things like event emitters, maybe you're dispatching actions to a front end store.

There's life cycle hooks that are triggering in between.

And all of this is, you know, you're writing this code to interact with the framework, and again, these are all steps at which something could go wrong that you then need to debug.

So the good news is that we can fight this complexity by identifying patterns within frameworks.

We already do this with JavaScript.

There are patterns within JavaScript that we learn and then we're able to use when we're debugging.

So let's take a look at a pattern that most of us know.

So: TypeError: "x" is not a function.

This is an error that most of you have probably seen quite a few times.

I know I have. You know, I write buggy code all the time, like most software developers.

And you may already be thinking of some of the reasons that this error could occur.

You know, things like, for example, a typo in the function name, X is not a function because it's spelled wrong.

It could be the wrong prototype.

Maybe you're calling a function on a string that really is a prototype function for an array, for example.

It could be that the variable that you're calling the function on is the wrong type.

So say you do mean to call that string method, but for some reason, the variable that you're calling it on is actually an array.

So that could be another issue.

And then also it could be a missed import.

You know, maybe X is not a function because you never actually imported X before calling it in your file.
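A few of those causes can be sketched directly in code; the object and variable names below are made up purely for illustration:

```typescript
// Hypothetical object: the method is `fullName`, so the misspelled
// `fullname` is undefined rather than a function.
const user: any = { fullName: () => "Ada Lovelace" };

let typoResult = "no error";
try {
  user.fullname(); // typo in the function name
} catch (err) {
  if (err instanceof TypeError) typoResult = "TypeError";
}

// Wrong prototype: `map` lives on Array.prototype, not String.prototype.
let protoResult = "no error";
try {
  ("hello" as any).map((c: string) => c.toUpperCase());
} catch (err) {
  if (err instanceof TypeError) protoResult = "TypeError";
}

// Wrong type: we meant to call a string method, but the variable is an array.
let typeResult = "no error";
try {
  (["a", "b"] as any).toUpperCase();
} catch (err) {
  if (err instanceof TypeError) typeResult = "TypeError";
}

console.log(typoResult, protoResult, typeResult);
```

The missed-import case behaves slightly differently: an identifier that was never imported at all usually throws a ReferenceError instead, unless it resolves to an undefined property on some object, which brings you back to a TypeError.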

So again, if you were already thinking of some of these potential causes for that error, the reason is that you've been able to identify those patterns through experience with debugging these issues.

So we can do the same thing with frameworks, and we can do that by using tools that are specific to framework debugging, to better understand how our application interacts with our framework.

So let's take a look at some of those tools.

First is framework dev tools.

And so again, whether you're using React, Angular or Vue, there are dev tools for you.

All three of the major frameworks come with dev tools.

There is also Redux dev tools.

So despite the name, Redux dev tools can actually be used for a variety of state management frameworks, including Vuex, NgRx, and MobX.

And so if you are using that same pattern where you're dispatching actions then you can leverage Redux dev tools.

All of these dev tools function in essentially similar ways.

They can typically be used in a browser or in your IDE, your text editor, your development environment.

And then they come with kind of two categories of functionality.

The first is inspecting.

This may be called the components tab, or in Vue, the inspector tab, which allows you to take a look at things like elements, your component's state and values, and its props.

The second is evaluating: understanding the execution, what is triggering renders, and performance.

And these tools are specific to the frameworks that they support.

But again, they do have similar patterns.

So let's go ahead and take a look at an example.

This is an application: a Vue application built with the Vue Resource Library.

And this is a video of interacting with the application; we can see on the right-hand side that the inspector tab is open in Vue dev tools.

And again, if you're in React or in Angular this is gonna be called your components tab.

What this does is allow you to inspect the components in your application, to view state and props, or in Vue, the values that are inside your setup function, as you're interacting with your application.

So here, as I'm typing in my form and selecting a type, I can see on the right-hand side that my error message updates and my search input updates.

So I can see that live reaction to the user interface updating via the dev tools on the right hand side.

And this is again the inspector or the component tab, and it allows you to be able to inspect those types of elements within your application.

Then there is the timeline, or again, in React and Angular, it's called the profiler.

This allows you to understand the events in your application and then specifically for React and Angular as well, the change detection events.

So understanding when your components render and why.

It does allow you to record essentially a flow of your application in order to then understand the performance that took place and evaluate that within that recording.

And then you can also see all the individual, again, methods that are being called from the framework within the dev tools.

So here in this video, I am essentially interacting.

I'm uploading a component.

And when I do that, I can see every single one of my component cards re-renders.

And so even though I'm only interacting with one card, all of my components are re-rendering and that could be potentially a bug or a performance issue.

And that's something that I'm able to identify with the profiler or the timeline tab.

So then we also have our regular dev tools, which we're now in: the Chrome dev tools.

I am using a debugger, which I've used to essentially lock in on a point in time in my code execution.

So I'm using a Vue hook called onRenderTriggered in order to stop the application whenever a render is triggered, so that I can then evaluate what is causing that render to occur.

So if there are more renders taking place than I expect, or that should be happening, I can go in using these methods provided by the framework to evaluate what is causing that to occur.

So this is a function that's provided by Vue.

And then there are other functions in React and Angular that could allow you to tap into certain functionality in the framework.

So you can understand how your framework is reacting, [laughs] no pun intended, to user interaction in your application.

While it's extremely helpful to leverage framework dev tools in order to inspect and evaluate your code and also understand how your application is interacting with your framework, there can be some difficulty because it is a little bit like catching lightning in a bottle.

Developer tools are evaluating live.

As we saw every time that I would type in an input, my dev tools would update.

The breakpoint has to catch that moment of time and execution.

So many developers, if they don't know where the problem is occurring, will end up adding breakpoints or random console logs on different lines and trying different things manually, in order to try and catch the bug that's happening.

You know, you have to reproduce it every single time.

You may have to make a change to your code, save it, and reload your server in order to see that new console log, and whether it evaluated the way that you expected.

And all of that becomes a little bit of a tedious debugging process because everything is happening live and you're having to reproduce as you go.

So again, it can kind of start to feel like that dartboard again, where you're adding break points or adding logs across different points in your execution, hoping for the best.

So again, going back to all of the events that take place once we click on our button, we talked about how there are not only the things happening in our own code, but also the framework itself.

There's underlying framework code that is executing between each of these component renders.

You know, like we saw, there are the triggers, the patches, and the watchers that are keeping an eye out for state changes.

There's logic deciding whether or not the component is going to bail out before it actually re-renders based on a state change.

All of this happens in between every single piece of code that you write using the framework. And what happens if you need to debug something in the framework itself, or if you need to really understand why the framework is reacting to your code in a certain way?

Or what if you just want to be able to pause at any point in time without having to reload your server, and reproduce it every single time?

So you could use something like Replay.

So Replay lets you record your application to create a shareable recording with built in debugging tools.

At its core, Replay is a runtime recorder: the Replay browser can be used to record manually, or you can also record automated test execution.

The replay itself isn't just a video.

It actually captures everything that's happening in the browser including your HTML elements, your JavaScript execution, network requests, user events at each and every moment in time so that you can then replay your application execution to debug.

So let's take a look at an example.

Here we have our viewer, which is playing a recording or a replay of the Vue Resource Library that we took a look at.

And I can see all of my click events.

I can see my key presses.

I can navigate between them in order to go through time.

In my dev tools tab, I'm going to get access to all the debugging tools that I'm familiar with from browser dev tools and also from framework dev tools.

So in the developer tools tab, you'll have things that you're familiar with.

Like being able to inspect your sources.

You'll have your console evaluations, and you'll have your React dev tools in order to see your components.

And then you'll also have network requests, call stack, scopes, everything that you kind of would expect from a debugging experience.

In addition, you can also see how your code executed during the recording.

Here, I'm taking a look at Vue source code, and I can see on the left-hand side a function outline that shows me how many times each function was called during the replay.

So this can help me see, for example, if I'm expecting a function to be called and it isn't, so I can dig into why; or if, for example, a render function is being called far more than it should be, that can help point me in the right direction for my debugging.

The real magic of replay comes though in the ability to add print statements or console logs retroactively to the recording.

So instead of having to add console.log here, console.log here, console.log here, and then, again, save and reload your application...

You can add a console log to the recording, and it's logged to the console as if it had already existed at the point in time the recording was created.

So here in this video, I'm adding a console log to a line of code.

And I'm outputting the tag and the props every time an element is created.

I can then navigate through every execution or call of this line and see what the value was at that point in time.

I can fast forward.

I can rewind.

I can see the DOM update in reaction to those navigation points and again, see what the values were at that point in time.

And none of this requires me to reload my application or go back and find that version of the code.

Everything is done within the replay itself.

It lets you essentially go back in time to that moment and add a print statement to see what the value would have been when the application ran.

So again, no matter what tools you use, the point is to use them effectively to identify patterns.

And so I have to say that maybe the real treasure was the bugs we fixed along the way.

Right?

So every time that you debug you not only better understand debugging, but you also better understand your application and its interaction with the framework used to build it.

And this helps us to identify patterns, right?

So patterns like component rendering, you know what triggers a render.

What doesn't trigger a render.

These are things that you can identify by better understanding your application and framework relationship.

Also things like state: some really common patterns involve having multiple sources of truth.

Maybe you have state being managed in a component, but also at a global level, and they may be competing or have different values at different points in time due to something like a race condition or an accidental mutation.
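An accidental mutation like that is easy to sketch; the store shape and names here are hypothetical, not from any particular state library:

```typescript
// Hypothetical global store, for illustration only.
const store = { items: ["keyboard", "mouse"] };

// Accidental mutation: assignment copies the reference, not the data,
// so the "local" component state and the global store share one array.
const componentItems = store.items;
componentItems.push("monitor");
const storeLengthAfterLeak = store.items.length; // the store changed too

// Safer: copy the data so local edits don't leak back into the store.
const safeItems = [...store.items];
safeItems.push("webcam");
const storeLengthAfterCopy = store.items.length; // unchanged by the safe push
```

This is why the component view and the store view of "the same" state can disagree in dev tools: they were never actually two copies.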

There's also error handling.

So, you know, understanding, do you have silent errors in your application?

If you click the button and nothing happens, including no output, that makes it much harder to debug.

Additionally, understanding how you pass errors through your application, potentially through components or also through different function calls.

These are patterns that can cause a lot of bugs when you're using frameworks and being able to understand those and identify them will help you get better at debugging over time.

But essentially the point is to embrace bugs intentionally with a learning mindset. Don't be afraid of debugging, because it is really an opportunity to learn and to identify these patterns.

You know, throw out the dartboard: you're gonna spend half your time debugging anyway.

So you might as well enjoy it and get better at it.

Thank you so much.

And happy debugging.

Thanks James and Cecilia for these great presentations.

We'll take a quick break right now and return to close out Global Scope in about 10 minutes or so.

Welcome back to our final session of Global Scope for 2022.

We'll begin by taking a look at modules in JavaScript, across the front and back end, then we'll wrap up with two forward-looking presentations, thinking about the place of JavaScript in an increasingly multi-core world and how JavaScript is increasingly running everywhere: the browser, the server, embedded devices, and now on the edge.

Luca Casonato is a developer and TC39 delegate who works at Deno.

Today, he'll walk us through writing a module in TypeScript that can be consumed by users of Deno, Node and in browsers.

Hey folks.

I'm Luca Casonato, and I'm gonna be giving a talk today on writing universal libraries for Deno, Node, and the browser.

And what that's gonna entail is that we're gonna write a little library.

We're gonna run it in Deno.

We're gonna run it in Node.

We're gonna see if it runs in the browser.

All from a single code base using Deno's builtin tooling.

Before we get started with that, let me introduce myself.

I'm Luca.

I work at the Deno company as a software engineer.

I primarily work on the open source Deno CLI.

So that is the run time that you can download yourself and execute on your own computer.

I also work on Deno Deploy, which is our edge compute offering.

It's the runtime where you can deploy your Deno code to run anywhere, close to your users, in the cloud.

Additionally I work on Fresh, which is a Web framework for Deno, a full stack framework, which released just a few weeks ago.

And if you wanna learn more about that, you can go to fresh.deno.dev.

In addition to my work at the Deno company, well, actually, as part of my work at the company, I do a bunch of Web standards work at TC39, which is the standards committee that standardizes JavaScript, but also at W3C and WHATWG.

That's for Web specs like fetch and the HTML spec, things like that.

I'm not involved in all of them, obviously, but there are some of them that I work on.

So with that outta the way we're gonna be talking about Deno today.

So it's only fair if I give you a little bit of a recap on what Deno is.

Deno is a modern runtime for JavaScript and TypeScript which has a bunch of key selling points that other server-side runtimes don't.

One of them being that it very closely follows Web standards.

So we support fetch; we've supported fetch for over four years now, longer than pretty much any other server-side runtime.

Import maps are a resolution system that browsers have, and Deno has too.

We support ECMAScript modules as our way of doing modules in JavaScript, right?

We don't support CommonJS.

We use the standards here: Web Workers for threading, Web Streams for streams, promises for async operations, you name it.

We, we try to stick very closely to the Web standards.

We also have a bunch of built in utilities because you know, it's not just a runtime, but it's a fully integrated JavaScript tool chain.

It has a linter built in, a formatter, test framework, benchmarking framework, editor integration, which works with essentially every editor under the sun through the language server protocol.

A documentation generator, Deno compile.

There's a bunch of stuff in there that I didn't name.

But there's a lot; there's a bunch of built-in utilities beyond the runtime.

Going back to the runtime though, Deno's runtime is secure by default, just like the browser, so you can execute random code in it.

And it's very unlikely to break your computer.

And the way we do this is by disallowing any IO permissions like network access or file access by default.

And instead requiring that users explicitly opt into them either through a prompt or through a flag that you specify on startup.

Deno also supports TypeScript out of the box.

We can transparently run your .ts and .tsx files without you having to compile them.

Deno's a single executable that you download.

And it does not contain any DLLs and does not need to dynamically link to OpenSSL.

None of that.

And Deno also has a very extensive standard library, which is a library written in TypeScript that is shipped together with Deno, with a bunch of common functionality that you would usually have to pull in npm modules to use.

But in Deno, you don't: these standard library modules are maintained by the same group of people that build the Deno runtime and are tested by us.

And yeah, they are ensured to work with Deno.

And it encapsulates much of the functionality of the top 100 NPM modules.

So now that that's outta the way, let's quickly talk about what we're actually gonna be doing today.

So we're gonna do a bit of live programming.

Let's hope it all works out.

We're gonna be exploring how easy it is to build stuff with, and for, Deno.

What do I mean by that?

I mean, we're gonna try to just build a library.

Like just build it and see what all the Deno tooling is that we can use and see where we need to maybe reach outside of the Deno project, if we even need to do that.

Our library is gonna be very simple.

It's just gonna create greeting messages.

We'll get to that in a second.

We're gonna do some unit testing using Deno's built-in testing framework.

We're gonna format and lint the code using Deno's built-in formatter and linter.

We're gonna get automatically generated documentation through 'deno doc', our documentation generator, and the doc website at doc.deno.land; we'll get to that later as well.

And then so far, this has all been Deno, but I promise that this is a universal library that can run in both Deno and Node and the browser.

But to be able to run stuff in Node, you probably have to publish things to npm, right?

And Deno does not use npm by default.

So we have to somehow get our code onto npm, and we also probably wanna make sure that the code that we publish to npm actually works in Node.

So we wanna test the code in Node.

We'll get to all of that.

Okay, so let's get started with the library itself.

It's gonna be very simple.

It exports a single greet function, which takes a name and a greeting and returns a sentence which links the name and the greeting together in some cohesive form.

And you can specify any name you want.

And the greeting has to be one of the greetings inside of a TypeScript enum which you can pass as the second argument.

So let's actually get started with programming.

For programming today we're gonna use VSCode 'cause it's just the editor I like using.

You can use any editor you want, you can go to the Deno manual to figure out how to setup Deno for your editor.

I'm gonna set up the project structure for a Deno project.

You'll see that there's actually not much to it.

And we're gonna write the actual library code here, and then we're gonna quickly check that the library's working before we start writing tests.

Okay.

So, first thing: we have opened VSCode, and we're in a new project, a new folder.

The first thing we wanna make sure of is that you have the Deno extension installed.

So you go to extension pane, type in 'deno', install the Deno extension.

And the next thing you wanna do is open the command palette, so you press F1, and then you select 'Deno: Initialize Workspace Configuration'.

You get a few questions you can answer.

Yes.

Here.

We don't need unstable APIs, and then it's gonna generate this .vscode folder with a settings.json file, which just says that this workspace has Deno enabled.

Yeah.

And this tells VSCode that this is now a Deno workspace.

The next thing we can do is set up the actual files we need for the project.

Deno projects do not require a configuration file or manifest file or anything like that.

So the first file we're gonna create in this project is the file that we're gonna write our source code in, which is very nice.

No boilerplate files.

Right.

Why is this called mod.ts?

You might be familiar with Node, where index files are called index.js.

And in Node they have special resolution, so that index files are treated differently from other files.

Deno does not have any of this.

And to make clear that we don't have that, we name our index files mod.ts,

following the Rust convention.

It's not a special name; you can name this whatever you want.

Now that we've created our mod.ts file, let's actually start putting some code into it.

So, first thing we're gonna do is put in the TypeScript enum for the greetings, and these are all the different greetings your user will be able to specify: "hello", "hi" and "good evening".

Those are the different greetings we're gonna support.

And now that we have the greetings, we're gonna add the code to actually take those greetings and turn them into a greeting string with the name in it.

So this greet function takes a name, it takes a greeting.

If you don't specify greeting, it'll default to the "hello" greeting and it'll return a string, which is the greeting, plus the name plus an exclamation mark, all concatenated together.

So this is all, this is the source code we need.

But to be able to later get nice documentation generation, we're going to add a JSDoc comment to our greet function; you'll actually already see this in the editor here.

It'll be used to give better autocompletion, and it'll show up in the automatically generated documentation later.
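Put together, the module might look roughly like this; the exact enum members, casing, and sentence format are my guesses rather than a transcription of the code on screen:

```typescript
/** The set of supported greetings. */
export enum Greeting {
  Hello = "Hello",
  Hi = "Hi",
  GoodEvening = "Good evening",
}

/**
 * Builds a greeting sentence for `name`.
 *
 * @param name who to greet
 * @param greeting one of the supported greetings; defaults to `Greeting.Hello`
 * @returns the greeting, the name, and an exclamation mark concatenated together
 */
export function greet(name: string, greeting: Greeting = Greeting.Hello): string {
  return `${greeting} ${name}!`;
}
```

So `greet("Global Scope")` gives "Hello Global Scope!" and `greet("Global Scope", Greeting.Hi)` gives "Hi Global Scope!".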

This is our entire library.

It's now completely written.

It's done.

What we're gonna do now is run the Deno REPL here.

So just type in 'deno'; you're gonna get the read-eval-print loop, and then we can do "import greet from mod.ts", and you can see, I can just import a .ts file.

And then I can call this greet function with my name and it'll return the greeting.

Okay, cool.

So the function is working, the library seems to be working, but how do we ensure it continues to work?

That's where testing comes in, right?

Any project that you care about, you probably wanna have tests for, and Deno makes it really easy to write tests through our built-in 'deno test' runner.

It has a really simple interface, but it allows for very advanced capabilities.

So you can start out very easy, but if you wanna do advanced things like snapshot testing or BDD style testing or any number of other things, those are all supported.

But you can start out really easily.

It's very nicely integrated.

You'll see all of that.

So what we're gonna do next?

We're gonna write a test for the greet function, and I'm gonna show you how nicely VSCode integrates with Deno's testing system and then how you can run and debug your tests directly from VSCode.

Okay.

So test files in Deno usually end with _test.ts; any file that ends with _test.js or .ts or .jsx or whatever is gonna be automatically picked up by the Deno test runner.

Files that are not named _test, you can still put tests in them, but you need to specify them manually when you're invoking 'deno test'.

So let's actually write the test here.

So the first thing we're gonna do, actually, is just import greet and Greeting from mod.ts, because this is what we're gonna be testing.

And then we can start writing our test.

So to declare a new test, you type in 'Deno.test'; the first argument is the name of the test.

So let's do our first test, where we test the default value of the second argument, to make sure that you don't have to specify a greeting parameter.

It'll just default to something.

To do this, we'll write: const greeting is equal to greet, and then we're gonna call this... Node, or, sorry. Oh boy. 'Global Scope'. There we go.

And then we wanna assertEquals that greeting is equal to "hello global scope!".

Okay.

You'll see assertEquals is not actually imported.

Let's fix that.

So assertEquals is a function from the Deno standard library, which we need to explicitly import.

We can do that using this URL import here, Deno imports from URLs by default.

And yeah, so our standard library is hosted at deno.land/std.

I specify the latest version, and we import the 'asserts' module from the testing library.

So that's our Deno test.

What we're gonna do next is we're gonna add a test where we explicitly specify a greeting.

In this case, let's specify the "hi" greeting to make sure that that works.

And then we're gonna add another test that checks that the "good evening" greeting is also working.
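The shape of that test file looks something like the sketch below. To keep the sketch self-contained I've inlined a stand-in for the library and a minimal assertEquals; in the real _test.ts file you'd import greet and Greeting from "./mod.ts", import assertEquals from the std library URL, and wrap each case in Deno.test. The exact strings are my guesses.

```typescript
// Stand-in for the library under test; in the talk this is imported from "./mod.ts".
enum Greeting {
  Hello = "Hello",
  Hi = "Hi",
  GoodEvening = "Good evening",
}
function greet(name: string, greeting: Greeting = Greeting.Hello): string {
  return `${greeting} ${name}!`;
}

// Minimal stand-in for the std library's assertEquals; in Deno you would import:
// import { assertEquals } from "https://deno.land/std/testing/asserts.ts";
function assertEquals<T>(actual: T, expected: T): void {
  if (actual !== expected) {
    throw new Error(`expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`);
  }
}

// The three cases; in the real file each sits inside Deno.test("name", () => { ... }).
assertEquals(greet("Global Scope"), "Hello Global Scope!"); // default greeting
assertEquals(greet("Global Scope", Greeting.Hi), "Hi Global Scope!");
assertEquals(greet("Global Scope", Greeting.GoodEvening), "Good evening Global Scope!");
```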

So how do we now run these tests?

First way we can run the test is by just typing 'deno test'.

And it'll find all the test files in our project and run them, and you can see: it found three tests.

It ran all three tests.

They all passed.

No failures.

That's great.

Second way of testing.

We can use this little green button here which comes from VSCode and the Deno extension.

You can click that.

It'll run this specific test.

So it ran the default test.

I can also run the "hi" test and the "good evening" test, and they all pass.

I can also run all the tests on the project if I click this button and you can see this is actually running the test.

If I change the test here to fail, 'deno test' will fail with an error that the assertion does not match.

And if I run the test using VSCode, it will also say the assertion doesn't match and it'll put the assertion error right in the right place here in my file, so I can debug.

Let's fix that.

Run the tests again, and they all pass.

Fantastic.

Okay.

So we now have a project with the library, and it's tested; we managed to test it using the VSCode integration and the 'deno test' command.

Next thing we're gonna do is format and lint our project.

Formatting ensures that there's a consistent style across this project and also across all of the different projects that use Deno, which is very nice if you're, for example, contributing to an open source project: you don't know if they use Prettier or some other formatting system.

If it's a Deno project, they probably use 'deno fmt', and if you run 'deno fmt' in a project, it'll just work.

It won't mess up all the styles and put tabs instead of spaces and spaces instead of tabs and whatever else.

It'll just be the default consistent, Deno style.

It's very similar to Prettier, but it's written in Rust, so it's much faster.

It's orders of magnitude faster, and it can format JavaScript, TypeScript, JSON, and Markdown, and we're always working on other formatters for it as well.

In addition to the formatter, we also have a linter. Linters are things that don't check for formatting mistakes, like when you put a semicolon where you're not meant to, or use single quotes instead of double quotes or something like that.

Instead, the linter checks for things that are logic errors.

So for example, are you comparing to a NaN (not-a-number) value? That comparison is always false in JavaScript, so the linter catches these kinds of issues.

So it enforces, yeah, that you don't have logic errors.

This 'enforces styling' thing is wrong.

Should be up here.

Bug in my slides.

Sorry for that.

Okay.

So let's run deno fmt, and deno lint, and also show you how these integrate with the editor.

So: 'deno fmt', easy as that. This will find all the TypeScript, JavaScript, JSON, or Markdown files in my project.

And it formatted them: three files, because I have these two files, plus this JSON file that it also formatted.

Let's also run 'deno lint', which checks just my JavaScript files for logic errors.

As you can see, there's no formatting or logic errors.

The reason for that is that this is all directly integrated into VSCode as well.

My formatter is set to Deno, which means that if I do this and then say 'Format Document', it'll format it according to deno fmt's preferences.

And if I do this here with deno fmt, you'll see, that also works.

So I don't have to use the editor.

Same works for the linter.

And if I put in a logic error here, for example, I'm comparing 12 is equal to 12.

Let's see, if 12 is equal to '12 foo', you can see that deno lint warns, because this is probably a logic error: you don't wanna be comparing two constants.

This doesn't make sense.

It's always gonna return true.

Right.

And deno lint in the terminal also catches this and displays the error.

Yeah, and then you can fix this.

And if you fix it, the lint error goes away.

Let's see... there we go.

Okay, now TypeScript's complaining. Whatever; I don't know how to make it stop complaining about this.

This is just a terrible if statement.

Don't write if statements like this, people.

If I get rid of the if statement and deno lint again, the error goes away.

It goes away from my editor as well.

Okay.

Next slide.

So we have a module now.

It's tested, it's formatted, it's linted.

We wanna publish it for our Deno users to consume.

Deno has a Deno-first module registry (sorry, not a first-party registry) at deno.land, which allows you to publish Deno modules there, and people can import them.

It directly integrates with GitHub, and the versions are immutable, so they can't be changed after the fact, which prevents things like spoofing of dependencies or malicious code injection.

But it's not a blessed registry, right?

Deno imports from URLs, which means it can import from any URL.

You can also import from your own Web server; you can host your modules anywhere, even on your own domain.

It doesn't need to be deno.land/x, but deno.land/x is pretty nice, I think.

And if you wanna learn more about how to publish there, you can go to deno.land/x and follow the links on screen.

We're not gonna go through it right now because we don't have that much time.

But what we're gonna do instead is look at our first class documentation generation.

So Deno has a built in documentation generator called 'deno doc' that I can use to get documentation for a specific module.

For example, I wanna get documentation for this mod.ts file.

I can do that by typing in 'deno doc' and then the file that I want documentation for, and it's gonna print out all the definitions in that file that are exported.

So for example, the greeting enum with all the different variants and the greet function.

And it's also gonna print out the JSDoc here for me to consume.

So yeah, that's very nice.

If I wanna, for example, look at the documentation of the assert module, I can type in 'deno doc asserts' and it'll return all of the different functions in here.

If I look for 'assert equals' where is it?

There it is.

You can see that this is the signature for assertEquals.

And then this is a little example of how to use it.

And the documentation also exists online, on the Web.

You can go to deno... or sorry, doc.deno.land, and type in a URL to look at. I'm gonna look at this assert module again.

And you can see here, it has all the same things.

If I want to dig into assertEquals, where was it?

Equals, there we go.

I can click on that.

That'll give me a nice page here again, with the signature and the doc comment, all the different parameters.

It has everything like that.

And even a link where I can just copy to import that.

That's documentation generation.

This works for any URL on the internet which is a valid ES module and has TypeScript, or sorry, JSDoc comments.

It works especially well if you have TypeScript annotations in your code. But yeah, you can't just use this for Deno; you can also use this for things like Preact, which is a module that's published to NPM.

If we import this via esm.sh, you can see that we also get documentation for this.

Preact doesn't have very nice comments on its files.

Actually, maybe a better example is Firestore here.

Give it a second to load, and you can see here, this is the Firestore module from NPM, all documented by deno doc.

And yeah, you can click on this and, and view all the different methods on this class and everything like that.

So that's Deno doc.

It's also gonna power our upcoming global symbol search on the Deno website.

You're gonna learn more about that in the future.

I might give a talk on that because it's really cool, actually.

And it also powers your inline documentation at deno.land/std and deno.land/x.

So before we're done here, let's actually publish to Node.

So we've used a bunch of Deno here.

Deno supports loading TypeScript modules natively, but Node and the browser do not, which means that to be able to use these modules from Node, we need to emit .js files.

Node can only consume JS files, right? So we have to do this emit.

Additionally, you usually only import packages from NPM in Node, which means if we want to have any chance of our module actually being used by developers that are using Node, we need to publish our module to NPM.

I'm gonna show you how to do that.

Also, Node does not support the same set of Web APIs that browsers and Deno do, which means that sometimes, if you use a given Web API in your project which isn't available in Node, you'll need to polyfill it for your Node users.

That's a consideration that you need to take into account.

To solve all these problems, we have a tool called dnt, the Deno to Node Transform, which is a Deno project tool.

It does automatic transpilation from your Deno source code to CommonJS and plain JavaScript ESM modules that you can distribute on NPM, right.

It can automatically replace globals that are not available in Node with polyfills.

And one really cool feature is that it can actually transpile your Deno tests to your Node distributable, and then run them in Node to ensure that your code doesn't just work in Deno, but that the transpiled code actually also works in Node.

This gives you the best of both worlds.

You can develop using all the built-in Deno tooling: the linter, formatter, testing framework, stuff like that, and the editor integration.

But you can still make your module available to the large number of users that are currently using Node.

Right.

So let's see how to do that. To do that, we're gonna need to set up a little bit of a build script, because we need some place to transform our code.

We're gonna then run our tests in Node, and then publish to NPM.

So first thing we're gonna do is create a build script.

I'm just gonna call this _build.ts.

This is what we're gonna run to build a project.

First thing we're gonna do is import the build function and this utility function from the dnt module on deno.land/x.

Then we're gonna empty the npm directory, if it exists.

The npm directory is the folder we're gonna emit to.

So if we run this multiple times, we don't want the files to override each other and conflict.

So what we're gonna do is delete the npm folder first, then recreate it and put all the output files in there.

What we're gonna do then is copy over the build function call here.

This is the thing that's actually gonna invoke dnt and tell it to build the project.

We need to specify the entry points for our library.

In this case it's gonna be mod.ts because that's our entry point.

We need to specify the directory we want to emit the transpiled code into, which is gonna be the npm directory.

And then we can specify something called shims.

Shims are these polyfills, right?

So if we're gonna run tests in Node, we need to polyfill the Deno test function for Node, because Node does not have the Deno test function.

So we're gonna tell it that for tests, and for development mode, we're gonna shim the Deno namespace.

And then we can just specify what we want our package.json to look like.

So the name is gonna be lucacasonato/greeter on NPM.

The version I'm gonna take as the first argument from the command line arguments that I'm gonna pass in here.

And the description is "a demo project for my Global Scope talk", and the license is MIT.

What are we gonna do next?

Let's actually copy over our readme as well here.

So just to make that nice.

And then in our little build script, also ensure that after we're done with the build, we copy over the readme into the npm directory.

So it also gets published.

We would then call 'deno run -A _build.ts', and then this is gonna run dnt.

It's gonna run npm install to install all the npm dependencies.

It's gonna build the project.

It's gonna type check the project in Node.

It's gonna emit the TypeScript declaration files, the ESM package, and the CommonJS package, and it's gonna then run tests.

Yeah, so these are the tests that we wrote earlier in the mod.ts file. This time they're running inside of Node, not inside of Deno, with our emitted code. It runs them twice: once against the CommonJS output, once against the ESM output for Node.

Just to ensure everything is working.

And then once that's done, it says complete, and it copies over the readme file.

And then we have this nice npm directory here, with our ESM folder, which is the ESM version of the package, the CommonJS version of the package, the types, and the file that we use to run the tests.

We can then cd into this directory and call npm... actually, let me confirm that I did not put a version in there.

0.1.2.

Sorry.

Let's run it again quickly with the version number this time.

Okay.

We ran the tests again, and now we're gonna cd into the npm directory and call npm publish.

That's gonna require a one-time passcode, which is this. And that's published.

And now if we go to NPM let's just go there and where is it?

And there we go: published a few seconds ago, 0.1.2, and we can use this from Node now.

We don't have enough time, so I'm not gonna actually showcase this, but trust me, you can.

Back to the slides.

Final slide.

So what did we do just now?

We wrote a library written in TypeScript that runs in Deno, Node, and the browser.

We added and ran tests.

We set up linting and formatting.

We published to deno.land/x and to NPM, and we actually needed no tooling outside of what the Deno project provides.

You wanna get started yourself?

That's cool.

You can install Deno from deno.land, the Deno website.

Follow all the links in here: the manual, the examples, where to install dnt, the example repo. And you can deploy your code to the edge with our Deno Deploy runtime, at deno.com/deploy.

Thanks for listening to my talk and I'll see you all soon.

For decades, computer makers drove Moore's law by making bigger and faster CPUs.

This pattern has changed with CPUs adding more and more cores, but not necessarily running all that much faster year on year.

So how does a single threaded language like JavaScript keep up?

Ujjwal Sharma is a compilers hacker at Igalia, serves as a co-chair of TC39, and is a Node.js core collaborator.

Ujjwal asks, where does JavaScript stand in the new world?

And how can it adapt?

Hello, and welcome to my presentation about multicore JavaScript.

The past, present and future. Today, during the course of this presentation, I will try to give you some insight into JavaScript.

What I mean is that like many other beautiful things, JavaScript is constantly evolving and it's, quite normal to feel lost.

It's quite normal to look at all these things that are happening within the world of JavaScript in complete isolation and feel intrigued about why things are happening a certain way.

But at the same time, I feel that it's quite easy to lose the bigger picture, which is: why are these things happening, and what are the different trends and directions that the language is evolving in?

I cannot, of course go through all of it.

But I would like to explore, through the course of this presentation, one of these trends, which, as I said, is multicore JavaScript.

Before we begin, however, I would like to take a moment to introduce myself.

I am your presenter.

My name is Ujjwal Sharma, but I'm commonly known on the internet as @ryzokuken, and you can find me on Twitter or GitHub or whatever social media platform you prefer.

I'll probably be there by that username; feel free to berate me or, you know, send me questions.

I'm a compiler hacker at Igalia.

If you don't know about Igalia, we are a free software consultancy.

So over there I work on compilers and programming languages.

And, as you might have guessed, mostly JavaScript. I am also a co-chairperson of TC39.

So that's at the top of my list of some of the most descriptive names ever.

But if you don't know what TC39 is: TC39 is a technical committee, the 39th technical committee of the European Computer Manufacturers Association.

Now, that doesn't tell you anything either. It is a committee of people who are invested in JavaScript, like yours truly and, hopefully, you, and who like to evolve the programming language and work on the standards around it.

So we work on new features for the language, on testing of different engines and browsers, and so on.

I am also one of the co-editors of ECMA-402.

This is the ECMAScript internationalization API.

So if you've used that, then you've seen some of my work.

I'm also a core collaborator in the Node.js project.

But apart from that, I am from New Delhi in India and I now live in A Coruña in Spain.

It's quite sunny today, as it mostly always is.

There's been a heat wave. But I love dogs.

I know this is an online conference, but if you have a dog, then I'm gonna pay you a visit.

But one thing that I want to mention before I start is that this is in no way something that I've tackled or am working on by myself.

This is the collaborative effort of some of the most skilled engineers that I know.

Different organizations, including Igalia and Bloomberg and Google, have come together to work on this next iteration of JavaScript that I'm gonna talk about in just a bit.

So, diving in: one fact that I might wanna rub in your face a little bit, sorry in advance, is that JavaScript is kind of ancient.

And if you think about it, you start to realize it, right?

I mean, a lot of these programming languages that we compare ourselves with, a lot of our contemporaries, like Golang or Rust and Zig, and, you know, now there's Carbon, all these programming languages are quite new if you compare them to JavaScript.

For example, one of the first few users of JavaScript.

But, jokes apart, JavaScript was conceived, which is a word in itself, in December of 1995.

A little bit of context.

I was born after this fact.

So you might or might not have been, but you will have to agree with me that 1995 was years ago, right?

Back then, computers kind of looked like this and worked like this.

So the top-of-the-class Intel Pentium Pro that was released that year had one core, and it had maximum clock rates up to 200MHz; the AMD equivalent had slightly lower clock rates, but also had a single core.

However, these were top-of-the-line processors being released right then. The reality is that a lot of us probably didn't use exactly these processors.

I know I didn't, but even years later we would use consumer processors like, you know, the Intel Celeron, which is what I used in my home computer.

When I was little, years after 1995, it was also a single core processor.

Here's a tiny picture of me playing on that Celeron computer with my older brother.

I couldn't beat him.

Hey, I can build video games now.

Today's computers, by the way, not to get sidetracked, are very different.

So for example, the new, is it even new anymore?

But yeah, the newer Samsung Galaxy S23 has eight cores.

Like, the CPU in that phone has eight cores, and we could go into the details of what each of those cores is, but that's belaboring the point.

And well, if you use one of those newer MacBooks it might have like, 10 cores.

So, so all of these new ARM processors and, even non ARM processors have a number of pro cores.

And, we're starting to get like you know, Intel X 86 CPUs like the 12th gen.

I think it is right.

The newest Intel processors have, a number of cores.

So the computing industry evolved into quite a different corner, I feel, than where we started off, right?

Like, CPUs used to be single core, or maybe more, but they used to mostly scale by frequency.

And I remember when I was younger, people would care a lot about, well, how many GHz of juice does your CPU have?

Now we don't care about that so much anymore, because CPUs don't scale by frequency today. But let's talk about concurrency.

So.

Now that we have discussed the shifts in CPU design, how does this affect concurrency?

How does this affect parallel programming?

So, a quick primer. I'm really sorry if you know all of this, but just to get everybody on the same page: imagine that I wanted to get a bunch of tasks done, like these four tasks, and each task took a different amount of time to perform, right?

So A, B, C and D.

And I could do them all as you'd expect one after another in order.

And that would take me a certain amount of time, which is A plus B plus C plus D. Or, if they were not dependent on each other, if they were completely independent, I could try something more interesting.

I could try something else.

If I had the ability to perform multiple tasks simultaneously, that would make things faster.

Of course I could. I could theoretically cut down the time in half. Of course, as you can see from this diagram, it's not actually exactly half; it's the maximum of A plus C, which is the two tasks that I club together, and B plus D. The problem might be quite clear to you at this point.
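To make the arithmetic concrete, here's a tiny sketch with made-up task durations, comparing the sequential total A+B+C+D against the two-thread schedule, where the finish time is the duration of the slower thread:

```typescript
// Hypothetical task durations, in seconds (not from the talk's slides).
const A = 3, B = 2, C = 4, D = 1;

// Doing everything on one thread costs the sum of all tasks.
const sequential = A + B + C + D;

// Splitting into two threads (A and C on one; B and D on the other) costs
// the duration of the *slower* thread, not half the total.
const twoThreads = Math.max(A + C, B + D);

console.log({ sequential, twoThreads }); // { sequential: 10, twoThreads: 7 }
```

With this partition, you gain three seconds, not five: unless the two threads are perfectly balanced, the result is always worse than the theoretical halving.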

But basically during the course of this presentation what's happening here is what we will refer to as the thread-like model of computation.

If you have a little background in what threads are and what threaded programming is, this might be quite familiar to you, but essentially the idea is that we have two threads of execution and they're both working simultaneously.

As you can see, one of the most common pitfalls of thread-like programming is this, right?

If I organize tasks just a little differently, it's still not completely half and half.

Like, of course that's possible theoretically, but it's not so easy practically, although over a long course of time you could try to do that.

That said, the idea is that just a different configuration of what goes where changes things, like this. And now, as you might say, there's a certain amount of time that I'm gaining just because I did things more efficiently.

This is complicated.

Of course.

If you are not super familiar with this, or even if you are, sometimes you just don't know how long these tasks are gonna take, you know, and you make the wrong call; sometimes you just have to trust your gut.

So this is not a complete science. It's a lot of finicky guesswork, and it might go wrong.

So this is not my favorite model of computation.

I'm sorry to all the thread-like fans out there.

But let me introduce you to another one.

So let's take a deeper look into our tasks, right?

We are back to the basics.

We have A, B, C and D all sort of operating sequentially.

And if you look deep down... what is that?

Now, all of our tasks are composed of subtasks, right? And they're not contiguously composed of subtasks.

So let me give you an example that makes sense.

Let's say we have an async function called 'do' that takes some arguments, and then it fetches something.

So it waits for the network to respond and then it does some operation f1.

Then it fetches something else and does another operation, then fetches something else, and then returns that value.

If you think about it, this function is basically wait, do, wait, do and wait.

And then of course do at the end, which is returning, but it's not a whole lot of computation.

The idea is that during the life cycle of our function 'do'... that should have been 'await fetch', by the way, but you get the point.

The idea is that it's not always continuously working.

It's not always continuously using the CPU juice; a lot of the time it's just lazing around, waiting for things to happen, like the network or the disk to respond.
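A self-contained sketch of the kind of function being described, with the network waits simulated by a timer so it runs without a network; the names `doWork`, `f1`, and `f2` are placeholders, not the speaker's code:

```typescript
// Simulate a network round trip: the function spends this time waiting,
// not computing.
const fakeFetch = (data: string) =>
  new Promise<string>((resolve) => setTimeout(() => resolve(data), 10));

const f1 = (s: string) => s.toUpperCase(); // a little computation
const f2 = (s: string) => `${s}!`;         // a little more computation

// wait, do, wait, do, wait, return: mostly waiting, barely computing.
async function doWork(name: string): Promise<string> {
  const a = await fakeFetch(name); // wait
  const b = f1(a);                 // do
  const c = await fakeFetch(b);    // wait
  const d = f2(c);                 // do
  return await fakeFetch(d);       // wait, then return
}
```

While `doWork` is parked on those awaits, the thread is free to run any other task that is ready.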

Right.

So as I just mentioned network calls are not the only sources of waiting time.

However, since we are JavaScript developers, a lot of us are writing Web applications, so they're certainly the most common ones that you'll encounter, purely statistically, I suppose.

What if I did this weird thing?

Okay.

Let's say that I can only execute one task at a time.

But I just kicked off all the tasks at once and then executed whichever was ready; if more than one were ready, of course, I could do them one at a time.

But if some were waiting, I wouldn't have to do anything.

So what if I did that?

So let's stack them all up on top of each other. We start at the beginning and then, okay, we get the first subtask; let's do that.

And then we move on, and then the second subtask, of a totally different task; let's do that.

And then the next, so we keep moving, and whatever's ready, we focus on that.

So this is basically a task or event loop, which you might be familiar with.

It's an ingredient that is present in virtually every modern JavaScript runtime.

And this is what, during the course of this presentation, we're gonna call the Web-like model of computation, so look out for that.
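The "kick everything off, then service whichever subtask is ready" idea can be sketched like this, with the waits again simulated by timers (the task names and delays are made up for illustration):

```typescript
// One helper that just waits: while a task is parked here,
// the event loop runs whatever else is ready.
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

const completed: string[] = [];

async function task(name: string, waitMs: number) {
  await delay(waitMs);  // the "wait" part of the subtask
  completed.push(name); // the "do" part runs when this task is ready
}

// Start A, B, C, and D together instead of one after another.
const all = Promise.all([
  task("A", 30),
  task("B", 10),
  task("C", 20),
  task("D", 0),
]);

// Tasks finish in readiness order, not start order.
all.then(() => console.log(completed)); // ["D", "B", "C", "A"]
```

The total wall-clock time is roughly the longest wait (30ms here), not the sum of all waits, even though only one thread ever runs.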

Moving on: as we've just discussed, there's two models of concurrency, right?

There's the Web-like, and there's the thread-like.

Well, what's the difference?

Web-like is mostly composed of run to completion kind of mechanisms.

So things like workers, right?

What happens when you create a worker: you start a worker, you give it a certain amount of code to run, and then it runs to completion, right?

There's not a lot of cross mingling going on.

You just let the worker do stuff on its own more or less.

So that's sort of simpler to wrap your head around, I hope.

There's sometimes message passing happening between these different workers, something that you might achieve with postMessage in JavaScript.

There's async APIs.

So certain calls are asynchronous, like, you know, calls that return promises in JavaScript.

And the most important part, at least for me, because I like my programming to be easy, is that by design there are no data races and there's complete data isolation, right?

If you're just returning promises, if you're just using workers, there's no shared memory. There's no need for synchronization, which means that you don't run into some of the classical problems of concurrent programming, like in the thread-like model.

Moving on, and talking of the devil: in the thread-like model, you have synchronous APIs and manual synchronization, right?

So there's a lot of these different synchronous tasks being executed all in parallel, and then you have to synchronize between them manually. For this, JavaScript has Atomics, which allows you to define atomic operations.

Then we have the concept of data races, because there's the idea of shared memory: different threads can access shared memory through things like SharedArrayBuffers.
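A minimal, single-threaded illustration of those two primitives; in a real program, the same `SharedArrayBuffer` would be posted to a worker, and both sides would go through `Atomics` so their reads and writes don't race:

```typescript
// Four bytes of shared memory, viewed as 32-bit integers.
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

// Atomic read-modify-write: safe even if another thread
// (e.g. a worker that received `shared` via postMessage)
// were incrementing the same slot concurrently.
Atomics.add(counter, 0, 1);
Atomics.add(counter, 0, 1);

console.log(Atomics.load(counter, 0)); // 2
```

With plain `counter[0] += 1` across two threads, increments can be lost; `Atomics.add` guarantees each one lands.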

And this is, more or less, a rudimentary diagram that I made, which tries to show what a Web-like system looks like, right?

So there's these different event loops that might be working all together.

So, you know, you don't have to do only one thing at a time; if you can do more things at a time, you can still use an event loop.

And each running instance has its own memory, and they can interact with each other somewhat.

And then there's the thread-like, where you have these different parallel running execution threads, each having their own memory, of course, and then some blocks of shared memory that they're each accessing, but not freely; they have some locks there, those tiny emojis that I used.

But you get what I mean.

The reality, unfortunately, is that JavaScript is quite a bit more complicated.

We have, as I mentioned, a little bit of both, and they're sort of just working together.

So how do we make sense of this, and how do we move forward?

Talking about the Web-like model.

It has its own goods and bads. Of course, the goods are that it's easy to use and easy to reason about. I think virtually everybody who writes JavaScript uses this every single day, and while promises can sometimes, especially when you're starting out, be complicated to wrap your head around, I think they're certainly easier than something like threads, which, in my case, took two semesters' worth of classes at university, and that's not ideal, is it?

I mean, so yeah, one of the best features of Web-like, for me at least, is that it's easy to comprehend and easy to work with.

At the same time, some of the reasons why that is, are that it's causal: things happen in a certain way that you can make sense of.

And, as I mentioned just a little bit ago, it is data race free by construction, which means that unless you put in some contraptions, things should be fine.

And at the same time there's isolation, which is more or less the same thing.

Another thing is that, with the class of applications that we usually build, with a lot of wait times and so on, asynchronicity generally corresponds to smoother applications and smoother experiences. And a lot of us build interfaces, and asynchronous interfaces, I can assure you, are smoother than interfaces that rely on other concurrency models, because of the nature of asynchronous programming.

It's also, and I guess this could be wrapped into the first point, less focused on manual synchronization mechanics.

So you don't need to learn a lot about locks and queues and semaphores and mutexes, which are also locks.

But you get the point: it's a lot of complicated stuff you don't need to think about. You can focus on building your application.

There are bads, however, which is that it leaves performance on the table.

It lets the event loop implementation, in most cases, decide what's best. And that's that.

Now, of course, if you're an expert who spent years learning how to optimize code and write parallel programs, you might be able to min-max your way out of this and essentially build an application that is more performant than something that uses the basic event loop.

That's a possibility, but it's very difficult.

So, more or less.

On the thread-like side there's the goods.

Of course, one of the biggest goods, for me at least, is the WebAssembly interop.

Now, if you're not familiar, TL;DR: WebAssembly has support for lightweight threads, which is great for WebAssembly, and certainly makes more sense for them than it does for us, for sure.

But the important part is that, you know, when you write threaded code in WebAssembly and you're trying to interoperate on the JavaScript side, it makes sense, right?

Like, conceptually it makes sense to write threaded code, and it all interops together.

Not only does it interop well with WebAssembly, but more specifically it interoperates well with WasmGC.

Now, WasmGC, again TL;DR, is a new proposal, I think it's beyond just a proposal now, in WebAssembly; it's becoming a reality. It exposes a lot of very interesting things for garbage collection, and that is also assisted by the thread-like model.

So I think those are its two biggest benefits, right?

Another one, as I mentioned, the pros of one might be the cons of the other, is good performance.

Thread-like, if you try hard, can be quite performant, and you can min-max your way out of every single optimization problem if you work hard enough.

And you can believe that in some of the biggest engineering powerhouses, people are doing exactly that.

Bads, however, sort of translating from the other side: it's hard to reason about and use, which doesn't apply to the Web-like model.

It also relies on manual synchronization, so every single operation that you do in the shared space outside of your own isolate needs to be synchronized manually, using tools like locks and so on.

There's also the possibility of data races, which you need to avoid; it's a science in itself.

It's something that people take long courses on.

It's not so easy.

And I speak from experience.

At the same time there's, from time to time, acausal astonishment: things can sort of happen out of order.

All of these combined create an effect that I like to call "must be this tall", which I'll show you in just a bit; it creates an effect where people feel that they're not quite ready to work with this model yet.

And it also exposes more timing channels, although with Spectre and Meltdown and all of those, that's more or less water under the bridge at this point, but yeah, just FYI.

Yeah, this is what I meant when I said it creates a "must be this tall" kind of effect.

So yeah, it certainly makes you feel that way.

For sure.

Now.

All of this said, TC39 has an interesting problem.

Right?

Should we focus on the Web-like side, which totally exists?

Or should we focus on the thread-like side, which also exists in JavaScript?

And we have an interesting solution to this, which is let's focus on both of them.

So let's break this initiative down into phases.

What are we gonna do?

And when are we gonna do it?

So on the Web-like side, what do we need for the first, initial phase?

What's the basic minimum? The things that we ought to do first are: we need language support for asynchronous communication.

And secondly, we need the ability to spawn arbitrary units of computation.

And hold that thought; I'll reveal a bit more about these in the next slide.

But yeah, these are the minimum set of items that we need in order to build asynchronous applications.

On the thread-like side, we need shared memory, we need a basic synchronization primitive to do synchronization on top of that, and we need the ability to spawn threads.

Of course, there cannot be threads without threads.

Right.

Now if you paid attention to the last slide, you'll realize we're basically done here.

Everything that I talked about in the first phase is done.

We have promises, we have async/await, which makes everything perfectly ergonomic.

We have workers, which can be threads of execution.

We have SharedArrayBuffer, which is shared memory, and Atomics, which can help you do synchronization.

So let's move on.

Let's not spend too much time in the past, and let's move on to phase two.

Now, in phase two, on the Web-like side, we are trying to solve the problem of ergonomic and performant data transfer, and the problem of ergonomic and performant code transfer.

Well, you know, we'll get deeper into that. But on the thread-like side, we would require higher level objects that allow concurrent access, because right now everything's a mess; we have normal objects, which are a mess.

And we need higher level synchronization mechanisms.

So all the pain points that we have right now are composed into this one slide, if you think about it.

Yeah.

As I said, it's designed to address the biggest observed pain points, this time on the Web-like side.

So, transferring data in JavaScript when you're using this Web-like model is expensive, right?

Transferables, the kind of things that you can transfer across different contexts, are very limited.

There's always the weird reparenting of prototypes when things do get transferred. So if your project relies a lot on prototypes, it's not so easy to transfer them around.

Often, and I've seen this happen over and over again, and I don't blame developers, they will deep copy objects from one side to the other.

And this, of course, is probably one of the big reasons why there's this weird reparenting of prototypes happening.

Transferring data is also unergonomic. I mean, yeah, it is expensive, but it's also not so easy to do.

It often requires you to serialize your entire object, like to JSON, and then deserialize it on the other end, which is not the most ideal way to do things.

This results in identity discontinuity, which means that if you take a thing from one realm, throw it into another, and take it back, it's a completely different thing now.

So the identity of objects no longer holds.
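The JSON round trip the speaker mentions shows both problems at once: the copy is a brand-new object (identity is lost) and its prototype has been reparented to plain `Object` (the class `Point` here is an illustrative example, not from the talk):

```typescript
class Point {
  constructor(public x: number, public y: number) {}
  norm(): number {
    return Math.hypot(this.x, this.y);
  }
}

const original = new Point(3, 4);

// "Transfer" the object by serializing and deserializing it.
const copy = JSON.parse(JSON.stringify(original));

console.log(copy === original);     // false: a different identity
console.log(copy.x, copy.y);        // 3 4: the data survived
console.log(copy instanceof Point); // false: the prototype did not
// copy.norm() would throw: methods are gone after the round trip
```

`structuredClone` and `postMessage` do better than JSON on supported types, but they still produce a new identity and reparent custom prototypes in the same way.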

Transferring code, however, is the biggest elephant in the room. I talked a lot about data, but transferring code is basically impossible.

We transfer strings and we throw them into eval, and that's basically it.

That's not great, is it? We shouldn't call it a day. We should focus more on that.

So, to fix that a proposal that's, something we're working on right now is module blocks.

It aims to solve the problem of ergonomic sharing of code.

It is spearheaded by Surma who works at Shopify.

And it looks something like this.

So if you notice now you have like a module block, like you literally say module and then you start a block.

And all the code inside of that is, is a module in itself.

And you can assign that to a variable and you can then throw that variable around.

You're essentially transferring code between different parts of your, or of your code?

You can import that block and run it. You can call it inside of a realm; if you don't know what that means, follow up on the ShadowRealms proposal.

It's awesome.

But yeah, so module blocks.

It fixes one of the problems.
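Based on the talk's description, the proposal's syntax looks roughly like this; note this is proposed syntax that current engines don't run yet, and the worker usage shown is illustrative of the proposal's direction, not shipped API:

```js
// A module block: the code inside is a module value you can
// assign to a variable and pass around, e.g. to a worker.
const greeter = module {
  export function greet(name) {
    return `Hello, ${name}!`;
  }
};

// Illustrative: hand the module itself to a worker instead of a URL,
// so the code travels with the message rather than as a string.
const worker = new Worker(greeter, { type: "module" });
```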

Another upcoming proposal that we have in the pipeline is called shared disjoint heaps.

It's more of a concept for now, but it aims to solve the problem of both ergonomic and performant sharing of data as well as code.

The idea is this: right now in JavaScript, you're not mindful when you create variables and so on; you assume that there's one heap where all these things are happening.

Right.

And of course, if you run a different process running JavaScript code, or open another tab, maybe that's on another heap, but these heaps are not working together.

Thanks to this proposal, you should be able to separate your heap into distinct, composable heaps, and the agent-local heap (the main heap, basically the only one you have access to right now) can point into these distinct, shareable heaps.

And the shareable heaps cannot point into the agent-local heap, of course, because they cannot depend on it.

But the unit of sharing then would be these transferable heaps.

Right?

You can have one heap that runs some code and contains some data, and you can pass it around.

And that would work.

On the threading side, let's see what we have.

Well, surprise, surprise, this work is also designed to address the biggest observed pain points.

So first of all, addressing the elephant in the room, nobody knows how to use SharedArrayBuffers and Atomics.

Well, I've been giving this talk at in-person conferences; here I cannot ask you for a show of hands.

But tell me on Twitter if you know how to use either of these well, and we'll have a chat.

I feel that people don't know how to use them well.

At least I don't.

And some of the folks who designed them are as clueless as I am.

And it's not their fault.

I mean, these are difficult features to use, because the impedance mismatch is quite high, and they're not exactly the simplest to wrap your head around.
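To give a feel for why: here is a minimal sketch of these primitives in action. This much runs in Node as-is, but real multi-threaded use means juggling workers, raw byte offsets, and wait/notify by hand, which is where the impedance mismatch comes from.

```javascript
// A SharedArrayBuffer is raw memory that can be shared across workers;
// Atomics provides race-free reads and writes through a typed-array view.
const sab = new SharedArrayBuffer(4);   // 4 bytes of shareable memory
const counter = new Int32Array(sab);    // viewed as one 32-bit integer

Atomics.store(counter, 0, 41);          // atomic write
Atomics.add(counter, 0, 1);             // atomic increment
console.log(Atomics.load(counter, 0));  // atomic read: 42
```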

Well, that's something that we need to fix.

And a proposal to deal with some of those problems is 'structs'.

So it aims to solve the problem of higher level objects that allow concurrent access.

As I mentioned, right now we just have raw objects, which are a mess under threaded access.

This is spearheaded by Shu, and it looks something like this.

So you can declare your classes with the struct keyword, and this will create a struct instead of a normal object, which means that it's sealed, you cannot extend the prototype, and so on.
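The slide isn't in the transcript either; one sketch of the proposed syntax, going by the structs proposal (proposed syntax only, subject to change, not runnable in today's engines):

```js
// Proposed structs syntax (TC39 structs proposal, not yet implemented):
struct class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}

const p = new Point(1, 2);
p.z = 3; // would throw: struct instances are sealed, no new properties
```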

Okay.

So it's been a lot, but bear with me here.

With all that done, what does the future phase look like, after we're done with what we're doing right now?

What do we have planned for the future?

What can we do that can help things further?

Well, on the Web-like side, I think that we can have lighter-weight actors.

I feel that the cost of spinning up workers is quite high, which means that you cannot have a paradigm where you have a lot of really tiny tasks and just spin up a new worker for each one, do the task, and then spin up another.

The cost is too high.

Now, of course, people are working on fixing that in many ways, with some success, but it's not a given.

So maybe we can have lighter weight actors which would make people's lives easier.

We can also provide more integration with scheduling APIs.

Maybe you can have some insight into how the event loop is doing.

And so on.

This is a more advanced feature, and one that some people feel is not quite JavaScript-y and is too much, but let's say that it's certainly a possibility.

We can also have a concurrent standard library.

We could have a standard library that supports promises and so on.

On the thread-like side, we could have better tooling that helps people develop these kinds of applications with more confidence, I think.

We could have tighter integration with WasmGC; although that's mostly a given at this point, I think we could always do better.

Right.

And again, a concurrent standard library.

Maybe that's the need of the hour? I don't know; you tell me.

But yeah, before I finish off, I'd like to give special thanks to Daniel Ehrenberg and Shu-yu Guo, both of whom have been very helpful through the course of my work as well as this presentation, and also to the organizers.

I know I'm quite difficult, but thanks for bearing with me and thanks for inviting me.

It's been an absolute pleasure to be here.

And with that, thank you very much.

JavaScript is increasingly running everywhere.

And there is a subset of Web platform APIs that are becoming ubiquitous across every JavaScript runtime.

In this final talk of Global Scope, James Snell, core contributor to both Node.js and Cloudflare Workers and a member of the Node.js Technical Steering Committee, will introduce the JavaScript standard library and the ongoing efforts to define it.

Hello, everyone.

I hope you are enjoying Global Scope.

I am James Snell.

I'm a core contributor to Node.js, a member of the Node Technical Steering Committee, and a systems engineer at Cloudflare on the Workers runtime.

Today, I'm here to talk about the work to establish a standard API library of sorts for JavaScript run times.

Let's start with an example.

What is the correct way of generating a SHA256 digest over a stream of data in JavaScript?

Is there just one way of doing it?

Unfortunately, the answer is no.

If you are using Node, the way you'd generate that digest is different than if you are running Deno or Cloudflare Workers or a Web browser or any other JavaScript runtime that's out there available for you to use.

Now, a lot of the runtimes, like Deno, do have a Node compatibility layer that's being developed.

I mean, there are ways you can make it work the same way across platforms, but out of the box it's not easy, and they work differently.

When I first started contributing to Node seven years ago, I was repeatedly told by several of the core contributors at that time that Node was not a Web browser and should never behave like one.

This reasoning has been used for years as the justification for Node not implementing the same kinds of APIs that developers have available in Web browsers.

Readable streams for instance.

Right?

Or Web Crypto, or AbortSignal, EventTarget, Blob, File.

Whatever.

Node has a streams API.

Why would it need to implement this browser streams API?

Node has a crypto module.

Why does it need Web crypto?

Node has event emitter.

Why does it need event target?

There's been this ongoing conversation for years.

And there's been this reluctance on the part of the Node project to look at these things that are happening out in browsers and say, "Hey, you know what, maybe those APIs would work here.

Maybe we should provide them here".

And the justification has always been: well, we're not a browser, we don't need it.

And while it is true that Node is not a Web browser, the fact remains that there is still significant overlap in the kinds of things developers need these platforms to do.

It doesn't matter if your JavaScript is running in a browser or on the edge or on a server or on an IoT device; you still need access to the same fundamental capabilities.

You're still gonna need to be able to stream data.

You need to be able to parse a URL.

You need to be able to schedule and cancel asynchronous tasks.

You need to access crypto. These things are ubiquitous, so it doesn't make any sense for them to be different in every single JavaScript runtime.

Why is there a separate API for streaming in Node versus everything else?

Why would there be a separate API for file system access?

These things should work the same way no matter what JavaScript-enabled runtime you're using; you are, after all, all using the same programming language.

And if you look at other languages, you know, C++ has a standard library; Go, Rust, all these other environments have standard libraries, and you can write to those standard APIs so that things will just work.

But in JavaScript, we haven't had that.

All right.

For the past year I've been working directly on Cloudflare Workers.

Workers is a JavaScript runtime that serves HTTP requests and more from a globally distributed edge network.

It sits between your browser and an origin server.

It runs JavaScript there in the middle.

One of the top requests we get from users is to support any arbitrary JavaScript module published out on npm: just take XYZ module, install it, and make it work in Workers.

There are a number of modules that will just work, but unfortunately the reality is that many more, most of them, simply don't, simply because Workers is not Node.

And most of those modules on npm have been written against the past 10 or 11 years of Node.js APIs; they're using things like the Buffer API, which is specific to Node, or Node streams, or Node's EventEmitter.

All right.

These are things that only exist in Node or are under Node's control, and Node can change them whenever they want.

And we just don't have implementations of those APIs in the Workers environment.

And rightfully so; for instance, we don't have a file system in Workers.

Having a file system API doesn't make sense.

But there are all these things out on npm that were written for one very specific platform yet need to be used everywhere.

Now, Workers users have found their way around such limitations by using polyfill implementations of these APIs: just pure JavaScript implementations of these Node core things.

Sometimes they're handicapped.

They don't actually provide the full features of a Node API, but they get close.

And they'll take these JavaScript polyfills, bundle them with their worker scripts, and deploy those.

And often these are the same polyfills that developers use in the browsers.

They're out there.

They work, to an extent, and while we absolutely can make things work using polyfills, the fact that we have to rely on them is pretty unfortunate.

Now, these are, after all, just JavaScript runtime environments; why can't things work the same way across these multiple environments without relying on polyfills of platform-specific APIs?

Certainly where those areas of overlap exist.

Modules that are out there doing crypto: why do they have to depend only on Node core APIs?

Why can't there be a standard?

Fortunately there are standard APIs out there that we can use to start helping drive more alignment between JavaScript runtimes.

All these are APIs that developers are already familiar with.

We're using them in Web browsers: ReadableStream, for instance, or URL, or AbortController, EventTarget, things I've already mentioned here. Millions of developers around the world, in millions upon millions of applications, are already using these things.

They've already gone through a standards process, at the WHATWG or the W3C.

And they've already been tested; they definitely work. So why can't we just use these APIs rather than creating something new, or something specific to any one platform?

So the standard API for JavaScript is all the APIs that developers are already using in Web browsers today.

And it's important to recognize what these are, but also to recognize some of their limitations.

It was probably about five years ago that we introduced URL to Node.

There had always been an API in Node, url.parse, for parsing URLs.

But about five years ago we started receiving reports of some bugs in that implementation.

And I started looking into those and found that I could not actually fix those bugs without completely killing the performance of url.parse.

That code had been very highly optimized to be fast.

And every time I started poking around in it, it felt at times like if I just looked at the code wrong, it would slow down.

That code is so complicated, so complex, that there was really no way to get the proper handling, the proper parsing of all URLs in all edge cases, without almost completely rewriting the thing in order to maintain performance.

So rewriting is exactly what I did.

I took it as an opportunity to say: hey, you know what, there's a standard URL class out there.

There's this URL object that exists in browsers.

There's a whole spec that says how this thing is supposed to work.

It defines the exact parsing steps.

Exactly what you're supposed to do.

And it's been battle tested.

It was developed over years, and browsers had worked on this parsing algorithm for a long time.

And it was like: it's about time, let's just get this into Node.

And at that time it was a very controversial thing, introducing this second URL parser implementation.

But we got it in there, and found that, yeah, there were still developers who wanted to keep using url.parse, and they still do today, despite its bugs and issues.

But having that URL class available in Node, the same API that was available in browsers, helped a lot of developers avoid having to write special-case code.

They want to write JavaScript once and have it run in multiple environments.

It's nice that they don't have to ask: am I running in Node?

Okay, I've got to use url.parse.

Am I running in a browser?

Okay, then I've got to use new URL.

No: they can just use new URL.

It's available everywhere.

That same URL class is available in Workers; it's available in Deno; it's available in Bun.
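As a quick illustration, these few lines work unchanged in Node, Deno, Bun, Workers, and browsers:

```javascript
// The WHATWG URL class: one parser, one API, every runtime.
const u = new URL('https://example.com/path?q=hello#top');

console.log(u.hostname);              // 'example.com'
console.log(u.pathname);              // '/path'
console.log(u.searchParams.get('q')); // 'hello'
console.log(u.hash);                  // '#top'
```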

Having this API there just made things easier for developers, and it took several years, but we started making progress on other APIs as well.

I think it was 2019 when I introduced the Web Crypto API implementation in Node.

It's not the best.

It's nowhere near as feature-packed as Node's crypto API, but it doesn't need to be; it's just a standard API that's there.

If you want to do some basic crypto operations, signing and verifying, encrypting and decrypting using certain algorithms.

You can do that in a consistent way, whether it's in Node or Deno or Workers or anywhere.

It's the same API.
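A small sketch of what that consistency buys you: this HMAC sign-and-verify round trip is the same code in browsers, Deno, Workers, and recent Node (it assumes `crypto.subtle` is available as a global, which is the case in all of those):

```javascript
// Sign and verify a message with HMAC-SHA-256 via the Web Crypto API.
async function signAndVerify(message) {
  const key = await crypto.subtle.generateKey(
    { name: 'HMAC', hash: 'SHA-256' },
    false,               // key is not extractable
    ['sign', 'verify'],
  );
  const data = new TextEncoder().encode(message);
  const signature = await crypto.subtle.sign('HMAC', key, data);
  // Resolves to true when the signature matches the data.
  return crypto.subtle.verify('HMAC', key, signature, data);
}
```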

Last year, right about this time a year ago, I introduced Web streams: ReadableStream, WritableStream, TransformStream. Even though Node has had a streams API for going on almost 10 years now, that API was very specific to Node itself.

There is now an implementation of the same streams API that you find in the browsers, in Workers, and in Deno.

So now it is possible to create code that calculates the streaming digest of data, going back to our original example, in a way that works whether you're running in Node, in Deno, or in Workers; you can write the code once and it just works in all of those environments.
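For instance, one sketch of that write-once digest, using only Web platform APIs (`ReadableStream`, `Blob`, `crypto.subtle`). Note that `crypto.subtle.digest` is one-shot, so this version buffers the stream before hashing; it also assumes the runtime supports async iteration of `ReadableStream` and has `crypto` as a global, which holds in recent Node, Deno, and Workers:

```javascript
// SHA-256 over a ReadableStream using only Web platform APIs.
async function sha256(stream) {
  const chunks = [];
  for await (const chunk of stream) chunks.push(chunk); // drain the stream
  const bytes = new Uint8Array(await new Blob(chunks).arrayBuffer());
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  // Hex-encode the resulting ArrayBuffer.
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}
```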

So we're working towards this original idea of a standard API for JavaScript: the APIs that developers are already familiar with.

The challenge, however, has been that a lot of these APIs have been written specifically with browsers in mind.

The fetch API is a good example of this.

Fetch is a standard, right?

It works, and we've received lots of requests in Node to provide an implementation of fetch, and there is one now. The challenge with it, however, is that there is a lot in fetch that is only relevant to browsers.

CORS support, for instance: there are a lot of requirements specifically relating to the CORS standard that are only relevant to Web browsers.

There's nothing about the CORS requirements that make any sense whatsoever when fetch is running on a server or on the edge, or really in any environment, that's not a browser.

Web streams, too.

There's a lot about Web streams that is suboptimal for server environments, where your JavaScript is not just running in a single tab in a browser on one person's computer, but instead on a shared server that's serving hundreds, if not thousands, if not millions of requests.

There are a number of performance issues and things that are, unfortunately, part of the specification itself.

And the reason those limitations exist is that the WHATWG and the working groups at the W3C that have developed these standards over the years have been specifically chartered only to look at the requirements of Web browsers, and that flows through into the work.

It's just the result of the charter.

The working group is just focusing on the problem that's been laid out before them, which is specifically the needs of Web browsers.

When Node goes off and implements one of these APIs and says, "it's unfortunate that it works this way, because we needed this requirement for servers", the feedback from the WHATWG and these other working groups has been: well, great, go do that if you want.

We're only concerned about browsers, so if you're not a browser, it's not something we're going to look at.

And that's changed recently.

There's been a lot more consideration and cooperation with these groups, looking at the requirements of Node and Deno and others, but that's just been historically where things have gone.

So there really hasn't been a good venue for folks from Node and Deno and Workers to get together and say: "okay, great, we all want to implement these APIs, but how do we make sure that we're doing so in a way that's interoperable and compatible with each other?" So a few months ago, I got together with a few others in the JavaScript community, including fellow Node core contributors and folks from Deno the company, Vercel, Shopify, and others, to launch a new W3C community group called the Web-interoperable Runtimes Community Group, or WinterCG for short.

The goal of WinterCG is simple: to provide a space for JavaScript run times to collaborate on API interoperability with a specific focus towards building and maintaining compatibility with Web platform APIs.

Among the initial work items of this community group is the establishment of a new minimal common Web platform API.

What this is, essentially, is a minimum list of APIs that all the different runtimes can implement.

It's things like AbortController, AbortSignal, ReadableStream, CompressionStream, EventTarget: all these APIs that exist in browsers today.

What this minimal list that the WinterCG is putting together basically says is: hey, if you are a JavaScript runtime and you want to be compatible with browsers, implement this set of APIs. Implement AbortSignal, and implement it in a way that is spec-compliant and compatible with the browsers.

Provide an implementation of EventTarget, even if Node already has EventEmitter that works a particular way.

Node also now has EventTarget, right? It works the exact same way as it does in the browsers.

Node has an implementation of crypto.subtle: even though it has its own crypto API, it also has SubtleCrypto.
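To make the EventTarget point concrete, this snippet behaves identically in a browser and in Node (which has had global `EventTarget` and `Event` since v15.4), with no EventEmitter in sight:

```javascript
// The standard EventTarget/Event pair, available in browsers, Node,
// Deno, and Workers alike.
const target = new EventTarget();
const seen = [];

target.addEventListener('ping', (event) => seen.push(event.type));
target.dispatchEvent(new Event('ping')); // dispatch is synchronous

console.log(seen); // ['ping']
```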

The way we put this list together was that we took a look at Node, Deno, and Workers, the three most popular non-browser JavaScript runtimes out there today.

And we looked at which Web standard APIs, APIs that also exist in browsers, these platforms already implemented.

And we compiled that into a list: as long as an API was implemented by at least two of the runtimes, it ended up on the list.

And what we find is a list that covers a broad majority of the overlapping requirements between these platforms: they all need to do asynchronous activity cancellation and scheduling.

They all need to do streams.

They all need to do crypto.

They all need to parse URLs, that kind of thing.

So we started to see this common collection of APIs being put together.

Now, some of the other work the community group is doing is on fetch, for instance. As I was saying, there are a lot of requirements in fetch that are only relevant to Web browsers.

There are a lot of API choices there that really only make sense in Web browsers.

So when we've gone about implementing fetch in Node and Deno and Workers and others, there are compromises that have been made: we're going to get as close as possible to what's in the spec, but we're only going to implement what actually makes sense in server environments.

Unfortunately, by implementing these things in a vacuum, independently of each other, a number of incompatibilities have arisen between the implementations across these different platforms.

So WinterCG is providing a place for these implementers to get together and start talking through these issues.

What are we gonna do with cookie headers?

What do we do with CORS?

How do we deal with those things?

How do we deal with global state that may be required by fetch? Those kinds of questions.

And how do we make sure that when you're calling fetch in Deno, it works the same way as fetch in Node, and the same way as fetch in Workers, really driving consistency and interoperability between those platforms?

And the goal of this new community group is not to go out and invent a bunch of new standards.

But we can, if we want to and if we need to: say the implementations get together and start working on some common set of APIs or some common set of requirements.

And we take that to the WHATWG, and they say, well, browsers don't care about this at all.

Or we take it to the W3C, and they're like, browsers don't really care about this; we don't care.

We still have this venue within WinterCG where these other implementations can talk about it and work up a spec, to make sure that things are still consistent and still done in an interoperable way.

The work of this community group is just starting.

We're still really just at the beginning of this work, and there is so much more to do.

If this is an area that interests you, come and get involved: all of the community group's work is being done out in the open on GitHub.

The GitHub address is on the screen right here. We have biweekly calls right now to work through the different work streams, things like fetch, that we're working on.

But yeah, going back to what I said: the standard API that we're developing is just the stuff that you already use in browsers today, and the interoperability and compatibility of these things across these platforms is just going to continue to get better.

And hopefully we'll get to a point where, regardless of where your JavaScript runs, you will have common APIs, common ways of accomplishing the common tasks.

And that will just make life easier for everybody.

Well, with that, that is all. I hope you all enjoy the rest of Global Scope, and I'll talk to you all later.

That wraps up Global Scope for 2022, but just before we head off a few important thank yous.

Thanks so much to all our speakers over the last two weeks.

Nishu Goel, Chris Garrett, Ashley Claymore, Adam Bradley, Dan Shappir, Valeri Karpov, James Sinclair, Cecilia Martinez, Luca Casonato, Ujjwal Sharma, and James Snell.

A huge thanks too, to Twilio our longtime supporters.

If you work with voice, video, email, or any sort of realtime messaging, please make sure you check out their platform.

I very much want to thank Mattheus Sequeira, who takes all the hours of raw material from our speakers and turns it into something much more than just that.

And a huge thanks to Sean Wang as well for shaping such an amazing program.

And lastly, before we go: we have much more planned for 2022 and beyond, with three more conferences, all focused on different aspects of front-end development.

Our long-running Code conference now focuses on browser APIs and developing progressive Web apps.

And for 2022, it's hosted by and programmed with Maximiliano Firtman.

Access all areas is our accessibility conference from a front end engineering perspective and it's programmed and MC'd by Sara Soueidan.

And finally Safe is our privacy, security and identity focused conference.

Again, from a front-end developer perspective.

And for those in, near, or maybe wanting to come to Australia, our long, long-running Web Directions Summit returns to Sydney, December 1 and 2.

People are as excited about this as we are, and we'd love to see you there.

And one last, thank you.

Thanks to you for turning up for Global Scope again this year.

We deeply appreciate that and look forward to seeing you before too long at one of our other online conferences or perhaps even in Sydney in December.

Take care and bye for now.

JavaScript function composition: What's the big deal?

James Sinclair

Atlassian Logo

JavaScript Logo

Function Composition

Image of an early modern magical illustration

What is it?

h(x) = f(g(x))

Diagram shows a value x passed into a function g, and the result g(x) being passed into the function f, with the final result h(x)

Diagram shows the value of x passed to a function h with the result being h(x)

h=f•g
const h = (x) => f(g(x));
const h = f • g;
const c2 = (f, g) => (x) => f(g(x));

Something real

![alt text goes here](/link/to/image/location.png)
[link text goes here](https://example.com/)
const imagify = str => str.replace(
	/!\[([^\]"<]*)\]\(([^)<"]*)\)/g,
	'<img src="$2" alt="$1" />'
);

const linkify = str => str.replace(
/\[([^\]"<]*)\]\(([^)<"]*)\)/g,
'<a href="$2" rel="noopener nofollow">$1</a>'
);
const linkifyAndImagify = c2(linkify, imagify);
const linkifyAndImagify = str => linkify(imagify(str));
const emphasize = str => str.replace( 
	/_([^_]*)_/g,
	'<em>$1</em>'
);
const processComment = c2(linkify, c2(imagify, emphasize));
const processComment = str => linkify(imagify(emphasize(str)));
const processComment = c2(linkify, c2(imagify, emphasize));

Wouldn’t it be nice

const processComment = linkify • imagify • emphasize;

ALAS

Multivariate composition

const compose = (...funcs) => {
	// Because we've used rest parameter syntax,
	// funcs is an array. We'll do something with
	// it in here.
}
const compose = (...funcs) => {
	const newFunctionToReturn = (x0) => funcs.reduceRight(
		(x, f) => f(x),
		x0
	);

	return newFunctionToReturn;
}
const compose = (...fns) => (x0) => fns.reduceRight(
	(x, f) => f(x),
	x0
 );

### Level 3

const headalize = str => str.replace(
	/^###\s+([^\n]*)/mg,
	 '<h3>$1</h3>'
);
const processComment = compose(
	linkify,
	imagify,
	emphasize,
	headalize
);

Processing comments with compose()

Diagram of data flow from top right to bottom left. A string is passed to headalize(), then the result to emphasize(), then the result to imagify(), then the result to linkify()

str => linkify(imagify(emphasize(headalize(str))))
const processComment = compose(
	linkify,
	imagify,
	emphasize,
	headalize
);

Flow

// We call this function ‘flow’, as the values flow 
// from left to right.
const flow = (...fns) => (x0) => fns.reduce(
    (x, f) => f(x),
	x0 
);

`Code`

const codify = str => str.replace( /`([^`<"]*)`/g, '<code>$1</code>' );
const processComment = flow(
	codify,
	headalize,
	emphasize,
	imagify,
	linkify
);

Processing comments with flow()

Diagram of data flow from top left to bottom right. A string is passed to codify(), then the result to headalize(), then the result to emphasize(), then the result to imagify(), then the result to linkify()

const processComment = (str) => linkify(
	imagify(
		emphasize(
			headalize(
				codify(str)
			)
		)
	)
);
const processComment = flow( 
	codify,
    headalize,
    emphasize,
    imagify,
    linkify
);
const processedComment = flow( 
	headalize,
    emphasize,
    imagify,
    linkify,
    codify
)(commentStr);

Pipe

const pipe = (x0, ...fns) => fns.reduce(
	(x, f) => f(x),
	x0
);
const flow = (...fns) => (x0) => fns.reduce(
	(x, f) => f(x),
	x0
);

Processing lots of comments

const map = f => arr => arr.map(f);
const filter = p => arr => arr.filter(p);
const take = n => arr => arr.slice(0, n);
const join = s => arr => arr.join(s);
const itemize = str => `<li>${str}</li>`;
const orderedListify = str => `<ol>${str}</ol>`;
const chaoticListify = str => `<ul>${str}</ul>`;
const mentionsSecret = str => (/\bsuper secret tech\b/i).test(str);
const noSecrets = str => !mentionsSecret(str);
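Pulling the helper slides above together, the `pipe` from earlier composes them into a runnable sketch like this (the sample input array is invented for illustration):

```javascript
// pipe: the value x0 flows left to right through each function.
const pipe = (x0, ...fns) => fns.reduce((x, f) => f(x), x0);

// Point-free wrappers around array/string methods, as on the slides.
const take = n => arr => arr.slice(0, n);
const map = f => arr => arr.map(f);
const join = s => arr => arr.join(s);
const itemize = str => `<li>${str}</li>`;
const chaoticListify = str => `<ul>${str}</ul>`;

// Data flows top to bottom through each step.
const comments = pipe(
  ['first', 'second', 'third'],
  take(2),
  map(itemize),
  join('\n'),
  chaoticListify,
);
// comments === '<ul><li>first</li>\n<li>second</li></ul>'
```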

Processing lists of comments

const comments = pipe(commentStrs,
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'));
const comments = commentStrs
	.filter(noSecrets)
	.slice(0, 10)
	.map(processComment)
	.map(itemize)
	.join('\n');

Wasn’t there some TC39 thing?

const comments = commentStrs
	|> filter(noSecrets)(#)
	|> take(10)(#)
	|> map(processComment)(#)
	|> map(itemize)(#)
	|> join('\n')(#);
const comments = pipe(commentStrs,
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'));
const comments = commentStrs
	.filter(noSecrets)
	.slice(0, 10)
	.map(processComment)
	.map(itemize)
	.join('\n');

Why bother?

const comments = pipe(commentStrs,
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'),
	chaoticListify,
);

What’s the big deal?

const withoutSecrets   = commentStrs.filter(noSecrets);
const topTen           = withoutSecrets.slice(0, 10);
const commentList      = topTen.map(processComment);
const itemizedComments = commentList.map(itemize);
const joinedList       = itemizedComments.join('\n');
const comments         = chaoticListify(joinedList);
const comments = pipe(commentStrs,
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'),
	chaoticListify,
);

Statements vs. expressions

So what?

const processComments = flow(
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'),
	chaoticListify,
);
const processComments = (commentStrs) => {
	const withoutSecrets   = commentStrs.filter(noSecrets);
	const topTen           = withoutSecrets.slice(0, 10);
	const commentList      = topTen.map(processComment);
	const itemizedComments = commentList.map(itemize);
	const joinedList       = itemizedComments.join('\n');
	const comments         = chaoticListify(joinedList);
	return comments;
};

=

const processComments = (commentStrs) => {
	const withoutSecrets   = commentStrs.filter(noSecrets);
	const topTen           = withoutSecrets.slice(0, 10);
	const commentList      = topTen.map(processComment);
	const itemizedComments = commentList.map(itemize);
	const joinedList       = itemizedComments.join('\n');
	const comments         = chaoticListify(joinedList);
	return comments;
};
const processComments = flow(
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'),
	chaoticListify,
);

A big deal

Relationships between expressions

Prove it!

const map = f => function*(iterable) {
	for (let x of iterable) yield f(x);
};
const filter = p => function*(iterable) {
	for (let x of iterable) {
		if (p(x)) yield x;
	}
};
const take = n => function*(iterable) {
	let i = 0;
	for (let x of iterable) {
		if (i >= n) return;
		yield x;
		i++;
	}
};
const join = s => iterable => [...iterable].join(s);
const processComments = flow(
	filter(noSecrets),
	take(10),
	map(processComment),
	map(itemize),
	join('\n'),
	chaoticListify,
);
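
A self-contained way to see this working: flow is like pipe, but returns a reusable function instead of applying immediately. The sketch below pairs a minimal flow with generator-based helpers like those on the slide (shout and the sample strings are invented for illustration):

```javascript
// flow: like pipe, but returns a reusable composed function.
const flow = (...fns) => (input) =>
  fns.reduce((acc, fn) => fn(acc), input);

// Generator-based helpers, as on the slide above.
const map = (f) => function* (iterable) {
  for (const x of iterable) yield f(x);
};
const take = (n) => function* (iterable) {
  let i = 0;
  for (const x of iterable) {
    if (i >= n) return;
    yield x;
    i++;
  }
};
const join = (s) => (iterable) => [...iterable].join(s);

// Hypothetical pipeline for illustration.
const shout = flow(
  take(2),
  map((s) => s.toUpperCase()),
  join(', '),
);

shout(['hi', 'there', 'world']); // → 'HI, THERE'
```

Because each step is a generator, items flow through one at a time; no intermediate arrays are built until join collects the results.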

Performance... maybe

Not the point

Intent vs. implementation

Illustration showing three layers. The top layer is "Layer of intent – flow(...)". Below is "Anti-corruption layer: map(), filter(), take(), join()". Beneath this is a layer of two implementation approaches: array methods and generators.

Function composition: A big deal

Thank You.

Photo of a dog in a chemistry lab wearing safety goggles with the caption "I have no idea what I am doing."

Illustration of the debugging dartboard. Three concentric circles in increasing size. At the centre is 'console.log'. Elsewhere are Devtools, debugger, and other approaches to debugging.

Developers spend up to half their time debugging and maintaining software.

As Cecelia speaks a series of boxes connected by arrows appears illustrating what might be happening in an app after a button is clicked.

Fight complexity by identifying patterns within frameworks

TypeError: "x" is not a function
  • typo in function name
  • wrong prototype
  • variable is wrong type

Icons representing developer tools, search and a bug

Framework DevTools

  • React
  • Angular
  • Vue
  • Redux Dev Tools (Also works on Vuex, NgRX, Mobx, and more!)

Framework DevTools

  • Browser or IDE
  • Inspecting (elements, state, values)
  • Evaluating (execution, control flow, performance)

Screencast of debugging a Vue application. Cecelia describes what is happening.

A series of jars, one of which has lightning inside.

A repeat of the slide representing events that occur when a button is clicked.

Icon for Replay

Screencast of using Replay to debug the Vue app from before. Cecelia describes the process.

Maybe the real treasure was the bugs we fixed along the way

Patterns

  • Component Rendering
  • What triggers a render?

Embrace bugs intentionally with a learning mindset

Thank you!

@ceceliacreates

Writing universal libraries for Deno, Node and the browser

Luca Casonato

About me

  • Work at Deno Land Inc as software engineer
    • Open source Deno CLI
    • Deno Deploy edge compute offering
    • Fresh: full stack web framework for Deno
  • Web standards work
    • Delegate at TC39
    • Contribute to W3C and WHATWG specifications

What is Deno?

A modern runtime for JavaScript and TypeScript.

  • Follows web standards - fetch, import maps, esm, web workers, web streams, promises
  • Built-in utilities - linter (deno lint), formatter (deno fmt), test framework (deno test), editor integration (deno lsp), and more
  • Secure by default - no file, network, or environment access, unless explicitly enabled.
  • TypeScript out of the box - transparently import and run .ts/.tsx files
  • Single executable - no dynamic linking issues, no need to install OpenSSL etc
  • Standard library - common functions built on-top of web and Deno APIs that encapsulate much of the functionality of the top 100 of NPM

What are we doing today?

  • Explore how easy it is to build things for / with Deno
  • Create a library that creates greeting messages
  • Add unit tests using Deno’s built-in test framework
  • Format and lint the code using Deno’s built-in tooling
  • View auto-generated docs using doc.deno.land
  • Test the code in Node, and publish to NPM

The library

  • Generate greeting messages
  • Multiple greetings can be chosen through the Greeting enum

function greet(name: string, greeting?: Greeting): string;
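
In plain JavaScript terms, an implementation matching that signature might look like the sketch below. The Greeting values and the default greeting are illustrative assumptions, not the actual library:

```javascript
// Hypothetical sketch of the greet library, in plain JavaScript.
// A frozen object stands in for the TypeScript Greeting enum.
const Greeting = Object.freeze({
  Hello: 'Hello',
  GoodMorning: 'Good morning',
});

// greeting is optional; it falls back to Greeting.Hello.
function greet(name, greeting = Greeting.Hello) {
  return `${greeting}, ${name}!`;
}

greet('Ada');                       // → 'Hello, Ada!'
greet('Ada', Greeting.GoodMorning); // → 'Good morning, Ada!'
```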

Code

  • Set up VSCode workspace
  • Create project structure
  • Write library code

Luca demonstrates the process of setting up a project, using VSCode. He describes the steps as he goes.

Writing tests

  • Deno has a built in test runner `deno test`
  • Very simple interface, but allows for advanced capabilities

Code

  • Write test for `greet`
  • Use VSCode integration to run/debug test

Screencast of Luca doing the test process. He describes the steps as he goes.

Formatting and linting

  • Deno has a built-in formatter for JS, TS, JSON, and Markdown: `deno fmt`
    • Opinionated to ensure consistent style
    • Very similar style to prettier, but much faster
  • Deno has a built-in linter for JS and TS: `deno lint`
    • Does not check formatting
    • Catches logic errors
    • Enforces styling

Screencast of Luca formatting and linting the project. Luca describes the steps as he goes.

Publishing for Deno first: deno.land/x

  • Deno first module registry
  • Not a “blessed” registry. You can host modules anywhere. Even on your own domain!
  • Immutable (versions / modules can not be deleted)
  • Hooks into your existing GitHub workflow (literally)
  • Learn more at https://deno.land/x

Screencast of using the documentation tool. Luca describes the process as he goes.

First class documentation generation

  • Deno provides a documentation generator OOTB
  • Can be used in CLI: `deno doc`
  • Also available as a website: https://doc.deno.land

TypeScript annotations & JSDoc comments are used to generate docs right from your code.

Powers our upcoming global symbol search, and inline documentation on https://deno.land/x

Use from Node and the browser

  • Deno supports loading .ts files natively, Node and the browser do not
    • We need to emit .js files for these
  • Node consumes packages from NPM
    • We need to publish to NPM
  • Node does not support all web APIs
    • You might need polyfills

Introducing: DNT

dnt = deno node transform

  • Does transpilation to CJS and pure JS ESM for distribution on NPM
  • Automatically replaces globals not available in Node with ponyfills
  • Transpiles tests, and runs them in Node

Best of both worlds:

  • Develop using all of the built-in Deno tooling
  • Still make modules available to users writing for Node

Code

  • Setup build script to transpile code for Node
  • Run tests in Node
  • Publish to NPM

Luca goes through the process of running tests in Node and publishing to NPM.

Conclusions

What did we do just now?

  • Built a library written in TypeScript that runs in Deno, Node, and browsers
  • Added and ran tests
  • Set up linting and formatting
  • Published to deno.land/x and NPM
  • Needed no tooling outside of what the Deno project provides

Want to get started yourself?

Multicore JavaScript 🚀

Past, Present and Future WebDirections Global Scope

@ryzokuken

Photoshopped image of Ujjwal bestowing himself a medal.

Ujjwal Sharma (@ryzokuken)

  • Compilers Hacker at Igalia
  • TC39 Co-chairperson
  • ECMA-402 Co-editor
  • Node.js Core Collaborator
  • From New Delhi, India
  • Live in A Coruña, Spain
  • I love dogs 🐶

IGALIA 💖 BLOOMBERG 💖 GOOGLE

JavaScript is ancient

Photoshopped image of a dinosaur at a 1980s-style PC with the cover of "JavaScript: The Good Parts" on the screen.

Photoshopped image of Moses holding aloft a stone tablet on which reads "ship decorators please"

JavaScript was conceived in December 1995

  • Intel Pentium Pro
    • Max clock rates up to 200 MHz
    • 1 core
  • AMD Am5x86
    • Max clock rates up to 160 MHz
    • 1 core

photos of old computer chips

Aged photo of two young boys sitting at a computer.

Today’s computers are very different

  • Samsung Galaxy S23
    • 8 cores
      • 1 x Kryo Prime
      • 3 x Kryo Gold
      • 4 x Kryo Silver
  • Newer MacBooks
    • 10 cores
      • 8 x Firestorm
      • 2 x Icestorm

"Retro Wave" style text reads "Let's talk about cryptocurrency"

"Retro Wave" style text reads "Let's talk about concurrency"

Each task takes some time to perform

Same chart, now the bars are labelled a, b, c, and d

I could do them all as you’d expect

Bars are now rotated 90 degrees clockwise and placed end to end. Beneath is the text "= a + b + c + d"

But if they were independent of each other, I could try something else...

Same as previous slide but text below is now a question mark.

If I could perform multiple tasks simultaneously, that would make things faster

The two longer bars, labelled b and d, are now laid end to end, with bars a and c laid below them. Below is the text "= max(a + c, b + d)".

If I could perform multiple tasks simultaneously, that would make things faster

🧵

The same illustration from the previous slide, with a vertical line extending from the right edge of the top row. Text below reads "= max(a + c, b + d)"

I could try to make things more efficient if I tried

🧵

The bars are now arranged with c and d in the top row and a and b below, so the difference in row widths is smaller and the overall process takes less time. Text below reads "= max(a + b, c + d)"

This is complicated!

Let’s take a deeper look into our tasks...

Repeat of the 4 bars rotated and laid horizontally end to end. Text below reads "= a + b + c + d".

Let’s take a deeper look into our tasks...

The 4 bars from the previous slides are now unfilled. Inside are thin rectangles the height of the bar, representing where the work of the task occurs. Text below reads "WHAT"

async function doWork(x, y, z) {
	const a = await fetch(x);
	const b = f1(a, y);
	const c = await fetch(b);
	const d = f2(c, z);
	return fetch(d);
}
async function doWork(x, y, z) {
	const a = await fetch(x); // ⌚
	const b = f1(a, y);       // 🐎
	const c = await fetch(b); // ⌚
	const d = f2(c, z);       // 🐎
	return fetch(d);          // ⌚
}

Network calls aren’t the only sources of ⌚, but they’re certainly the most common.

What if I kicked off all tasks at once and then executed whichever was ready?

The four bars, rotated 90 degrees and largely unfilled, are now stacked on top of each other. Again, smaller rectangles inside show where the work of the task actually occurs. An arrow points to the start of the first task, then the start of the second task, then the start of the third.

This is a task/event loop!
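
That idea, starting independent tasks together and handling each as it completes, is what Promise.all expresses. A runnable sketch, with setTimeout standing in for real work (the delays and labels are invented for illustration):

```javascript
// Sketch: four independent "tasks" kicked off together.
// Total wait ≈ the slowest task (30 ms), not the sum (65 ms).
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function runAll() {
  const tasks = [delay(30, 'a'), delay(10, 'b'), delay(20, 'c'), delay(5, 'd')];
  return Promise.all(tasks); // resolves once every task has finished
}

runAll().then(console.log); // → ['a', 'b', 'c', 'd']
```

Note that Promise.all preserves the order of its input array, even though the tasks finish in a different order.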

Two models of concurrency ♻

Web-like 🕸

  • Run-to-completion (Workers)
  • Message-passing (postMessage)
  • Async APIs (Promises)
  • No data races, data isolation

Thread-like 🧵

  • Sync APIs, manual synchronization (Atomics)
  • Data races, shared memory (SABs)
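
These thread-like primitives already exist in the language. A minimal sketch of Atomics over a SharedArrayBuffer, run here on a single thread for brevity (in a real app the buffer would be sent to a Worker via postMessage):

```javascript
// A SharedArrayBuffer viewed as a typed array; Atomics gives
// race-free reads and writes when multiple threads share it.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

Atomics.add(counter, 0, 1); // atomic increment
Atomics.add(counter, 0, 1);
Atomics.load(counter, 0);   // → 2

// Swap 2 for 10 only if the current value is still 2.
Atomics.compareExchange(counter, 0, 2, 10);
Atomics.load(counter, 0);   // → 10
```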

Web-like 🕸

The word "memory" appears inside a stylised cloud three times. Below each is an icon, of two arrows pointing in the same direction forming a circle.

Thread-like 🧵

A much more complex image. The word "memory" again appears inside a stylised cloud three times. At the bottom are two stylised clouds labelled "shared memory". Arrows point from the left memory cloud to the left shared memory. From the right memory cloud to the right shared memory and the middle memory cloud to both shared memory cloud. Between the memory and shared memory clouds are 3 triangles labelled execution. A lock icon appears above each of the shared memory clouds.

Reality

An illustration combining both the web like and thread like images.

Web-like

🕸 Goods

  • Ease of reasoning + using
    • Causal
    • Data race free by construction
    • Isolation
  • Asynchronous = smoother
  • Less focused on manual synchronization mechanics (locks, queues, etc)

Bads

  • Leaving performance on the table

Thread-like 🧵

Goods

  • WebAssembly interop
  • WasmGC interop
  • Good performance

Bads

  • Hard to reason & use
  • Manual synchronization
  • Data races
  • Acausal astonishments
  • "Must be this tall"
  • Exposes more timing channels

@ryzokuken

Photo of a balding man standing at a computer from behind. On the wall far above his head a sign reads "must be this tall to write multi-threaded code"

Two-button meme image. Top panel shows a button on the left labelled "focus on web-like" and one on the right labelled "focus on thread-like". Bottom panel of a sweating man is labelled "TC39".

Top panel now shows pushing both buttons at the same time. Bottom panel shows the man now smiling with a thumbs up.

Phase 1

Web-like 🕸

  • Language support for asynchronous communication
  • Ability to spawn units of computation

Thread-like 🧵

  • Shared memory
  • Basic synchronization primitive
  • Ability to spawn threads

Phase 1

Actually, we’re done here ✅

  • Promises
  • async/await
  • Workers
  • SharedArrayBuffer
  • Atomics

Phase 2

Web-like 🕸

  • Ergonomic and performant data transfer
  • Ergonomic and performant code transfer

Thread-like 🧵

  • Higher level objects that allow concurrent access
  • Higher level synchronization mechanisms

Phase 2 🕸

Designed to address biggest observed pain points

  • Transferring data is expensive:
    • Transferrables very limited
    • Weird reparenting of [[Prototype]] even when transferred
    • Often copied
  • Transferring data is unergonomic:
    • Often requires serialization/deserialization
    • Identity discontinuity
  • Transferring code is basically not possible, we transfer strings

Proposal: Module Blocks

  • Aims to solve: Ergonomic sharing of code
  • Spearheaded by Surma
let moduleBlock = module {
	export let y = 1;
};

let moduleExports = await import(moduleBlock);
assert(moduleExports.y === 1);

assert(await import(moduleBlock) === moduleExports);

Upcoming Proposal: Shared Disjoint Heaps

  • Aims to solve: Ergonomic and performant sharing of data and code
  • Let developers separate their heap
  • Agent-local heaps can point into shareable heaps
  • Shareable heaps cannot point into agent-local heaps
  • Unit of sharing is transferable heap

Phase 2 🧵

Also designed to address biggest observed pain points

  • Nobody knows how to use SABs and Atomics well
  • Impedance mismatch too high

Proposal: Structs

  • Aims to solve: Higher-level objects that allow concurrent access
  • Spearheaded by Shu-yu Guo
struct class Box {
	constructor(x) { this.x = x; }
	x;
}

let box = new Box();
box.x = 42; // x is declared

assertThrows(() => { box.y = 8.8; });        // structs are sealed
assertThrows(() => { box.__proto__ = {}; }); // structs are sealed

Future Phase

Web-like 🕸

  • Lighter-weight actors?
  • Integration with scheduling APIs
  • Concurrent std lib?

Thread-like 🧵

  • Tooling?
  • Integration with WasmGC
  • Concurrent std lib?

Special Thanks 🙇

  • Dan Ehrenberg
  • Shu-yu Guo
  • Organizers

Ta!🙏

@ryzokuken

WinterCG Github page: https://github.com/wintercg

Minimum Common Web Platform API page

fetch page of the WinterCG Github page.

https://github.com/wintercg