The Evolution of the Web and OffscreenCanvas

Brian Kardell: Hi.

Yeah.

Thanks.

Thanks for having me.

And thanks for entertaining my rather unusual proposal for a talk, which is about the evolution of the web, primarily through the lens of canvas.

It should be fun to talk about.

There's this sort of overarching theme in this talk, which is about, like where do web standards come from and how and why?

And the short answer is we really have no idea what we're doing.

I don't mean this like really critically.

I just mean it in a really honest way.

This is something that we have not really figured out yet.

And it's complicated.

Historically we've tried to make these standards committees, and most ideas that begin there ultimately fail to arrive.

And some things don't follow that path; they jump the queue, and the results are very mixed there too.

Either way.

We have tried different kinds of standards committees, totally different standards organizations, and we learn and change a little bit as we go, trying to do better.

So the thing I'm trying to stress here is that none of this is fixed in stone.

And it's not heretical in any way for us to question how something was done and to think about how it could be done better.

It's happened many times in the past.

So that is kind of what we're going to talk about here a little bit: what good things have we done, what not-so-good things have we done, what are some interesting developments, and how could it go in the future?

So let's talk about canvas.

Canvas came about during one of those times where we were trying to make a lot of changes to how we standardize something.

We tried to pave a lot of different cowpaths, and we tried to do it from many different angles.

A lot of things with forms were pioneered in XForms.

SVG and MathML were picked up basically more or less in whole and put into the HTML standard. If you don't know, they're specially integrated with the HTML parser, and that's supported in every browser.

We also took and looked at lots of data about websites, things like common classes that people used, and built elements that tried to match those and their corresponding ARIA roles.

But another thing that we did that is really interesting is the introduction of canvas.

Canvas was introduced by Apple in 2004.

And it was more or less fully formed when it arrived.

And it was really interesting.

So they described it as an image element, but with a programmatic API drawing surface.

You can see here on my screen, I have on the left side, a regular image element and on the right side, a canvas element and to an end user, you can see they are indeed completely indistinguishable.

And that's because I have used the programmatic API drawing surface to paint the image right onto the canvas.

And so in this way, it's totally accurate, but you can also use this surface to draw shapes, which is what I'm doing here.

The API for it, as I say, is an element with a method called getContext, from which you ask for a particular kind of context.

And the one that we're going to talk about here is the 2d context.

Now this also paves a cowpath, but it didn't pave a cowpath that was developed in a standards committee, in the W3C, or something that was being done on the web already.

What it did is it paved the cowpath of existing graphics libraries.

And so as such, this context has lots of things that you would find in a graphics library at the time and even still today.

So it's pretty low level.

You can do fills and strokes.

And it has things about rectangles and ellipses and triangles.
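Just to ground that, here's a minimal sketch of what that low-level drawing API looks like; the element and method names are the real API, but the specific shapes, sizes, and colors are only examples:

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// fill a rectangle
ctx.fillStyle = 'green';
ctx.fillRect(10, 10, 100, 100);

// stroke a triangle using a path
ctx.strokeStyle = 'rebeccapurple';
ctx.beginPath();
ctx.moveTo(150, 10);
ctx.lineTo(250, 10);
ctx.lineTo(200, 110);
ctx.closePath();
ctx.stroke();

// fill an ellipse
ctx.beginPath();
ctx.ellipse(320, 60, 50, 30, 0, 0, Math.PI * 2);
ctx.fill();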

So it's pretty low level, but it also gives you access to very low level data, the actual pixel data.

So you can get the actual pixel data in the form of a Uint8ClampedArray, which is a mouthful that is challenging to say in a presentation actually, but it's a really simple idea.

It's a special kind of array where all the values in it are just numbers from zero to 255.

And if you try to stick anything larger than that in there, they'll be clamped to one of those ends, either zero or 255.

And these numbers represent pixel data.

Every pixel has four entries in this array, representing the red, green, blue, and alpha channel values from zero to 255.
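Just as a sketch of what working with that pixel data looks like (the coordinates and the write-back at the end are only examples):

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data; // Uint8ClampedArray: [r, g, b, a, r, g, b, a, ...]

// read the RGBA values of the pixel at (x, y)
const x = 10, y = 20;
const i = (y * imageData.width + x) * 4;
const [r, g, b, a] = [data[i], data[i + 1], data[i + 2], data[i + 3]];

// anything written back is clamped into 0..255
data[i] = 300; // stored as 255
ctx.putImageData(imageData, 0, 0);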

So that's pretty cool.

At the time when Apple introduced it, they were also introducing this new Mac OS.

And one of the things that you could do was create these cool dashboard widgets, and here you can see things doing graphical charting, drawing maps and irregular shapes and interesting visualizations.

And that was really, really cool.

But one way that this is very unique in the history of the web is that it was a low-level interface. Sharing a low-level interface, as opposed to taking use cases and breaking them down into high-level interfaces, left really a lot to the imagination about how this would work on the web and what we'd use it for.

So about this time, a lot of people began using it and trying to make demos and understand it and just do interesting things.

For some of them it was not immediately obvious how they were practical.

And we're going to talk about one that we'll come back to a few times in the talk, and whose concept will sort of drive a lot of the talk.

So, this demo showed how evolution works by using the canvas.

And the idea is that it created an array, not the pixel array, but just an array of information that represented drawing instructions.

You can interpret those as how to draw a rectangle or how to draw triangles or whatever, but they were just abstract drawing instructions that formed a gene.

And then you would realize that gene onto a canvas.

And then separately, you need a way to measure fitness.

And so there was some kind of interesting math involving taking the luminosity and the relative closeness of each pixel to get a measure of fitness.

And if you have a perfect rendition like this, it will have a fitness of one.

But, as I say, you can realize these instructions onto a canvas, and that will have a score of less than one. Then you just use this very simple cycle: clone those genes, mutate something, and measure again; clone, mutate, measure again; clone, mutate, measure again; and you favor things with better scores to be replicated.
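That demo isn't mine, so this is only a heavily simplified sketch of the clone, mutate, measure loop it describes; randomTriangle, renderGene, and the fitness math here are stand-ins (the original used cleverer, luminosity-based math), and goalData is assumed to be the goal image's pixel array:

// A "gene" is just an array of abstract drawing instructions; here, triangles
// with normalized points and an rgba color (illustrative, not the demo's encoding).
const randomTriangle = () => ({
  points: Array.from({ length: 6 }, Math.random),
  color: [Math.random() * 255 | 0, Math.random() * 255 | 0, Math.random() * 255 | 0, Math.random()]
});

// realize a gene onto a canvas context
function renderGene(ctx, gene, w, h) {
  ctx.clearRect(0, 0, w, h);
  for (const t of gene) {
    ctx.fillStyle = `rgba(${t.color.join(',')})`;
    ctx.beginPath();
    ctx.moveTo(t.points[0] * w, t.points[1] * h);
    ctx.lineTo(t.points[2] * w, t.points[3] * h);
    ctx.lineTo(t.points[4] * w, t.points[5] * h);
    ctx.fill();
  }
}

// fitness: 1 means every pixel matches the goal image exactly
function fitness(ctx, goalData, w, h) {
  const data = ctx.getImageData(0, 0, w, h).data;
  let diff = 0;
  for (let i = 0; i < data.length; i++) diff += Math.abs(data[i] - goalData[i]);
  return 1 - diff / (data.length * 255);
}

// clone, mutate, measure; keep whichever scores better
function step(ctx, goalData, best, w, h) {
  const candidate = structuredClone(best.gene);
  candidate[Math.floor(Math.random() * candidate.length)] = randomTriangle();
  renderGene(ctx, candidate, w, h);
  const score = fitness(ctx, goalData, w, h);
  return score > best.score ? { gene: candidate, score } : best;
}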

And so using only these principles, if you were to run this code on your machine today, like I did here, you can get something that gets increasingly like the image.

So here I have something like 200 triangles that evolve into a pretty good rendition of Vermeer's Girl with a Pearl Earring. But that took hours to evolve on my computer, which is better than the potentially millions of years for real evolution, but it still leaves a lot to be desired.

So anyway, let's go back to the canvas itself.

Other things that people shared.

Jenn Schiffer, I don't know if you know who she is, but she does lots of interesting things.

And a lot of it is centered around pixel art or 8 bit art.

She made this thing a really long time ago, which I really enjoyed, which is basically like a clone of MS Paint, but in the browser, and I thought that was really, really cool.

Later, she went to work at a company called Glitch, and she shared this thing called pixelatize that lets you choose an image.

And in this, I'm going to choose an image from a still of a talk that Jenn Schiffer gave called literally everything is pixel art.

Now, when I click it, you will see it takes a few seconds.

But what it does is it uses that low-level API and analyzes all the pixels and their adjacent pixels.

And it just draws, out of squares, a pixelatized version of that image.
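I don't have Jenn's actual source in front of me, but the basic idea reads something like this sketch: sample a block of pixels, average the color, and draw one big square for the whole block (the blockSize parameter and the plain averaging are my assumptions):

function pixelate(ctx, blockSize) {
  const { width: w, height: h } = ctx.canvas;
  const data = ctx.getImageData(0, 0, w, h).data;
  for (let y = 0; y < h; y += blockSize) {
    for (let x = 0; x < w; x += blockSize) {
      // average the RGB values of every pixel in this block
      let r = 0, g = 0, b = 0, n = 0;
      for (let by = y; by < Math.min(y + blockSize, h); by++) {
        for (let bx = x; bx < Math.min(x + blockSize, w); bx++) {
          const i = (by * w + bx) * 4;
          r += data[i]; g += data[i + 1]; b += data[i + 2]; n++;
        }
      }
      // draw one solid square for the whole block
      ctx.fillStyle = `rgb(${r / n | 0}, ${g / n | 0}, ${b / n | 0})`;
      ctx.fillRect(x, y, blockSize, blockSize);
    }
  }
}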

So that's pretty cool, but Glitch also has this really interesting feature that is also a little bit like evolution.

You can share an idea and it's very easy to clone it (they call it remix) and just jump in and make some changes.

And so I really liked this idea and I did that.

And in doing that, I did two things. One is I talked to a coworker, Chris Lord, who is very, very experienced with graphics libraries.

And he explained to me a really clever way to make this very efficient.

And so when I click now, you see that it's almost instantaneous.
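I won't claim this is exactly the trick Chris described, but one common way to make this near-instant is to stop touching pixels in JavaScript at all and let drawImage do the scaling, with smoothing turned off:

function fastPixelate(ctx, image, blockSize) {
  const { width: w, height: h } = ctx.canvas;
  // draw the image tiny, then scale it back up with nearest-neighbor sampling
  const tiny = document.createElement('canvas');
  tiny.width = Math.ceil(w / blockSize);
  tiny.height = Math.ceil(h / blockSize);
  tiny.getContext('2d').drawImage(image, 0, 0, tiny.width, tiny.height);
  ctx.imageSmoothingEnabled = false;
  ctx.drawImage(tiny, 0, 0, w, h);
}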

That is an idea that I mixed in with Jenn's idea.

I just did a simple mutation, but then I also added these sliders, now that it's really efficient and happens in real time.

And you can see that you can change how much the color snapping happens or how big pixels are.

So that's just how the smashing together of different ideas and different backgrounds leads to some pretty interesting things.

Another demo that I've seen people do that I think is interesting with canvas is to transform the frames of a video as it's playing.

Now, just to be extra, extra, extra meta, we're going to take the pixelation algorithm and apply it to a video, a video of Jenn Schiffer's literally everything is pixel art, and turn every frame into pixel art, which I think is super meta and really fun.
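The video version is really just the same idea in a loop: grab the current frame of the video into the canvas, transform it, and schedule the next one. A sketch, reusing the pixelate function from the earlier sketch:

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function drawFrame() {
  // paint the current video frame onto the canvas, then transform it
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  pixelate(ctx, 8);
  requestAnimationFrame(drawFrame);
}

video.addEventListener('play', () => requestAnimationFrame(drawFrame));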

But it's potentially not extremely obvious how this is practical, but there are many, many, many practical things that we do today with canvas.

And this is one that is really advanced photo editing.

This is indistinguishable from a desktop application.

It's so complex and nuanced, but you can imagine this very clean evolutionary path, right from Jenn Schiffer's original sort of MS Paint clone, to ideas getting more complicated, learning from one another, evolving, getting a little better, borrowing ideas from over here and smashing them together over there.

And we arrive at really great things like this.

When I talked this year to a bunch of people about canvas, one comment I heard is like, well, that's really good for photo editing or something like that.

But I'm thinking like that's not most of the web.

Like I use the web all the time.

I write things for the web and I've never had to write something for canvas.

That's pretty niche though.

Right.

It doesn't really apply to me.

Well, the HTTP Archive collects data on the home pages of websites, millions of home pages.

And you know, that video on pages is not remotely rare.

It's pretty common.

People want to have a nice splashy video on their homepage.

Canvas appears almost as often, very very close to as often as video.

Now, probably in your head you say, well, that doesn't add up, but it does, because, for example, maps are a heavy user of canvas; all of those shapes and the parks and the rivers, all of that is drawn with canvas.

Now you don't have to write the canvas, but somebody else does the work.

So chances are pretty good that your dataviz stuff, or games, or maps that you draw, or lots of other very common things that might appear on your homepage, you're using through a library that uses canvas.

The interesting thing about this is that we build up this complexity and not everybody needs to learn it.

We build these layers on top of one another and things get more complex.

But today, most uses of canvas are actually really complex, they're maps and graphical editors and like very complex things.

We have a different kind of problem there, which is: how do we pave that path?

Is there a path there to pave?

Can you pave Photoshop in the browser?

Probably not.

So now, that problem is not a better problem, it's just a different problem.

So around 2014, a number of us were talking about this.

And we were saying, "Look, a lot of these evolutionary things seem to happen, but we seem to fight them, and it seems like it would be much better if we could figure out how to really lean into them."

And we wrote the extensible web manifesto.

Now it's a short document.

I think it's frequently misunderstood.

There have been volumes written about clarifying some things in there.

But this is a piece that Alex Russell, Yehuda Katz, and I wrote for Smashing Magazine called "Laying the Groundwork for Extensibility".

And we mentioned canvas in here, which we'll come back to in a minute.

In this, I think the important part is that we absolutely want, we desire, the high-level features, but it's not that we only desire the high-level features.

And there's a question about how we get there and what we get in the end from it.

What we really need is not only the high level features, but those features need to have an architecture beneath them that allows us to rethink and remix and adapt ideas.

In this regard, what we were saying is that thus far, we had a lot of tech debt.

So one of the pieces of tech debt, not the one that we wrote about, but one that is relevant to canvas, is that when canvas was introduced, there was really only the main thread.

And so everything was imagined as it was on the main thread.

Now what this means for canvas is basically demonstrated in this demo by Andreas Hocevar on OpenLayers, a popular mapping library.

What you can see here, and this is by the way on a fairly powerful device, is that if you drag and pan and zoom too quickly, things don't render nicely, but also, if you really pay attention, it gets jerky.

That is, your input is also blocked because the main thread itself is blocked.

That's a really bad experience.

Now I know I just said this thing about performance, and I know that when people talk about performance, a lot of people nod their heads and some people agree and some people work on it really hard, but the truth is we have not really meaningfully moved the ball on this very much.

People have listed lots and lots of reasons that you should care about this.

Potentially there are rural customers that you're not reaching.

What about people with slower wifi?

The mid-range phones that are very, very popular are not what you think you're testing when you use your iPhone or your expensive Android phone.

Other people have taken another take on that, which is you're missing potential customers around the world, maybe in developing markets, where it's even worse than that, it's really, really, really low end hardware.

And why would you want to leave those people behind?

It's not good for you.

But here too, we have not meaningfully moved the ball.

So.

I don't want to focus on any of those things.

You know the arguments and I hope that you heed them.

But if not, I would like to add a data point, which is: ask not for whom the performance bell tolls, because increasingly it tolls for you and you don't realize it.

And the reason that you don't realize it is because when we talk about the web, we're currently thinking about, like, the button for the internet.

What is that thing that you click on your desktop or on your cell phone?

Those are the contexts in which we talk about these things, but the reality is that, you know, there's not that much more room for those to grow; what is really growing is embedded systems: TVs, streaming devices, game consoles.

Smart appliances, your refrigerator maybe, or this cooking machine.

This is the Vorwerk Thermomix TM6.

That little screen on there is a smart screen and in it is embedded a web engine.

So when you use those menus on your PlayStation or you navigate things on your TV, all of those UIs are built using web technology and an embedded web engine.

And increasingly you will experience many of them per day, compared to the two that we normally talk about.

They all have something in common, which is that their hardware is generally cheaper, lower power, and also updates more slowly.

Another thing that's really interesting about them is that for many of them, the primary interface that they sit on most of the time is more inclined to be like those Mac OS dashboard widgets than the ebb and flow of the normal web that you use all the time.

Now that's not to say that there isn't a ton of things that use interfaces more like the normal web that you use all the time.

It's just that these are extra important to those and they're the least capable.

So my company, Igalia, is very keenly aware of this and we care about it a lot.

And one of the reasons that we care about it and are keenly aware of it is because we are the maintainers: we make WPE, the official WebKit port for embedded devices, that powers all those things.

So we really want our users to have a really good experience.

And we're interested in working on lots of things in that regard.

And one of the things that we're working on is OffscreenCanvas, which is a thing that was pioneered by Google to fix some of this tech debt.

And it has this really simple API that says, allow me to break off control of that DOM element and allow me to then transfer it even to a worker.

And so what the main thread is responsible for is just sort of drawing a rectangle, and the painting part of it can be done elsewhere, off the main thread.

That's kind of it, that's the whole API, other than a constructor.

That's why more of this talk is not about OffscreenCanvas, because there's not that much of the API to talk about other than its effects and why it's important.
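The slide shows the main-thread half; the worker half is just as small. A minimal sketch of what a myWorker.js like the one on the slide might do with the transferred canvas (the animation itself is just an example):

// myWorker.js
self.onmessage = (e) => {
  const canvas = e.data.offscreen;   // the transferred OffscreenCanvas
  const ctx = canvas.getContext('2d');
  let t = 0;

  function frame() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = 'green';
    ctx.fillRect(50 + Math.sin(t) * 40, 50, 100, 100);
    t += 0.05;
    requestAnimationFrame(frame);    // workers that own an OffscreenCanvas can schedule frames too
  }
  frame();
};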

So let's look at its effects.

This is the demo that we talked about before, our evolution demo.

And here you see I have this main thread speed being reported.

It's about 60 frames per second.

And we have this animation, that's just smoothly moving these stars.

Now, if I were to run this on the main thread we would see the same kind of interruptions that we saw with the canvas.

It would eventually just clog up the main thread.

The thread speed would jump way down.

On my particular laptop, it jumps down to about 15 frames per second.

In that animation, it gets really jerky.

But if I run this in an OffscreenCanvas, you can see that those are completely unaffected, because these are running as two completely different parallel programs, more or less.

But one thing you'll notice is that the work takes as long as the work takes; the whole thing doesn't automatically get faster.

What you do is break off control of the ability to paint on that without interrupting the main thread.

But now that we can make use of effectively multiple threads, multiple workers, if there are opportunities to parallelize work, we suddenly can, which is a power, like a fundamental power that we never had before with regard to canvas.

So a really nice example of this is fractals.

So this is the Mandelbrot Set.

And the nice thing about fractals is that you can just plug into the math and render this at sort of any level of pan and zoom and it is computationally very complex, but the code is very, very simple.

So that is exactly what this is.

This is just really nice, pretty coloring of the Mandelbrot set at a particular pan and zoom.
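And the "very simple code" part is not an exaggeration; the per-pixel math is essentially just this (the iteration cap here is an arbitrary choice):

// how many iterations before the point (cx, cy) escapes, up to a cap
function mandelbrot(cx, cy, maxIter = 255) {
  let x = 0, y = 0, i = 0;
  while (x * x + y * y <= 4 && i < maxIter) {
    const xNew = x * x - y * y + cx;
    y = 2 * x * y + cy;
    x = xNew;
    i++;
  }
  return i; // map this count to a color when painting the pixel
}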

Now, unfortunately you can't see my pointer, but this is pannable and zoomable, just like with a mouse.

So if I touch and I attempt to drag left, you will see nothing happen.

And then suddenly we'll get a big jerk.

And if I were to attempt to zoom in just a little, you'll see nothing happened.

And then, wow, holy cow, we just jumped way in.

That is because it is all operating on the main thread and my input is blocked while that's happening.

So we don't get to see things render until they're all done.

But now, because, as I say, this work is highly parallelizable, we don't have to change anything. We didn't add any smarts to the problem other than making it possible to just divvy up the existing work.

So when we do that, if I zoom in or zoom out, you will see that, I mean, it's as smooth as I can make it with my touchpad. It would be as smooth as it could be, more or less.

And this is with just utilizing basically four workers on my laptop.
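Concretely, the divvying-up can be as blunt as cutting the canvas into horizontal strips and letting each worker compute the pixels for its own strip. This is only a sketch of that shape, not the demo's actual code: mandelWorker.js is a hypothetical file name, the worker is assumed to define or import the mandelbrot function from the sketch above, and I'm posting pixel data back rather than transferring control, which is just one of several ways you could wire it up:

// main side: one worker per strip of the canvas
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const workerCount = navigator.hardwareConcurrency || 4;
const stripHeight = Math.ceil(canvas.height / workerCount);

for (let i = 0; i < workerCount; i++) {
  const worker = new Worker('mandelWorker.js'); // hypothetical worker script
  const y0 = i * stripHeight;
  worker.onmessage = (e) => ctx.putImageData(e.data.imageData, 0, e.data.y0);
  worker.postMessage({ y0, width: canvas.width, height: stripHeight, fullHeight: canvas.height });
}

// mandelWorker.js: compute only this strip's pixels, then send them back
self.onmessage = ({ data: { y0, width, height, fullHeight } }) => {
  const imageData = new ImageData(width, height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // map the pixel to the complex plane (a fixed view; pan and zoom would feed in here)
      const n = mandelbrot((x / width) * 3.5 - 2.5, ((y + y0) / fullHeight) * 2 - 1);
      const i = (y * width + x) * 4;
      imageData.data[i] = imageData.data[i + 1] = imageData.data[i + 2] = 255 - n;
      imageData.data[i + 3] = 255;
    }
  }
  self.postMessage({ imageData, y0 });
};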

And my colleague, Chris Lord has like a 40 core machine.

His is like even more amazing.

You can also drag it to the left and right.

And you'll see that the only thing that you'll kind of notice is that we get these white edges that quickly fill up.

The reason for that is not inherent in the problem; it's simply that we didn't do anything special other than parallelize the work.

So now we actually move, and need to paint things that were not there before, more quickly than it refreshes.

So we could make this just slightly smarter and paint a little bit offscreen, so that when we move it left or right, we're just filling in data that's already there.

And that wouldn't be hard, but we didn't have to here; we're just demonstrating that you can take a highly parallelizable problem, change nothing, and still get a really good result.

So that is nice in theory, if you need to render fractals, which probably you don't, but how does this apply in practice?

So let's apply this to maps.

So maps are almost exactly the same problem.

They are very, very, very highly paralyzable and you can just divvy up the work.

And that is what this demo, also created by Andreas Hocevar, does: it just takes that work and only parallelizes it.

And you can see that the effect is more or less the same as we had with the other one. There's a little bit that's not rendered offscreen.

They could improve that the same way, but the interface, the main thread is not locked.

Your gestures are not locked.

Everything is just nice and smooth rendering animation.

So that's really cool.

And again, that's the difference on really high-end hardware, but if you were to take this and put it on considerably lower end hardware, you can imagine just what a big deal this would be.

And you might have experienced this.

If you've used something in, like, an automobile or on a train or something, if they had maps, you might have experienced something like that.

That's why OffscreenCanvas makes a really big difference there.

So now that we have more cores and more abilities, let's actually change our evolutionary experiment a little bit, to work a little bit more like how we come up with really complex things.

So the complexity is not fixed in evolution.

Genes can get longer.

They can get shorter.

And in most complex organisms, like humans, we have sexual reproduction, where genes stitch together and we get the ability to mix ideas together.

So this is really powerful, and this is effectively what I did in this demo.

So we actually have multiple experiments running here, but they're sharing and they're smashing together, and occasionally they gain another gene.

And if that is beneficial to their fitness, then they keep it.

And if not, that dies out.
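In code terms, the change from the earlier clone-and-mutate sketch is small: you add crossover between two parents, plus an occasional insertion or deletion, so gene length is free to drift. Again just a sketch, reusing the hypothetical randomTriangle from before:

// mix two parents: take each instruction from one parent or the other
function crossover(geneA, geneB) {
  const length = Math.max(geneA.length, geneB.length);
  const child = [];
  for (let i = 0; i < length; i++) {
    const pick = Math.random() < 0.5 ? geneA[i] : geneB[i];
    if (pick) child.push(structuredClone(pick));
  }
  // occasionally grow or shrink: complexity is not fixed
  if (Math.random() < 0.05) child.push(randomTriangle());
  if (Math.random() < 0.05 && child.length > 1) child.splice(Math.random() * child.length | 0, 1);
  return child;
}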

Now as I said, previously, that last rendition of the same picture, took several hours to achieve.

But here, I have collapsed the time to, I think it was 10 minutes maybe.

And you can see what a powerful thing this kind of algorithm, this ability to have these characteristics, is: we arrive, with fewer triangles, at an almost as good, maybe even better, rendition of the same picture in just a few minutes.

So.

I think it was pretty awesome.

I hope you also think it's pretty awesome.

If not, thanks for listening anyway.

But I think that this is actually a bigger story.

This is how we build up complexity in ideas too; this is how ideas and inventions work, whether we want them to or not.

So if you've ever seen any of my talks, I think this slide is in almost every single talk that I give, because it's a great story.

When houses were originally wired for electricity, it was for the purpose of artificial lighting.

There were no outlets because there was nothing to plug in.

So people made use of the fact that electricity existed and you could get it.

And they used that socket to create a whole electrical industry.

So people tinkered and they created fans and toasters and refrigerators and stoves and coffee makers and all kinds of things.

And many of them were very bad and some of them were just downright dangerous.

But what happened very, very quickly is that we learned from each other, and we very quickly determined how we could make a more efficient and safer motor, for example.

And as we learned about things and we got more sure how to do it, then we kind of standardized on those.

And that's really interesting.

So whether we want it to or not, with the right tools, some architecture in place, and time, we do always seem to apply the same kind of evolutionary pressures anyway.

So what if we make the web more like this?

This is an article that I wrote in 2015 for Opera that was about Houdini.

So if you don't know about Houdini, Houdini is an effort to explain the magic and expose it to us about how CSS actually works.

CSS is very magical.

It involves lots of different phases and steps, and you just have to take it all or nothing.

But Houdini kind of changes that.

So Houdini breaks down all of these big things and the architecture that CSS does and lets you kind of plug into them and not have to reinvent everything else.

So CSS has this concept of painting for backgrounds and things, and you can register with Houdini, a small script that is a custom paint, and then you can write your CSS and you can give it custom properties.

There are lots of cool things that you can do, but you don't have to do anything more than that.

You plug right into the existing architecture of CSS, which is really cool.
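For a sense of scale, here is roughly what the script side of a custom paint looks like. This isn't the confetti worklet from the slide, just a minimal hypothetical checkerboard:

// checkerboard.js, registered with CSS.paintWorklet.addModule('checkerboard.js')
registerPaint('checkerboard', class {
  static get inputProperties() { return ['--checker-size']; }
  paint(ctx, size, props) {
    // ctx here is a paint rendering context: a subset of the canvas 2d API
    const cell = parseInt(props.get('--checker-size')) || 16;
    for (let y = 0; y < size.height; y += cell) {
      for (let x = 0; x < size.width; x += cell) {
        if (((x + y) / cell) % 2 === 0) ctx.fillRect(x, y, cell, cell);
      }
    }
  }
});

The CSS side then just points at it with background: paint(checkerboard) and can feed it --checker-size, the same way the slide's confetti example does.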

And people can use that to imagine all sorts of things.

This is a website that has many, many, many experiments where there are custom paints and like input, sliders and variables that you can plug in to do all kinds of things.

Many of which, to be honest, like I could not even imagine myself.

Which is really cool, but I can very easily use them by plugging in a simple script, and know that that script is only painting.

So what's interesting about this is as we build up different pieces of the platform, we can build up complexity by pointing to other things that we've already defined.

We can point to existing magic and not have to reinvent that.

So in Houdini, those custom paints are part of worklets, which are sort of a core piece of workers.

And they are effectively a function that you register, which receives a canvas context, so you can draw on it.

And the relationship between offscreen canvas and Houdini custom paint is very, very, very clear.

So I think that that is like really, really cool and powerful.

Like, we don't have to reinvent major parts of the system; we can point to them and reuse them.

So this is actually kind of the point of that tech debt we were talking about in the Smashing Magazine article.

In that, the thing that we pointed out about canvas is that it was introduced as an image, but with a programmatic API drawing surface.

And that is kind of only partially true, because imagine you wanted something that was almost an image, maybe a better image or a slightly different image.

How much of the web platform underneath images would you have to actually recreate?

It's not just the drawing of those shapes.

It's much more than that.

At the time of the Smashing Magazine post, it was really almost all of it.

It was pretty much everything.

But today we have done a lot of that paying down of tech debt, and we have created something where I can present something as a single element.

Again, we use custom elements with upgrades and containment and shadow DOM; if I want, I can expose different parts for styling.

I can plug it into the image preloader.

I can use fetch, and we have a very predictable, exactly-as-the-platform-would-do-it HTTP experience.

I can use promises, touch, async, even WebCodecs, which is not on this list, but this list is actually quite long if we shared all of the things.

And we don't have to reinvent any of this.

We can just use it.

We can plug into it.

We can point and say, what I need is this.

And that's how I use it.

So that's a little bit abstract, I guess.

It might be difficult to imagine if you haven't thought about this a lot.

Like, what would you mean by a slightly different image or a better image?

And this is one that I actually want, because I am a painter myself, thus my Vermeer.

So, you know, we have pan and zoom on the web, but it pans and zooms the whole page.

And what I really want very often is not to pan and zoom a page.

I want to pan and zoom an image, and that comes with its own concerns.

Cause like when I zoom in, I want to load more data.

The web doesn't have those things today.

But it is actually really easy for us to use what we have and imagine them.

And so we can do that.

This is a custom element that I made that allows you to write a small script, sorry, allows you to include a small script, that just defines this custom element.

And it uses all of the pieces of the platform that already exists.

So it would work very much like this, but not quite, so here's what's special about it.

This uses progressive enhancement.

So the image itself is input into this and if your JavaScript fails or something, you'll still get a perfectly functional image.

But this is really nice.

You can include a very small amount of code and you can get this experience for your users.
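My actual element does more than this, but the skeleton of that kind of progressive enhancement is worth seeing: the element wraps the plain img that's already in the markup and only adds behavior once the script runs. This is a simplified sketch, not the published source; it only handles wheel zoom, and panning and touch handling are elided:

// pan-zoom-img.js (simplified sketch)
customElements.define('pan-zoom-img', class extends HTMLElement {
  connectedCallback() {
    const img = this.querySelector('img');  // the plain, functional fallback image
    if (!img) return;                       // nothing to enhance
    let scale = 1;
    let upgraded = false;
    this.addEventListener('wheel', (event) => {
      event.preventDefault();
      scale = Math.min(8, Math.max(1, scale * (event.deltaY < 0 ? 1.1 : 0.9)));
      img.style.transform = `scale(${scale})`;
      // once zoomed in, lazily swap in the higher-resolution source
      if (scale > 2 && !upgraded && img.getAttribute('full-res-src')) {
        img.src = img.getAttribute('full-res-src');
        upgraded = true;
      }
    });
  }
});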

Now what's really interesting about this is not just that I had an idea and I made a custom element, but now as we get that out and we publish it and we share it, we discover new things.

So we can improve them.

So when I first made this, I was really thinking about my phone, where I really frequently want this ability, or my tablet.

But in sharing this and discussing it with some folks, I realized that not all screens are touch screens, which seems very obvious in retrospect, but was a thing I completely missed.

And don't those people also want the ability to, you know, maybe zoom in and see that higher level of detail and be able to pan around?

And the answer is yes!

So, you know, given that I have this thing, and that it uses the platform itself, it's highly shareable and easily customized and changed.

Very quickly we smashed together some new ideas from other places and, you know, added these affordances if you don't have a touch screen to make it still very valuable and useful to you.

And I think if we were to share that some more, we would also improve it very rapidly.

So now that we have these powers, we can also use them and we can try new things.

And one new thing that we're trying is Open UI.

It is a collaboration with lots of big companies (Google, Microsoft, Salesforce) and lots of makers of libraries and frameworks for UI, but also just lots of web developers.

And what we're trying to do is do all of the research out in the open and have the discussions out in the open and propose how we would build the next elements into the Web platform.

So that's really interesting.

My picture thing is not in here, but it really shouldn't matter where an idea comes from; whether it's good or not, what really matters is its fitness.

If everybody gets it and uses it and proves that it's good and it has staying power, that's what we really care about.

Once we do have kind of a winner of an idea, we need to make it like this so that it's very easy to pave the path.

We don't have to invent everything else that's already in the browser, we can point to the existing things and say "it uses fetch" and "this is asynchronous with promises" and you know, so on.

So what's interesting about this, I think, is that there is a model for this that works, or in theory could work really well, which is dictionaries.

Alex Russell gave a talk in, I believe it was 2012, at Frontiers, where he talked about this.

There is a model that is not a standards organization that kind of standardizes language.

Like it's a pretty important thing if you think about it, but the way that dictionaries work is not that words are invented by a committee, unless you're saying the committee is literally everyone.

Anybody can invent a word.

Anybody can invent some slang and if other people kind of grok it and think that's a good word and they start using it, at some point it will enter the common vernacular and we'll have data.

We'll know what it means, because everybody agrees it kind of means that, or at least a really large segment does.

And then, you know what we do, we write it down.

So that's how a dictionary works.

So over the past few years, we have also worked with the HTTP archive to gather this data.

We would need that data to know what is the slang that's arising.

And in the past two years we changed the way the HTTP Archive collects its data about elements.

And we now record it all the time.

I made this little tool that you can use to cheaply and instantly explore this with regular expressions about elements.

So just for example, we can exclude native HTML elements and ask, with a regular expression, for anything that contains "image", or "img", or maybe "picture", like what custom elements are out there that have these in the August 2021 data set, and you can see, wow.

There are actually, really a lot of them.

All of these are imagining something special about images and we can actually drill down into this and see how many uses and what URLs there are and everything.
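The tool itself runs those queries against the archive's data sets, but the heart of that kind of query is just a couple of regular expressions over element names, something like this (the names array and the patterns are made up for illustration):

// custom element names must contain a dash; native HTML element names don't
const isCustom = (name) => name.includes('-');
const aboutImages = /(^|-)(img|image|picture)(-|$)/i;

const names = ['img', 'responsive-image', 'my-picture', 'super-button'];
const matches = names.filter((n) => isCustom(n) && aboutImages.test(n));
// -> ['responsive-image', 'my-picture']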

And we can also reference them and create permalinks to share them via Twitter or email or whatever.

But now actually we are recording this data every month.

For the past two months this has been going on; before that, it was only like once or a couple of times a year, but now we are actually collecting data.

And what's interesting is that once you have data, you can build other things, like this thing that I only recently started working on, which just watches that archive data for trends and things.

So, this is a thing, I don't know if this is a good metric, but it's an interesting one: there are 518 custom elements that appear in the HTTP Archive in all four data sets, and in every data set they have more occurrences than they did in the previous data set.

So that's interesting.

It would be interesting to watch a trend like that over time.

Now, all of these, every single one of them appears on less than 2000 pages, total, in the massive dataset.

So I don't think that any of these are like properly established slang right now.

Like, we should probably not rush to standardize any of these, but now we have the ability to develop the tools to watch this.

And now that you know this, and now that you think about this, go do it.

Go invent the slang of the future.

Don't just make another library; convince other people to use your custom element. If you find one that works really well, go out and evangelize it. And if we can build up the slang, and we have the data to prove that we built up the slang, and it is really, really close to the platform, then we'll just write it down, and that should be really, really great.

And we should know that it's really great.

It won't be a mixed result because we already know what the result will be.

So that is the vision that I hope we achieve.

And I think we've come a long way to fulfilling it.

I hope that it makes you as excited as it makes me, but regardless.

Thank you very much.

The evolution of the web and canvas

Brian Kardell

Igalia, September 2021

Where do web standards come from?

picture of a dog using a computer with the caption "I have no idea what I am doing".

  • It's complicated!
  • Historically
    • Most ideas ultimately fail to arrive
    • Results are very mixed on ones that do
    • Some jump the queue
    • Results are very mixed on those too!

IM JUST SAYIN' YOU COULD DO BETTER

Let's talk about <canvas>

HTML5 logo

img & canvas

Animated demo of the functionality of canvas-based Image evolution tool using two side by side images of the Mona Lisa

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
ctx.fillStyle = 'green';
ctx.fillRect(10, 10, 100, 100);

Literal pixel data: A Uint8ClampedArray representing a one-dimensional array containing the data in the RGBA order, with integer values between 0 and 255

let imageData = ctx.getImageData(sx, sy, sw, sh).data;
/* [red,green,blue,alpha,
    red,green,blue,alpha,
    red,green,blue,alpha,
    ...
] */

screenshot of Mac OS X widget dashboard

Sharing a kind of low level API left a lot to the imagination...

img & canvas

Animated demo of the functionality of canvas-based Image evolution tool using two side by side images of the Mona Lisa

two images side by side. On the left is a reproduction of Vermeer's Girl with a Pearl Earring. Beneath this is the text "Goal image". On the right is a version of the image composed of triangles of various shapes. It is however clearly identifiable as the Vermeer painting.

Let's go back to canvas...

Image of a digital drawing tool based on MS Paint but in the browser - a pixel interface with colored squiggles overlaid and various boxes of canvas editing options

Screenshot of Glitch's 'pixelatize' tool interface

Demonstration of the pixelatize tool using a screenshot from Jen (the creator's) presentation about pixelation. The screenshot is heavily pixelated.

Further demonstration of pixelatize application showing pixel size and color snap capability that Brian has enhanced with a slider tool which allows real time changes to the image

Video demonstration of the pixelatize application algorithms applied to each frame of Jen's video presentation to make the entire video pixelated.

Demonstration of a <canvas> interface showing a Photoshop-style image rich with colour and detail

<canvas> is pretty niche tho, right? I've never used it?

<canvas> is in the top half of elements in the HTTPArchive ~ on par with <video>

image of a Google map of Pittsburgh

Many uses are pretty complex... What do we improve or pave?

2014: Can't we do better than this? How can we lean into the evolution that seems to occur?

Screenshot of a piece written by Brian and a colleague for Smashing Magazine called "Laying the groundwork for extensibility"

We want high level features. We need an architecture that lets us lean into this and rethink how we get there and how we can adapt and remix ideas. We have tech debt.

OpenLayers vector tiles animation, showing the jerky performance and less than ideal UX

Plenty of talks on why perf matters/low end

shrugs emoticon

... but what about....

shrugs emoticon

For whom the perf bell tolls...

"The button for the internet..."

icons for IE, Chrome, Edge, Firefox and Safari

  • TVs and Streaming Devices
  • Game Consoles
  • eReaders
  • Smart Refrigerators, etc
  • Car, Train, Airplane Infotainment
  • Digital Signage
  • Point of Sale
  • Kiosks
  • GPS Devices

To the right of the list is a small image of a Thermomix TM6 with embedded smart screen

Embedded browsers

  • Hardware is lower powered
  • Hardware updates more slowly
  • Many interfaces more inclined to canvas/SVG*
  • (..Just like those MacOS widgets!)

Screenshot of Igalia's WPE (WebKit port optimized for embedded devices) info page with a diagram of the port architecture

Enter: OffscreenCanvas

Free the main thread!

// 'break off control'
let offscreen = canvas.transferControlToOffscreen()

let worker = new Worker('myWorker.js')

// send it to a worker!
worker.postMessage({
  offscreen
}, [offscreen])

... and... that's kind of it. (+ a constructor)

img & canvas

Animated demonstration of the OffscreenCanvas Image evolution tool capability showing a range of settings being applied to an image of the Mona Lisa

If you can split up the work, new possibilities

image of the whole mandelbrot set

Animated demonstration of the OffscreenCanvas tool applied to the mandelbrot set example of fractals

OffscreenCanvas

Video of the OpenLayers vector tiles animation with OffscreenCanvas, demonstrating how much smoother this is than before.

With more cores, let's change evolution experiment up to work more like the evolution of complex organisms...

Video demonstrations taken from Brian's web site

Exploring the adjacent possible given the tools we have, and the smashing together of ideas is how we build great things

black and white photo of a fan plugged into a light socket

With the tools and architecture and time.... We apply all of the same evolutionary pressures.

screenshot of Brian's 2015 article for Opera: "Sex, Houdini and the Extensible Web"

<script src="https://unpkg.com/extra.css/confetti.js"></script>
<style>
h1 {
/* It can define/use properties */ 
	--extra-confettiNumber: 80;
	background: paint(extra-confetti);
}
</style>

Houdini paint worklets share the same magic!

screenshot and demo of Houdini.how worklet library

As we expose things, we can build up complexity by pointing to existing magic... It's a canvas context.

What is the machinery that makes images work? If I wanted something that was almost an image - how much do I need to recreate?

At the time of that Smashing Magazine post, it was "most of it".

Today, I can just point to and reuse things rather than reinvent...

  • presenting as a single element
  • upgrades
    • containment
    • exposing parts for styling
  • image preload
  • fetch
  • touch
  • async rendering

video of pannable/zoomable images in action

<script src="pan-zoom-img.js"></script>
<pan-zoom-img>
  <img
      crossorigin="Anonymous"
      width="640"
      height="427"
      src="..."
      full-res-src="..."
      alt="some alt text"
    />
</pan-zoom-img>

Screenshot of a summary article about "Pannable/Zoomable images with hi-def lazy load"

Screenshot from the Open UI proposal/draft webpage

It's considerably easier to pave paths like this...

Photo of a person leafing through a dictionary

Screenshot of the HTTP Archive front end.

Animated demo of the HTTPArchive DOM Explorer tool interface that Brian created

List of data that the tool search results returned showing 518 custom elements appear in 4 HTTPArchive data sets and increased each month

Go invent the future, we'll write it down.

Thanks