Opening title.
So, Marcos Caceres, he's moved here to Melbourne recently. He grew up in part in Brisbane, but he's lived all over the world.
You were born in Argentina, were you? - Right.
- So he's at Mozilla.
He has previously been on the W3C's Technical Architecture Group, and at Mozilla, he works on the DOM implementation. He is also, along with Yoav Weiss and Chris Wilson (and if you don't know who Chris Wilson is, just go look him up).
You should.
He is now on the Chrome team, but was originally on IE, from IE3 onwards, and is one of the most instrumental people in the history of CSS in many ways.
So the three of them are the co-chairs of the Web Platform Incubator Community Group.
So that's something you should check out as well. So Marcos spends a lot of his time standardizing technologies but also implementing them in the Chrome code base, and he's going to talk about something we've covered quite a lot at Web Directions events over the last three or four years but is becoming increasingly powerful and amazing, Service Worker.
So would you please welcome, Marcos Caceres. (applause) - Thank you. Thank you.
Just testing if this, yeah.
(chuckles) So, yeah, thanks for the great introduction. You just, like, ruined my first line, no.
(laughs) So, yeah, I work for Mozilla's DOM team, and as John mentioned, I've been doing web standards for getting onto nearly a decade now.
So my primary interest in web standards, and also how I kind of got involved with Mozilla, was around this idea of being able to take web technologies basically offline, to do something comparable to native applications with web applications. So I've been focusing on that for the last decade. Our first approach, or at least my first approach, was with W3C Widgets.
Who's heard of the Widgets? Good, nobody.
There, one person.
So that went on to become Cordova, Apache Cordova. Oh, was that just, "Pity, I'm going to put my hand up"? (laughs) So, anyway, who's heard of, like, Firefox OS? Yeah, some people.
So, that was another attempt, like that was Mozilla's attempt to solve that problem.
And it was quite an ambitious project, and it didn't turn out or it didn't pan out how we had hoped.
But out of the ashes of, like, the widget standard, so basically packaged web applications, this technology, the service worker technology, emerged. Another thing that emerged from these processes that we went through, as you heard, as John mentioned, is this Web Platform Incubator Community Group, and all these activities. And so, we've been learning along the way how to actually make standards, how to make standards for the web.
We didn't really, even though we work on standards, we didn't really understand the web.
And the reason we didn't really understand the web was because HTML5 is really a reverse engineering of IE6.
So IE6 was basically 90% of the market.
It was not an open source project.
And basically its behavior, so how it parsed HTML, how it dealt with quirks and so on, went on to become what the HTML standard is. That's why it's so big, weighing in at, like, 800 pages or something like that. It is basically, like, "Okay, here's how browsers kind of work, but here's how Internet Explorer really works." And that's what we basically needed to copy, because that's what basically everybody expects. So, from that, another thing that we learned is, like, who's used...
Everybody's used the DOM, right? Who loves that API? Yeah, Neil loves that API.
So, it's a terrible API.
And why is it so terrible? So, it's because it was made when Java was the new hotness, right? So, after 1996, the committee got together and it's like, "We need to make the DOM standard, but we should be able to implement it in Java and C++ and whatever," which is a great idea in principle. But Java and JavaScript are vastly different. Java only last year got the idea of closures. Anyway, from that, we made lots of APIs, and the community went, "You know, we really kind of suck at making APIs.
We're really bad at it.
So we should really not focus on trying to make fancy APIs.
What we should do is just find what the primitives are and then just give those primitives to developers. And you folks can have fun." So that's where this Extensible Web Manifesto idea came from. And essentially, service workers are a kind of embodiment of those principles, where, like, "Okay, we suck, but we're going to do our best, but we're just going to give you very, very primitive things and you have to kind of deal with the complexity of them because they're very, very simple.
And from very simple things, you can build very complex systems." So, sorry but not sorry.
Service workers are a bit hard.
And I'm going to try, with this talk, to show you that they're actually not that hard, and that you can approach them in various ways. So, who here has actually implemented a service worker, or a site that uses service workers? So it's, like, one and a couple of people down the back. Okay, so relatively new technology.
So if you've actually, if it's piqued your interest at some point, you've probably seen this.
So we have navigator.serviceWorker.register. What might be fun for you is, like, if you have a laptop, you can actually code some of this as we go along.
So we're not going to do like the service worker bit, but we're going to do a little bit maybe later. You can try some of these APIs.
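Roughly, that registration call might look like this; /sw.js is just an assumed path for the worker script:

navigator.serviceWorker.register('/sw.js')   // '/sw.js' is an assumed file name for your worker script
  .then(registration => console.log('Registered with scope', registration.scope))
  .catch(err => console.error('Registration failed', err));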
So what is the architecture of the service worker? It's essentially a worker, so you have your normal JavaScript running on the main thread, and then you can spin off another thread, which is essentially a service worker. So it's literally just a little daemon, a little thread that wakes up to do some work for you as needed, and then it goes back to sleep.
It's not long-lived.
It doesn't hang around.
It just runs once and then it shuts down.
So that's essentially the architecture.
So what does it actually do? It's really just a glorified event handler. That's all it really is.
All it does is it allows you to catch events. So it has this life cycle.
So when you first do the registration, you get an Install Event, and it gives a little event object, and you can deal with it.
So typical things that you would do here are, like, build up your caches, do any checks, like, "Do I need to update, am I being updated," and so on. And you can purge things that you might already have, to do offline things.
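As a rough sketch of that install step, assuming the usual pattern of pre-caching during install (the cache name and URLs here are made up):

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('my-site-v1').then(cache =>
      // pre-cache a few known URLs; these paths are just placeholders
      cache.addAll(['/', '/styles.css', '/app.js'])
    )
  );
});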
So activate: remember how I told you how it wakes up and shuts down? So when it wakes up, you get the activate event. Fetch is probably the most interesting one. Any network request, so an image or anything else the page asks for, has to go through Fetch.
So anytime the page tries to get something, Fetch will be invoked.
And you can actually grab that Fetch request and you can manipulate it in any way you want, mostly. There are some obviously security restrictions there, which I'm not going to cover, but that's the essence of it.
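A minimal sketch of a fetch handler, serving from the cache when there's a match and falling back to the network otherwise; this is just one possible strategy:

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});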
And then you have the more fancy ones.
So for Push Notifications, for instance, you get a Push event.
And then there, you get the event object with a little bit of data.
Very simple.
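A sketch of handling that push event; the payload shape here is assumed, and it assumes notification permission has already been granted:

self.addEventListener('push', event => {
  const data = event.data ? event.data.json() : {};   // the little bit of data the server sent
  event.waitUntil(
    self.registration.showNotification(data.title || 'New message')
  );
});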
And then, for Sync, for Background Sync, which is an API that we're currently implementing in Firefox and it's already I think implemented in Chrome, so you just get a little Sync event, which, again, it's super simple.
It just has a bunch of tags, and then you say, "Okay, this sync is for this thing.
Now I can make a request." And that, again, runs at specific times.
So for instance, the user walks into a wi-fi enabled space, and then they get that event and you can basically send requests up.
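A sketch of a sync handler; the tag name and the sendQueuedRequests helper are hypothetical:

self.addEventListener('sync', event => {
  if (event.tag === 'send-outbox') {          // the tag you registered from the page
    event.waitUntil(sendQueuedRequests());    // hypothetical function that flushes your queued requests
  }
});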
So there's no super complicated stuff.
It's just events.
That's the takeaway from basically that.
So, going back to the page, when you register, you get this registration object.
So let's talk about registration.
So let's behold the registration object.
As I said, it also mirrors the life cycle events. You can do Push Registration on there.
So if you were actually to go and look at it, you'll actually see there's a Push Manager and you can actually register for Push Notifications. How many people have actually implemented a push server here? There's one.
I'm impressed.
So one person.
But the reason is, I've tried to do it as well, and it's really freaking hard.
There's a lot of work to actually get that going. There's a lot of coordination with other companies, and you need to find a service provider, and so on, to send those push notifications.
And in the future, we'll add a whole bunch more stuff. But it's a lot of work.
So we don't want this stuff to be daunting. So there's one weird trick you can use.
(laughter) And I only learned this weird trick, I was working on a project with service workers last year. Two months into the project, after a lot of pain, I learned this one weird trick, and it completely changed the way I develop with service workers.
Most of the APIs are available on the window object. You don't actually need to create a service worker. They're all there for you.
Again, if you just see the service worker as a place that receives events, then you can basically say, "Okay, that's all it does, but I can do all the service worker stuff, like the hard stuff. I can actually just do that on the window object. I don't need to go through all the service worker setup pain." So this enables a whole bunch of really interesting things, because you can experiment with stuff and get up and going really quickly.
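For example, you can paste something like this into the dev tools console of any page served over HTTPS, with no service worker involved; the cache name is made up:

const cache = await caches.open('playground');   // Cache API, straight from the window
const response = await fetch(location.href);     // Fetch API, also on the window
await cache.put(location.href, response);
console.log(await caches.keys());                // ['playground', ...]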
So what are the actual building blocks? So there's the Fetch API.
So who's used Fetch? A couple of people, okay.
So you're going to see Fetch.
It's super easy.
So then there's the cache API itself, which I mentioned a couple of times.
Then you can extend that and get into notifications and also the push API for push messages, and so on. So that's a whole bunch of stuff we're adding in the standards world that you will essentially be able to use both on the window object and also in service workers.
So if you've never seen Fetch and you want to bring up your dev tools, you can just try it.
Just put in a URL.
So go to google.com and just try Fetch.
So Fetch returns a promise, and that promise resolves with a response, and then you can just deal with the response.
So we're going to talk about in detail what the response object is.
But the example here is just essentially, you make a Fetch request.
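As a quick console sketch, fetching whatever page you happen to be on:

fetch(location.href)
  .then(response => response.text())
  .then(text => console.log(text.slice(0, 100)));   // first 100 characters of the page's HTML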
So we can also model requests out to the network, or to basically any URL, by actually having a proper API. So there's actually a constructable object, which is the Request object.
So that takes a URL and a bunch of options. So, like I said, a URL.
The method that you're going to send out.
So Get, Post, Head, and so on.
What CORS mode.
Whether the thing should redirect.
You can actually control manually if it redirects, and so on.
You can modify the referrer and you can set the referrer policy.
If the referrer doesn't mean anything to you, it might be okay.
There are places where you might want to modify the referrer policy, which is basically, which page did this request come from.
It just gives you that information.
And most of the time, it's just going to be the origin.
For instance, if you wanted to track, for analytics purposes within your own site, okay, they're navigating from A to B, and then B to C, you could actually see which pages. And that's called unsafe-url because it's unsafe. You don't want to do that. (chuckles)
So, and you can also set custom HTTP headers as well. So, here's a fancier example.
So you've got that request to /test.
We're sending the referrer, and there's the unsafe-url, essentially the referrer policy being honoured. So, if you were to actually feed that into the Fetch API, you would see, for instance with just the referrer stuff, it come out and be reflected as you would expect.
So you can see that additional headers are added in, which are just the standard browser ones.
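A sketch of a Request built with those options; the path, referrer and header name here are made up:

const request = new Request('/test', {
  method: 'GET',
  mode: 'same-origin',
  redirect: 'follow',
  referrer: '/some-page',          // pretend the request came from this page
  referrerPolicy: 'unsafe-url',    // send the full referrer, as discussed
  headers: { 'X-Whatevs': 'awesome' }
});
fetch(request).then(response => console.log(response.status));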
So when you get back from fetching, when you make the request, obviously you're going to get a response.
So, again, this is just a simple example of, "Well, let's post." And here we could send, we could post in the body. We could JSON.stringify an object.
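Something along these lines, where the endpoint is just an assumed example:

fetch('/api/things', {                              // assumed endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'example' })
})
  .then(response => response.json())
  .then(data => console.log(data));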
So you can see, it's composition of basic APIs to create or represent a primitive, which is a request. So let's move on now to what we get back.
So a response.
Again, a response will have a URL.
So imagine you got redirected.
So it might have a different URL to the request. Then there's the type: whether it's basic, whether it was a network error, or opaque if you tried to go cross-origin, and so on.
Whether it was redirected, the status code that you got back. A really cool one is .ok.
So you can say, "If this response is okay, do all this funky stuff." Then there's statusText, headers, and so on.
This is a kind of interesting one.
You can only use the body of a response once, because these things are being streamed. So you can't, for instance, be writing it to a database while you're also trying to display whatever is being sent back to you at the same time.
So I'll show you how you can overcome that limitation.
Here's, like, how you would make a 404. So you have a body, some HTML with a heading 1, and you say new Response with the body, the status is 404, the statusText is "Not found".
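Roughly, that example looks like this:

const body = '<h1>Not found</h1>';
const notFound = new Response(body, {
  status: 404,
  statusText: 'Not found',
  headers: { 'Content-Type': 'text/html' }
});
console.log(notFound.ok);   // false, because the status isn't in the 200 range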
Pretty simple, huh? So these APIs are meant to be really simple. They're very, very simple primitives but extremely powerful.
Then we have the Body mixin.
So, essentially, some additional methods that we can call.
So you can ask for the body to be given to you as an array buffer, as a blob.
Where might you use a blob, for instance? - Binary data.
- Binary data, so like an image.
You can request the image and, I don't know, shove it into a canvas or something like that. There's form data, and you can request just the JSON, so it will automatically parse the JSON for you. If it craps out, the promise it returns will be rejected. Or you can just get plain text. So, again, a simple example: remember how I talked about how requesting the JSON will consume the body, because we've touched it, we've gone in there and looked at it.
So, again, say we've made a request and got a response back. If we want to get the JSON out of it but also send the body to multiple places, we can clone it first.
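A sketch of that, with a made-up URL and cache name; this uses await, so paste it into a modern console or wrap it in an async function:

const response = await fetch('/data.json');   // assumed URL
const copy = response.clone();                 // clone before the body is consumed
const data = await response.json();            // this consumes the original body
const cache = await caches.open('my-cache');
await cache.put('/data.json', copy);           // the clone still has an unread body
console.log(data);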
So another object I mentioned a couple of times was the headers.
Setting custom headers.
Again, this is just a map.
So, again, we have get and set and has, and a bunch of fancy iterators, just to get the keys and the values.
So a simple example here.
We set the header, whatevs, to awesome.
And then we can check if the request's headers has whatevs, and it does. Whatevs.
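So, roughly, the example goes like this:

const headers = new Headers();
headers.set('whatevs', 'awesome');
console.log(headers.has('whatevs'));   // true
console.log(headers.get('whatevs'));   // "awesome"
for (const [key, value] of headers) {  // one of those fancy iterators
  console.log(key, value);
}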
So what's the hardest problem in Computer Science? - Naming things.
- Naming things? And what else? Cache invalidation.
So what does the platform now give you? A cache API that allows you to name things and do cache invalidation.
(laughter) So, first thing to notice about or the first thing at least that got me when I started working with the cache API, I made the stupid assumption that it would respect HTTP cache directives. Do not fall into this trap.
The cache API does not respect HTTP cache directives. Do not be fooled.
So you have to do all the cache invalidation yourself. Cache invalidation is really hard.
So you have to plan your strategy.
And like I said, naming things.
So who's seen, like, the number of different hacks that are being used to name things? There's, like, hashing the file and then putting the hash into the file name, and build tools already do this. For me, that stuff is kind of insanity. But, again, we're still, as a community, trying to find the best way of managing and then invalidating these service worker caches.
We haven't come up with anything super useful. And another thing that you should know is that just because your site is cached for offline doesn't mean that the cache is not volatile.
So if there's a lot of pressure from other sites and your site hasn't been used in a while, the browser can just kick all your stuff out of the cache. It will just evict your cache.
So don't trust the browser.
Do not trust any of the browsers.
And don't think just because you put something in the cache that it's going to be there forever. It can be evicted from the cache.
So some people have tried to hack around that by using IDB and then putting a RESTful API in front of IDB, so you can basically take requests from the service worker, catch them, and then reroute those requests to IDB and send back the data from inside the database. I'm not suggesting that's a good thing. So, in your developer tools, you can access the caches, and you can open a cache simply by providing a name.
So there's window.caches.open, and you just give it a name.
Compare that with IDB: you've gone through the pain of hundreds of lines of setup just to create a freaking database.
No pain here.
So we have open, and just the name.
You can delete, you can check has, you can get the keys for the caches that you still have, and you can use match to check which cache has a particular request. Here's a very, very simple example.
We have a bunch of URLs.
We open a cache and we addAll these URLs automatically. Who can see a problem with that, where you just take a bunch of random URLs and you throw them into a cache? What could go wrong there? So in the initial version, it would just blindly go and fetch stuff.
But if you had, like, a 500 error, it would just throw it in.
Like a 404, it would just throw it in.
So then your users are using your site, and suddenly something's going wrong, and nobody got notified that there was an error. So we've fixed that now.
But that was a violation, in the standards process, of the Extensible Web Manifesto.
But we promised not to be clever, and we tried to be clever, and we screwed it up. So we've fixed that now.
But, again, I'm not a fan of addAll, because you can just add individually and do the checks that you need to do.
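A sketch of adding things individually and checking each response first; the URLs and cache name are made up, and it uses await, so run it in a modern console or an async function:

const urls = ['/', '/styles.css', '/app.js'];   // assumed URLs
const cache = await caches.open('my-site-v1');
for (const url of urls) {
  const response = await fetch(url);
  if (!response.ok) {                           // skip 404s, 500s and friends
    console.warn('Not caching', url, response.status);
    continue;
  }
  await cache.put(url, response);
}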
So, to view the caches in the browser, you can just open your dev tools. Not all browsers have service worker support, obviously, but in Firefox, if you go to the settings, there's a little storage thingy, a little checkbox, and that gives you a little tab.
And there, you can go to cache storage, and you can see what's inside there.
And Chrome also has the same thing.
They've just added, in Canary, this Application tab where you can basically control service workers, and they have cache storage there.
And they have some really nice features which I hope will also come through the Firefox tools, like you can delete directly from the cache from there.
So as you're developing, this is so useful. And you can see your caches and so on, so it's really nice.
Okay, so cache API, again, super simple.
Add, Delete, Add All.
Keys, Match, Put.
So the only really useful one is Put.
And then, like I said, you have to clone your request. Remember this.
I screwed this up lots of times.
You're going to have a bad time.
Clone.
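For instance, one common pattern, sketched here with an assumed cache name, where the response gets cloned before going into the cache:

self.addEventListener('fetch', event => {
  if (event.request.method !== 'GET') return;   // only bother with GETs here
  event.respondWith(
    fetch(event.request).then(response => {
      const copy = response.clone();             // clone, because a body can only be read once
      caches.open('my-site-v1').then(cache => cache.put(event.request, copy));
      return response;                           // the original still goes to the page
    })
  );
});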
And I'm quickly going to show you one example. So, earlier, Elias, Elise, sorry, was wondering, from Tim's talk, whether you could, for instance, use service workers to address some of the cost of inlining critical CSS, so taking that critical inline CSS and pre-caching it. So we had a quick play-around.
And again, this isn't right.
This isn't a solution.
It just shows you how easy it is to take these primitives, and I think in less than 20 lines of code, you can already compose something. So you can take the style element, take the text out of it, and then we can basically create a response and put that in the cache.
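The play-around was roughly along these lines; the cache name and the fake URL it's stored under are made up, and it assumes the page has an inline style element:

// grab the inlined critical CSS out of the page
const css = document.querySelector('style').textContent;
// wrap it in a Response, as if it had come from the network
const cssResponse = new Response(css, {
  headers: { 'Content-Type': 'text/css' }
});
// stash it in a cache under a made-up URL a service worker could later serve
const cache = await caches.open('critical-css');
await cache.put('/critical.css', cssResponse);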
So one that I told you you could play with, if you haven't played with it: Notifications. So, again, it's 10 lines of code to create a notification, so you can hopefully see that at the top there. Oh, hi.
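Roughly, from the window, something like this; the title and body text are just placeholders:

Notification.requestPermission().then(permission => {
  if (permission === 'granted') {
    new Notification('Hello from the window', {   // no service worker needed for this
      body: 'Assumed title and body text'
    });
  }
});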
So, Push, again, like I mentioned, is a ton of work to set up.
What you can do in dev tools is emulate it.
So, here in Firefox, we have this about:debugging page for workers. So in this tab, you can basically...
Remember, I said the service worker runs and then shuts down, so you can start it and then you can send a push.
So, some takeaways: learn to love the specs.
Go and read the specs.
If you're interested in specs, go ahead and check them out. If specs suck for you, like they're annoying to read, there's MDN; I'm sure you've all been to MDN.
So, again, it's community resource.
If you find something wrong, fix it.
Contribute.
It's for the good of everybody.
It's browser agnostic.
People of Melbourne, if you're interested in learning more, google Fort.js workshops meetups; we do basically low-level, JS-only API workshops, like once or twice a month.
And if you're interested in learning more about service workers specifically, we're going to be doing a hands-on service worker session.
And go and make stuff.
Have fun.
(applause) - Thanks so much, Marcos.
So while Fiona sets up, question.
Only time for one.
We have Paul Mill running around.
Who's got a question for Marcos? Oh, Yoav.
- Oh, dear.
- Yeah, you're in trouble now.
So Yoav is speaking tomorrow.
He's also one of the co-chairs of the Web Platform Incubator Community Group.
And it is an impromptu kind of community meeting happening right here.
- Hi.
You mentioned overriding.
Fetch basically can override the referrer header. Why, and isn't that a security-- - Yeah, like I mentioned, you wouldn't do it normally, but if you're tracking...
Where are you, Yoav? I can't see you.
- I'm here.
- Oh, there you are.
So, like I said, you could do it for analytics purposes within your own site.
- Not to an external site.
- No, not to an external site.
No. So, where this emerged from was: in browsers previously, we had a bug where, if you went to Bugzilla, we had a link to pull the profiles from Gravatar, and we were sending the referrer along with a question mark person ID. So all those IDs were leaking out.
So, because of the referrer header, we had to shut down the referrer and just turn it into an origin.
But there is the unsafe-url.
There is the option that, if you wanted to see where people were linking from within your own site, you could set that. - Okay, cool.
- All right.
Once again, thanks, Marcos, and come chat and ask more questions.
Thanks so much, Marcos.
- Thank you.