Fun with Sensors and Browser APIs for the web!

(upbeat music) - Thanks, that was very nice.

Thank you so much for choosing to come to my talk. I know that there are a bunch of other awesome ones to choose from, but today you get to experience fun with sensors and browser APIs.

As we just heard, my name is Mandy Michael, I am on Twitter, and I am on CodePen.

Sorry, that's the one that you want.

If you want to tweet at me, please feel free. If you have any questions, you can DM me or just tweet as normal.

Some of the demos are up on CodePen.

So if you wanna check those out, you can go and do that later.

So don't worry too much about the code bits 'cause you can always go and have a look at it. I do work for Seven West Media in Perth.

It's a news organisation, for those that don't know. We build the westaustralian.com, perthnow.com and the Seven News website as well, along with a bunch of regional sites and editorial tools. I'm not gonna be talking about the news, though.

But I am passionate about storytelling and how people tell stories online.

So that will come through in the talk today. Most importantly, though, this is my dog.

And the reason that I mentioned this is because in my demos, I tend to use the word Jello a lot.

And people see it and they think I'm talking about like jelly, and it confuses them.

So when you see the word Jello, you know that I'm talking about my dog and not about the food, which will make sense later when I say some things that require the word Jello.

So like I said, we're gonna be talking about sensor and browser APIs.

Specifically, we're gonna talk about three: the speech recognition API, the device orientation API and the ambient light sensor. Now, before we get into the talk, I want to preface this with something very important. This is called fun with browser and sensor APIs. So I haven't built anything particularly serious with it. However, don't let that put you off.

I will talk about practical applications of these APIs as we go along.

It's just, I like to tinker with things.

So all of this stuff I make is like fun and silly. That's how I learn, so that's what we're gonna get today.

I also see this as a really good opportunity for us to think more creatively and experiment and figure out how we wanna use these APIs and how we wanna use, I guess, any tech in the future. Because they're kind of new and some of them are a little bit experimental. So it's important to start thinking about how these technologies can be applied in both fun and practical ways.

So like I said, they are experimental, some of these things will work in a lot of places and some of them in a bit less, so (coughs) sorry (clears throat).

So I just say remember, everything started out as experimental technology.

It's really important that we know how to use this stuff as we move forward, because otherwise, it'll be here and nobody will know what they're doing. So, one last important piece of housekeeping: I am not going to specifically talk about every single browser and its support in each section. Instead, this is my legend. A full logo means that it's got full support. A flag over the logo means you have to enable a flag in the browser for it to work.

And a star means that it is coming soon.

So usually, that means it's either in Canary or it's in Firefox Nightly, for example.

So with that in mind, let's talk about the Web Speech API.

There are a couple of sneaky jellos throughout this talk if you want to keep an eye out. So the Web Speech API basically enables you to incorporate voice data into your websites and web apps. There are two interfaces in the Web Speech API: speech recognition, which basically understands human voice and turns it into text.

And then speech synthesis, which reads out text in a computer generated voice. For the purposes of today, we're going to focus solely on speech recognition. If you do wanna check out speech synthesis, you can go and have a look at the MDN docs. They're really good.

There are a couple of things that I'll touch on that are applicable to both speech recognition and speech synthesis.

So you get part of the way there anyway.

So, we're gonna try and do a demo straight off the bat here, everyone.

Where's my mouse? There it is.

Okay, hopefully this works.

I'm just gonna say some things and then hopefully it will work.

Yeehh, (audience claps) okay.

Success, one down several to go.

So what this was, was speech recognition working in Chrome.

The reason that I was worried is that it needs the internet to work. This does not work offline, you do need Wi-Fi, so phew.

And there are two ways that speech recognition can work: you can either use the interim results, which is what I'm doing here, and that's why, as I was speaking, it was kind of building up the text on screen. And you might have noticed that some of the words were different, and the further along in the sentence it got, it changed them, because it had more confidence in what I was saying and a bit more context around the paragraph.

The other option is to wait for me to finish speaking and then display the text on screen.

So most of the examples that I'm gonna show you from now on will be waiting to hear something and then display it on screen or do a thing. But I really like this example because you kind of get a feel for how speech recognition is actually working in real time.
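(Just to preview the difference in code terms, a minimal interim-results sketch might look something like this; it's not her exact demo code, and the paragraph selector and the continuous flag are assumptions. Her walkthrough of the basics comes next.)

```js
// A sketch of the interim-results approach (not the exact demo code).
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.interimResults = true; // emit partial results while you speak
recognition.continuous = true;     // keep listening across pauses (assumption)

recognition.onresult = (event) => {
  // Join everything recognised so far; interim results get refined as the
  // engine gains more confidence and context about the sentence.
  const transcript = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join('');
  document.querySelector('p').textContent = transcript;
};
```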

So we can go straight into some code.

Before I show you some applications.

To get started, we need a couple of things. First, we need the speech recognition constructor. And that's going to create a new speech recognition object instance.

What that's going to do is give us access to the API's methods and properties, and there are quite a few of them.

So I'm just gonna focus on the real basic ones to get you started.

So first up, we need the start and stop.

(clears throat) Sorry, I've got a bit of a cough at the moment, the start and stop methods.

So they pretty much do what you would expect: start starts the speech recognition service listening to incoming audio, and stop stops the speech recognition service from listening to incoming audio.

What stop also does is attempts to return a result from the audio captured as well.

So it's slightly different; it does play an important role in returning the values. So you definitely don't wanna miss that.
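Wired up, that might look something like this (a sketch only; the button ids are assumptions):

```js
// Sketch: wiring start and stop to buttons (the button ids are assumptions).
const recognition = new (window.SpeechRecognition ||
  window.webkitSpeechRecognition)();

// start() sets the service listening to incoming audio.
document.querySelector('#start').addEventListener('click', () => {
  recognition.start();
});

// stop() stops listening and attempts to return a result from the audio
// captured so far.
document.querySelector('#stop').addEventListener('click', () => {
  recognition.stop();
});
```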

The next thing we need is a couple of event handlers. So there are quite a few of these; most of them just listen to changes in recognition status. The first one is onresult, and that will run when the service returns a result. This is really special, because it's executed every time someone speaks a word or several words in quick succession.

And then the other one is onspeechend.

And this will run when speech has stopped being detected, which basically means when there's been a pause for a certain amount of time. The final thing we need, and probably the most complicated thing I'm gonna go through today, is getting the result. I'm gonna go through this, and then I'm gonna explain why it's like this. So what we need to do is access a SpeechRecognitionResultList object.

And this basically contains all of the information about the result. Then we need to get the first result object.

And that contains an item called a SpeechRecognitionAlternative object.

And that contains what we actually want, which is the transcript.

And that's basically what it thought it heard from the microphone.

This is very convoluted.

The reason that it's like this is that you can actually have multiple alternative objects returned from the result. By default, it's one, which is why you go through these steps.

I actually haven't found a use case for more than one. I'm not really sure when you would need this, I guess if you're doing something really complicated and fancy, but typically you just need to do this, which is frustrating from an API perspective, 'cause it would be nice if we could just go event.transcript or something, but sadly, that doesn't really work. So in summary, we get the results list object, we access the first result, we get the first alternative and then we access the transcript.
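Written out as code, that chain looks roughly like this (assuming the recognition instance from before):

```js
recognition.onresult = (event) => {
  const resultsList = event.results;          // SpeechRecognitionResultList
  const firstResult = resultsList[0];         // SpeechRecognitionResult
  const alternative = firstResult[0];         // SpeechRecognitionAlternative
  const transcript = alternative.transcript;  // what it thinks it heard

  // Or, all in one slightly cryptic line:
  // const transcript = event.results[0][0].transcript;
};
```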

And if we put all these things together into a function, we have something like this. So, first, what we need to do is check for SpeechRecognition and webkitSpeechRecognition.

So it works in both of our browsers, or most of our browsers, I should say.

Then we've got our function, and we start our speech recognition.

I'm looking for a paragraph on the page to whack my text into, we start the process, and then onresult, we're going to get the transcript and load it in the page.

The other thing that's really important for this is the onclick function down the bottom of the page. You do need to trigger this with a click or interaction of some kind because it uses a microphone. So, for security reasons, the user has to say yes, you can listen to what I'm saying. Otherwise, it's not gonna work.
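Something along those lines, roughly (a sketch under the same assumptions, with hypothetical element selectors rather than her actual markup):

```js
// A sketch of the whole thing, assuming a <p> and a <button> on the page.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

if (SpeechRecognition) {
  const recognition = new SpeechRecognition();
  const output = document.querySelector('p');

  recognition.onresult = (event) => {
    // Drop whatever it heard into the paragraph.
    output.textContent = event.results[0][0].transcript;
  };

  // Must be triggered by a click (or similar interaction) so the user can
  // grant microphone access.
  document.querySelector('button').addEventListener('click', () => {
    recognition.start();
  });
}
```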

So at this point, I'm gonna show you some other demos.

I'm gonna use Firefox Nightly.

No, I don't do that.

No, restarting that would be bad.

Sorry, this is Marshmello.

He's a dragon.

He breathes fire, obviously, 'cause he's a dragon. And what we're going to do is we're gonna say a word, and hopefully, he is going to breathe fire with that. I have had a bit of trouble with this one and I think it's the acoustics in the room. So, fingers crossed it works.

Fire.

Aaahh damn it.

Let's try again, come on Marshmello, don't let me down. Fire.

Yay.

(audience claps) Cool, cool, cool.

So what is fun about this? And what I like about it is that.

So for starters, for those that don't know me, I love variable fonts.

So this uses a variable font to make animated text. Check out variable fonts, they're amazing.

But what I also think is really cool about this is that, whether you're making games or branding sites, or things for advertising, you can start to create these kinds of interactive experiences that take something that the user is providing and represent that on the page. And you can do so in really interesting and creative ways.

What is the next step? I'm gonna jump over into Chrome 'cause I like to be fair and switch between my browsers. Very important, all browsers are good.

Much like dogs.

So this one, I'm gonna do the demo and then I'm gonna explain what's going on, 'cause it's a little bit of a failed experiment, but I want to talk about it anyway, because of it being a failure.

I love my fluffy puppy Jello.

Okay, so you might be wondering, why is this a failed experiment, Mandy?

So what I was trying to do with this was use a variable font that had a width axis, so I could change the width of the text.

And I was trying to match it to the volume of my input. So that, depending on how loud I was, that would determine how heavy the text was.

And it actually didn't work too badly this time. So you can see that "my fluffy" is heavier than "puppy". And that's because the input that it got on "puppy" was probably slightly quieter than what it had on "my" and "fluffy".

Unfortunately, linking speech recognition and the Web Audio API is not very easy.

When it comes to detecting speech, there are no timestamps, and it changes constantly as it's detecting.

So matching the things together can be quite difficult. I haven't found an effective way to do this. So this is really hacky.

I do kind of love this as a concept.

If you think about speech-to-text interfaces at the moment, when you speak and it turns it into text, you lose all of the meaning and the tone and the emotion behind what someone is saying. And I love the idea that hopefully in the future, we can use variable fonts and other CSS features with audio and speech recognition to represent these text interfaces in more meaningful ways. One thing that you can do effectively is detect for certain words.

So this is kind of like how Facebook detects for things like congratulations when you say happy birthday to someone or whatever.

So what I've done here is I've looked for the word love and the word Jello, and I've styled them differently in my CSS. So while we can't automatically detect volume or pitch or anything like that with Web Audio at the moment, we can look for specific words.
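A rough sketch of that idea might look like this; the word list, class name and selector are just assumptions for illustration:

```js
// Sketch: wrap certain detected words in a span so CSS can style them.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
const specialWords = ['love', 'jello']; // words to highlight (assumption)

recognition.onresult = (event) => {
  const transcript = event.results[0][0].transcript;

  const html = transcript
    .split(' ')
    .map((word) =>
      specialWords.includes(word.toLowerCase())
        ? `<span class="special">${word}</span>` // styled differently in CSS
        : word
    )
    .join(' ');

  document.querySelector('p').innerHTML = html;
};
```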

And I think in the future, we have the opportunity to use things like machine learning, combined with speech recognition, to try and represent what we actually want to express in these interfaces in a much more effective way. And this is important for things like accessibility, as well as just general usage.

Because not everyone interacts with a page the same way that we do; not everyone has a mouse or a keyboard. And I think allowing people to converse in a more emotionally represented way is really valuable. And speaking of interacting with websites.

Let's go back to Nightly.

And look at this demo.

This is Mello.

You may have noticed a slight theme in my naming. They may all rhyme with Jello.

That may or may not be intentional.

So I love dogs in case you didn't already know. And what I wanted to do was make something that listened to commands and I thought what listens to commands? I know dogs do.

So what we're gonna do is say some things to Mello and see if he listens to me, unlike my real dog. Where's the ball? Aahh damn it.

(audience laughs) Let's try again? So disobedient.

Where's the ball? (mumbles) Nope, not saying it.

See, this is the thing about speech recognition, right? I was talking to someone earlier, Australian accents. Not super great.

It's really made for Americans, I'll try again. Where's the ball? Yes, I guess it worked this time.

So what I'm doing is detecting for a specific phrase. Now, what I could do is have variations on this. Because I say the words slightly differently and it interprets my accent slightly differently, I could have that phrase in multiple variations, detect for something that's close enough, and then I could make him do a thing.
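In code, that might look something like this; the command list and the fetchTheBall function are hypothetical, just to show the shape of it:

```js
// Sketch: accept a few "close enough" variations of the command.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();

const commands = ["where's the ball", 'wheres the ball', 'where is the ball'];

recognition.onresult = (event) => {
  const phrase = event.results[0][0].transcript.trim().toLowerCase();

  if (commands.some((command) => phrase.includes(command))) {
    fetchTheBall(); // hypothetical function that kicks off the animation
  }
};
```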

And you're probably wondering, why didn't you make him chase it? Because I am a developer, not a designer, and I had to work with the SVGs that I had on hand. But if anyone can make a running dog SVG, please hit me up, that'd be amazing.

So he was a very good boy.

So let's try telling him he's a good boy.

Who's a good boy? (audience claps and laughs) Thank you, Mello, appreciate that.

These are all my backup videos in case the internet didn't work.

So there are a few kinds of things that you can do with this and with the...

I should go back to Mello. With this kind of stuff, right, this is a silly, fun interactive thing.

This could be used in a game or puzzles or anything like that.

But if you think about it more broadly, there are a lot of interfaces where being able to speak to them to do things would be kind of nifty.

At home, I watch a lot of Netflix on my computer. And I spend so much time watching Netflix that that little pop up thing comes up and it's like, would you like to continue? It'd be great if I could just go.

Yes, of course I do.

I know, I've been here for eight hours but that was a really good bit, instead of like having to find my keyboard to go like, oh, yeah, okay.

Thanks Netflix for caring about my health and wellbeing. You can also pause things if you're moving around the house. I do a lot of ironing while I watch anime, so it's really important that I don't have to use my hands.

One of the designers at my work really wanted to create a search interface that uses speech. So you could just yell at the news and it would give you back what you wanted, rather than us giving it to you.

And I thought that was a really cool idea.

So there are lots of different ways that you can use voice to interact with your websites.

It doesn't just have to be for fun things like games; it can be a really useful way of controlling interfaces and experiences for people.

So, I wanted to mention Andre.

He's a speech and AI engineer at Mozilla.

I was talking to him recently, I met him at a conference.

And he happened to be talking about the Web Speech API.

And we had a really good chat.

And he shared this wiki page that he put together about speech recognition in Firefox.

And it's a really, really good read, so I highly recommend you check it out.

I will post the links later.

So don't worry too much if you miss it.

If you intend to use this and you have questions, he's also totally open to people chatting to him about it. So I would absolutely hit him up.

But there is one thing that he talked to me about that I wanted to share with everyone.

So the way that the Web Speech API works at the moment in Chrome is that it goes from the front end all the way up to Google, with this speech to text provider which Google provides. Google records this and keeps it and uses it to improve their speech to text algorithm. So when you use Chrome with speech recognition, it will have that data that you've provided it and will know that it's come from you.

In Firefox, they recognised that Google's speech-to-text provider is much better than theirs.

So at the moment, they're using Google by default. The difference with Firefox versus Chrome is that, in Firefox, the Web Speech API goes to the back end on Mozilla's AWS servers, where they've got an S3 bucket, and you hit their speech proxy from there.

It takes like a timestamp and the information from the API that you've requested. And it uses Mozilla's IP address to send that to Google. They pay extra so that Google doesn't keep it. You have to pay extra for that.

Thanks Google that's awesome.

But that will get sent off to Google, and then they return the result. So here, if you use Firefox, Google doesn't know who you are, they don't have your IP address, and they don't have the data that you've sent either. So if you're worried about privacy, you can use Firefox Nightly at the moment with this. Currently, it's behind a flag, but very, very, very imminently that flag will be removed. It was supposed to be this week, but it didn't quite make it, sadly.

But it will be happening very soon.

An alternative is using Mozilla's DeepSpeech speech-to-text provider.

DeepSpeech is their own version.

It's not as good as Google's, admittedly; the next version will be better, and it has multiple language support, not as much as Google's does, but it's respectable. And that wiki page actually tells you how you can change it to point to DeepSpeech instead of Google's service as well.

So one more thing that I wanted to mention, before we move on is that, while you need the internet for both Chrome and Firefox at the moment, one of the things that Mozilla is looking into doing is allowing people in future to download the language packs so that you can use it offline as well.

It's not available at the moment, but it is a feature that they really want to provide. So that you can use this stuff offline or in environments where perhaps you don't have internet as freely, maybe like at conferences when you're doing demos.

So with that, I want to finish up with speech and just say that audio and speech, it allows us to create different kinds of interactions on our pages and potentially create more accessible experiences for everybody.

It allows us to provide different kinds of input to control our interfaces, and maybe be a part of the story or the experience that we're creating as well.

So with that, I want to move on to orientation sensors. How many people here are familiar with the device orientation API? Okay, cool, so a few people.

It's actually been around for quite some time, so the support's pretty good. One caveat: iOS Safari, on iOS 13, put this behind an interaction. So it used to just work, and then people were doing rubbish things with it and being a pain, mostly ad tech, and Apple went, nope, you can't have that anymore.

So now you have to click in order to make this work on iOS 13 which is a shame, but it still works. It obviously depends on if your device has an accelerometer or gyroscope and that is present in a lot of different devices.

Mobile phones are really good example.

And all it does is determine the orientation of the device. So the most common use case that you're probably familiar with is when your screen rotates to the correct way up. I hate that feature, but it is a use of the device orientation via the accelerometer and gyroscope.

So in order to use this, we're gonna need a few things.

So first, we need the device orientation event. And what this does is provide information on the physical orientation of the device that we're using. And then we can access three properties attached to this event; there are actually four, but I'm only gonna talk about three today just to simplify things.

So they're known as alpha, beta, and gamma. Really weird names but that's what we've got. So that's unfortunate.

Alpha, beta and gamma represent different numbers depending on the device orientation.

So this is alpha and basically, it's the direction the device is facing, and it's kind of like rotation on a flat surface. So it's like going like this.

I don't know how best to explain that, it's kind of like having a level and then rotating it around and that goes from zero to 360. And then we have beta, which is the front to back motion. So it's like going like this with your phone and that ranges from minus 180 to 180.

It's my caticorn.

'Cause why not.

And then there's gamma, which is left to right. So it's like tilting the phone left to right. And that ranges from minus 90 to 90.

I mostly use gamma in my demos.

As far as code goes, what we need to do is check if DeviceOrientationEvent exists, just to make sure that we have access to it before we try and do anything.

And then once we've got that, we can add an event listener to the window and pass a function in that's going to execute some code. And it's really, really straightforward to use. So we can just check the gamma event value, and you can either take a constant reading, or you can check whether it's below or above a certain range, and then you can do something.
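The basic pattern, sketched out (the threshold here is just an arbitrary example value):

```js
// Minimal sketch: listen for orientation changes and read the gamma value.
if (window.DeviceOrientationEvent) {
  window.addEventListener('deviceorientation', (event) => {
    const gamma = event.gamma; // left/right tilt, roughly -90 to 90

    if (gamma > 30) {
      // Tilted to the right past an arbitrary threshold: do something here.
    }
  });
}
```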

So this example here, this uses a constant reading to affect the slant of the text.

Again, this is a variable font.

And then once it reaches minus 50, which is actually the exact code that I used to make it, it will slam into the side of my phone.

I don't know if anyone's noticed but my phone is cracked in this picture.

It was not intentional.

I wasn't making a joke, but every time I see this now it's like a reminder how clumsy I am 'cause it's way worse now than it is there.

(audience laughs) Sad.
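For the curious, the mapping in that slant demo might look roughly like this; the selector, the class name and the way the tilt feeds the slant axis are my assumptions, not her exact code:

```js
// Rough sketch of the slant demo, assuming a variable font with a slant
// (slnt) axis. The selector and class name are assumptions.
const text = document.querySelector('h1');

window.addEventListener('deviceorientation', (event) => {
  const gamma = event.gamma; // -90 to 90

  // Feed the tilt straight into the slant axis.
  text.style.fontVariationSettings = `'slnt' ${gamma}`;

  // Past the threshold, let the text "slam" into the side of the screen.
  text.classList.toggle('slammed', gamma <= -50);
});
```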

You can do other things.

Sorry about the blurriness of this I'm not very good at filming things.

I really liked this example 'cause it reminds me of a storybook, and I just kind of love the idea that you could have like an interactive page and like someone could be playing around with it like a kid or something.

And it's like moving, and they're like controlling it and are part of the story.

Or you could do something like this.

I like to make text effects.

And this text effect was one of the first ones I made, a split vertical layer.

And I thought, well, I could apply this to device orientation.

And I could move the background position of my gradient, depending on the device orientation.

So what this does is it passes the gamma value through and then changes the percentage of my background gradient and moves the gradient around as I rotate the device. I think this is a really nice one, because it's kind of simple, but it's something that I absolutely would have used when I worked in advertising on some of the branding executions that we did to try and create something that was maybe a little bit more interesting and engaging for our clients.
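A sketch of how that gradient trick might be wired up; the selector and the maths are assumptions, and the gradient itself would live in the CSS:

```js
// Sketch: map gamma onto background-position so a background gradient
// (clipped to the text in CSS) slides around as the phone rotates.
const text = document.querySelector('h1');

window.addEventListener('deviceorientation', (event) => {
  // Turn the -90..90 tilt range into a 0..100% position.
  const position = ((event.gamma + 90) / 180) * 100;
  text.style.backgroundPosition = `${position}% 50%`;
});
```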

And then this one is a new one which I made about a month ago and what it does is as it rotates, it increases the weight and adds a transform to each individual letter.

Now, this will work with any number of letters. It's not hard coded; the code's up on CodePen if you wanna have a play.
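The rough shape of it is probably something like this; the stagger maths here is invented just to show the per-letter idea, so check her CodePen for the real thing:

```js
// Sketch: split the text into spans, then adjust each letter's weight and
// transform by a slightly different amount as the device rotates.
const heading = document.querySelector('h1');

heading.innerHTML = heading.textContent
  .split('')
  .map((letter) => `<span>${letter}</span>`)
  .join('');

const letters = heading.querySelectorAll('span');

window.addEventListener('deviceorientation', (event) => {
  const gamma = event.gamma;

  letters.forEach((span, i) => {
    // Stagger the effect so each letter responds a little differently.
    const weight = Math.min(300 + Math.abs(gamma) * 5 + i * 20, 900);
    span.style.fontVariationSettings = `'wght' ${weight}`;
    span.style.transform = `translateY(${gamma * (i % 2 ? 0.2 : -0.2)}px)`;
  });
});
```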

But I really liked this one, it was one I always wanted to make. I think it's really fun.

And I kind of like, I like to see this on... like conference websites.

When I made it, I was like this belongs on a conference site, front end conference website to be specific 'cause I just feel like it fits.

So, orientation and motion.

It's used commonly in games and augmented reality. But it's also used on the web for updating map information like turn by turn navigation or subtle UI effects. So that's where you would most commonly see it. Apple does this a lot.

With parallax effects. And, excuse me, while people are making more use of this for the subtle UI effects, what I think is really exciting is that it offers us the opportunity to add an additional dimension to storytelling experiences. Right now, a lot of what we create is what I would deem as fairly static.

So aside from basic things like hovers or predetermined animations, a lot of stuff that we do is set up before people get to our websites.

There's no way for them to change the experience that they're having based on their input and the situation that they're in at that immediate time. Things like speech recognition, device orientation and ambient light allow us to create experiences that are more interactive, and more specific to the user in that specific moment. And that allows them to be a part of the story as well. And to me, that's really amazing, because as someone who works in the news, sometimes there are these really engaging, compelling stories. And if you wanna build like a custom website for that, you can do that, and you can determine how they experience it. Or you can provide sensors and inputs like these, to allow them to shape the experience that they have as well.

So on that note, let's move on to the ambient light sensor, just to like slowly chill out to the end of my talk.

So, an ambient light sensor is a photo detector that is used to sense the amount of ambient light that's present in the room.

This part of my talk is always fun for me because my demos change depending on the light present in the room, so it's always a little bit of a risk. I did say I wasn't gonna talk about browser support, but I am gonna mention it about this one specifically because there's a lot going on with those icons. So technically, according to the spec it's supported in Firefox, behind the flag, I have found this not to be the case.

There were some security issues flagged around this, which I'll talk about in a second.

So I think what has happened is Mozilla has blocked it from working while those security issues are worked on. I did try and confirm this before I came here, but I couldn't get a response. And then in Chrome it's currently behind a flag; however, much like speech recognition in Firefox, very, very, very soon it will be released in Chrome fully without the flag.

They have expressed an intent to ship after fixing the security issues.

So this will be landing in Chrome without flags very, very soon.

Where you might see this kind of ambient light feature working and use most frequently is on your phone, where it dims if you're in a low light environment to save your eyes from being like, destroyed. So, that's a really practical use case that you get on your actual devices.

So before I get into some examples, I wanted to tell you a couple of things about the ambient light sensor, which I thought was really interesting that I got from Riju.

This is my buddy.

He is the tech lead at Intel working on the web platform. But he's also the Chromium code owner for modules like sensors and NFC.

So he's been working on the ambient light sensor and device orientation, coincidentally. This is his Twitter. If you have any questions, you can go ahead and hit him up, he's super nice and really informative.

But one of the things that he told me about I found really interesting, because when I first started looking into this, I hit up against a problem, and that was that when you search for ambient light on the docs it comes back with two APIs.

One of them is deprecated and will not work so you don't want that.

So this is the second attempt at the ambient light sensor in the browser. The first was in Firefox and this was before Chrome started doing anything with it. And Firefox did not like how the API was designed, they had some problems with it.

So, it triggered a redesign of the API.

The spec writers took this opportunity to broaden the scope of the API to consider things like Node.js as well.

So it allowed the API not just to reach browsers, but other JavaScript runtimes and contexts, like IoT and wearables.

So that means that the shape of the ambient light sensor API is the same for different devices.

So for example, when Fitbit created their sensors, they use the same shape in their SDK.

So for like device orientation and stuff as well. So it's kind of interesting to me that stuff that we're building for the web is also being used for other technology, like IoT and wearables. And that's really exciting from my perspective because it means that we can apply our knowledge to other experiences.

So the second thing that I wanted to address, which I briefly touched on before, was the security concerns.

And the reason that I want to mention this is because every single time I post a demo, somebody DMs me, or drops a comment on my Twitter going, "Oh, but what about the security concerns?" I'm like, dude, it's a fun demo.

(audience laughs) So I wanted to address it, to hopefully avoid that conversation coming up again and me having to explain it. It is a legit concern: top privacy and security experts flagged that the ambient light sensor actually can allow people to steal your data, which is, in my opinion, really impressive, because most people think that ambient light is using the camera, but that's actually not the case. The reason why it's still behind a flag in Chrome is because of the security issues.

Chrome went away and was like, okay, we'll fix this, and they worked with people to mitigate the known issues. I emphasise known, because I feel like if someone wants to steal your data, they'll figure out a way.

The main solution that they implemented was to add a frequency cap.

So at the moment, when I first created all of the demos I'm gonna show you, you would get a constant reading from the API in the browser.

It's now capped at 10 hertz, and readouts are rounded to the nearest 50 lux.

What this did was remove the accuracy, which means it's now very difficult, or impossible I hope, to steal any data using the ambient light sensor.

It might sound a little bit limiting because you're not getting like the accuracy that you would expect but I actually found zero impact to any of my demos.

I have not needed that kind of accuracy for anything that I've made.

If you do plan on using it and you need accuracy, you might hit up against that problem though. So before I jump into the demos, I'm quickly gonna show you how we use the code. The ambient light sensor is my favourite and, in my opinion, the easiest to use.

And it has an added bonus, which I'll get to in a sec. So, like the device orientation API, we need to create a new instance that's going to give us access to the sensor. We also need an onreading event handler.

This is not specifically part of the ambient light sensor API.

It's actually part of the sensor interface, which consists of a bunch of different properties and event handlers that a bunch of different sensors can use.

The benefit of this is that once you know how to use them, and you're familiar with them, it means that you'll be familiar with how to use a heap of other sensors that are available to us on the web as well. So onreading is an event handler that's called whenever a reading is taken, which I guess is in the name. You can determine the reading frequency; by default it's pretty regular, like a constant stream, but you can pass an optional value to the sensor constructor and that will determine the number of readings per second. We'll just use the default for today.
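Putting that together with the illuminance property, which is up next, a minimal sketch might look like this; the frequency value, the 50 lux threshold and the class name are all just assumptions:

```js
// Sketch of the ambient light sensor (behind a flag in Chrome at the time).
if ('AmbientLightSensor' in window) {
  // The frequency option sets readings per second; 10 is just an example.
  const sensor = new AmbientLightSensor({ frequency: 10 });

  sensor.onreading = () => {
    // illuminance is the light level in lux.
    if (sensor.illuminance < 50) {
      document.body.classList.add('dark');    // low light: switch the page
    } else {
      document.body.classList.remove('dark');
    }
  };

  sensor.start();
}
```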

The last thing you need is the illuminance property. This is the only property attached to the ambient light sensor, and it returns the light level as a lux value. You don't really need to know that it's lux, it just returns a number, but that's the unit of measurement. So we can check that in some JavaScript: we create a sensor, we have onreading, and then it depends on how you want to use it. You can either take a constant reading, or you can detect whether it's dark or light at a certain level; all my demos keep it pretty low so that you can see the extremes. And then you can change stuff on the page accordingly. So let me jump over into Chrome. Oh, this is my speech recognition demo.

I do love my dog.

We'll just stop that from listening; I should have put a stop button on there.

So this is my moon.

What should we call him? We'll call him Mello too 'cause it's like Mello moon. I got my torch on my phone.

Just open my thingy here that.

And then I put my thing up to the light.

And depending on, oh, thank you, the light in the room will determine what experience you get.

And this is really cool from my perspective 'cause I love the idea that somebody goes to my website and they're looking around and they're like, this is cool. And then later they come back and it's dark. And they're like, oh, this is a different website. This is so amazing.

It's like a bit delightful and there's kind of like a journey there, which I think is really, really nice.

But if this is not practical enough for you. (audience laughs) I'll give you another example.

Now you're probably gonna be like, "Mandy, this is impractical," but I'll get that, variable fonts person.

This one detects the light in the room.

Oh, hang on, what am I doing? And depending on the light, it will change the weight of the text.

So the reason this is cool, right? I like this font for this demo, because it's like really extreme and I think it demos it really awesomely.

But I should pay attention to what I'm doing, 'cause that was... Basically, what this means is that when you're creating interfaces, you can change the weight or the width, or other axes of a font, so that the text on your page responds to the environment that the user is in.

So if you're in a low light environment, and you have a critical interface that someone's walking around with on their tablet, and they're going from a dark room to outside, you could change the weight or things like that so that it's easier for them to read.

It also means that if somebody is reading your website in bed at night, in the dark because somebody is asleep next to them, you can adjust that accordingly as well.
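A minimal sketch of that weight-from-light idea, assuming a variable font with a weight axis; the thresholds, values and selector are all guesses:

```js
// Sketch: make text heavier in low light so it stays readable.
const paragraph = document.querySelector('p');

if ('AmbientLightSensor' in window) {
  const sensor = new AmbientLightSensor();

  sensor.onreading = () => {
    // Darker room, heavier text; brighter room, lighter text.
    const weight = sensor.illuminance < 50 ? 700 : 400;
    paragraph.style.fontVariationSettings = `'wght' ${weight}`;
  };

  sensor.start();
}
```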

So that means that we can start to create more accessible and more usable interfaces to make our text more readable and more legible.

And we can do this now. I mean, granted, it's only gonna work if you've got the flag enabled. But once Chrome enables that flag, this will work for a lot of people out of the box. And I think this is a really great way to start customising our interfaces to our users' unique experience.

But, I mean, this is me, so I like to make fun demos.

This font is called Bloom GX.

It's by Typearture, and the author who makes this font makes the most incredible variable fonts you have ever seen in your life.

You should go and check it out on Twitter, but this one blooms.

What am I doing? Here we go.

Stand in front of the thing, Mandy.

So the amount of light in the room will determine how bloomy this font is.

And, again, I like these from a storytelling perspective, because I really want to build a website where you're telling a story and you're in a forest, and you've got all these beautiful graphics and stuff. And it's like, oh, really nice in the day, and there's birds and water running and it's beautiful, right? And then you go back at night and it's dark. And there's like creepy owl eyes there, and maybe some fog starts appearing, and maybe, like, a wolf howls in the background or something. I just love this idea that we can create different experiences and change the way that people experience the stuff that we put online based on their specific experience right now. Let's see where we're at, okay, cool.

So I guess before we finish up, and before I show you my combination demos, what I want to say is don't be limited by the things that we can already do.

If we focused on the things that you can do right now, we would never have innovation or any new and exciting and interesting things, it's really important to remember that the web is still a really, really young place. We're not very mature.

And I don't mean that as a criticism.

I just mean it's like what 20, 30 years old, like that's nothing.

There's so much for us to do.

And there's so much for us to create that this is an opportunity for us to experiment with these interfaces and these technologies and the stuff that you're gonna hear about for the rest of the day, and make interesting, useful and cool things, specifically fun things. So this one uses the ambient light sensor and device orientation to make a secret message. I'm not gonna tell you what it is; you had to have paid attention.
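Combining the two is mostly just layering the earlier patterns on top of each other; a rough sketch, with made-up thresholds and a hypothetical .secret element:

```js
// Sketch: only reveal the message when the room is dark enough AND the
// device is tilted far enough. All the values here are guesses.
const message = document.querySelector('.secret');
let darkEnough = false;
let tiltedEnough = false;

function reveal() {
  message.classList.toggle('visible', darkEnough && tiltedEnough);
}

if ('AmbientLightSensor' in window) {
  const sensor = new AmbientLightSensor();
  sensor.onreading = () => {
    darkEnough = sensor.illuminance < 20;
    reveal();
  };
  sensor.start();
}

window.addEventListener('deviceorientation', (event) => {
  tiltedEnough = event.gamma > 45;
  reveal();
});
```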

But, this is the message I gave my husband one day. This one is very popular with primary school girls; they like to change the text and then pass it to their friends and then, like, reveal a message.

And at the time, I didn't think that they would like it but in hindsight, I kind of had flashbacks to when I was in primary school with my friends.

I was like, yeah, I totally made that for like eight year old Mandy. (audience laughs) So let's jump over into Chrome.

So this is Smello.

(audience laughs) He is a grumpy wizard.

He will only do magic if it's dark enough; if it's not dark enough, he's like, no, it's not enough.

I'm not gonna do my magic, 'cause he's a grumpy wizard.

So luckily for us, it is dark enough in here. Thank you lighting people.

I'm gonna try and beg him to do a spell and I guess we'll see if it works.

This is the other one I've had a bit of trouble with so let's give it a whirl.

Please cast your spell.

Ahh no, it's not working.

Let's try again.

I had a bit of trouble with this one not really sure why it's breaking.

Please cast your spell.

No, ahh, it's only getting the last bit.

I'll try one more time and if it doesn't work, we'll use god mode and you can see the effect anyway, just for funsies.

Please cast your spell.

No, well, anyway, if it worked, it would have done this, which would have been cool. Actually, do you know what? Here's one I prepared earlier.

This uses a Harry Potter spell instead, 'cause why not? So this is my Batman torch, and we'll just use the Harry Potter spell for making stuff appear in the sky. Flagrate, right. Yaay, thanks Smello for performing my spell. (audience claps) So this last one before I finish up, 'cause I'm running out of time.

This one uses device orientation, ambient light and speech recognition.

But demoing that would be super duper difficult. So, what we're gonna do is show you the device orientation bit and then I'm gonna show you the other stuff in the browser so that I can prove to you that it works.

So this is my UFO.

And as you rotate it, it'll move.

It does need to be over the cat in order for this demo to work. It's the caticorn again, I snuck it in again.

So let's hopefully see if it works.

We'll just stop that one from doing it.

So, what this requires is that there's enough light for the beam to appear.

Cool.

Now, I am working on making this work with keyboard controls as well, so that it can work everywhere and you don't need to have device orientation, but I haven't got around to doing that yet. So we're just going to pretend that I am awesome and got this over the cat with my device orientation. This is the real tricky part, 'cause now I have to speak and hold the light there at the same time while looking at that monitor.

So fingers crossed it works. Commence abduction.

(audience laughs and claps) So that went really well.

So I'm gonna stop my demo there but I will say if it doesn't work and like you don't have the beam over the cat, it calls you a fool.

And says, fool you missed.

So, again, like this is a really, really fun game example. But I feel like there are a lot of ways that we can combine these interfaces to do different things and allow people to input and control our web experiences in our apps in a bunch of different ways.

So I hope you enjoyed abducting a cat in the web, this one is presently not available to play with 'cause the code is a mess, and I was really embarrassed by it.

But maybe if you come and see me, if you want to have a look at it, we can go through it. So on that note, thank you very much.

My name is Mandy.

These are the URLs. I will also tweet out my slides and a bunch of resources you can check out, and some of the demos are up on CodePen @mandymichael if you want to have a look.

Thanks very much.

(audience claps) (upbeat music)