Machine learning for front-end developers

Machine learning can have some pretty complicated concepts to grasp if you’re not a data scientist. However, recent developments in tooling make it more and more accessible for developers and people with little or no experience.

One of these advancements is the ability to now train and run machine learning algorithms and models in the browser, opening this world to front-end developers to learn and experiment.

In this presentation, we will talk about the different applications, possibilities, tools and resources, as well as show a few examples and demos, so you can get started building your own experiments using machine learning in JavaScript.

Charlie Gerard

Charlie points out she is not an expert in ML, but notes that if she can learn enough to build demos just by tinkering in her own time… you can learn it too!

What is ML? Using statistical techniques to train code. For example a spam filter in the old style would be an epic IF statement; but an ML solution takes a model trained with example data and tests new data against it.
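The contrast can be sketched roughly like this – the `model` here is a fake stub, just to show the shape of the two approaches; a real one would be trained on example emails:

```javascript
// The old way: an ever-growing blocklist checked with an if statement.
const blockedWords = ['viagra', 'v1agra', 'v!agra'];

function isSpamOldStyle(email) {
  const text = email.toLowerCase();
  return blockedWords.some((word) => text.includes(word));
}

// The ML way (sketch): a model trained beforehand on example spam
// scores mail it has never seen. This `model` is a hypothetical stub;
// a real one would be loaded from a trained model file.
const model = {
  predict: (email) => (/v.?agra/i.test(email) ? 0.95 : 0.05), // fake score
};

function isSpamML(email) {
  return model.predict(email) > 0.5;
}
```

The point of the second approach is that nobody has to keep the blocklist up to date – the model learns the patterns from the examples.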

Types of algorithms:

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning

Supervised: create a predictive model based on a set of features and labels (inputs and outputs).

  • Labels – the name of how you would classify each entry (‘dog’, ‘cat’)
  • Features – characteristics of the entries in your dataset (‘dogs are more friendly’, ‘cats have whiskers’)
  • Predictive model – mathematical representation of the outcome of your training (how would you determine if a new entry is a dog or a cat?)
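In code, a supervised dataset is just labelled entries – something like this (the feature values are made up for illustration):

```javascript
// Each entry pairs features (characteristics) with a label (the class
// we want the model to predict for new, unlabelled entries).
const trainingData = [
  { features: { hasWhiskers: true, friendliness: 0.3, size: 'small' }, label: 'cat' },
  { features: { hasWhiskers: false, friendliness: 0.9, size: 'big' }, label: 'dog' },
  // ...many more labelled entries
];

// Training feeds these pairs into an algorithm; the output is the
// predictive model – a mathematical summary of what makes a dog a dog.
```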

Basically it’s maths trying to understand the data you are giving it.

Example: predicting the price of a house. You can feed data like size, features, location into the model.

Classification vs regression – classification is dog vs cat, regression is a continuous response like the price of a house.

Unsupervised learning uses only the features (characteristics) – there are no labels.

(I was interrupted for a moment here so the notes are incomplete, sorry!)

Neural network – Charlie provided a link to a video, it’s a topic to itself.

Choosing the right algorithm:

  • What are you trying to do?
  • What is your dataset made of?

ML is more of an art than a science. You will really never stop at your first prediction, you will need to test and refine… it could be your data, it could be your params, it could be the wrong algorithm.

Applications of ML

  • Home automation – example of a system that watches a person closing their book; and turns out the light when they do
  • Art – an artwork created by a neural network recently sold for $400k, using code the sellers didn’t write… challenging the definition of art!
  • OCR – there are datasets available online. This is often how people get started, because you can be so certain of the results.
  • Accessibility
    • if speech interfaces are the future, what about people who can’t speak? People have created solutions to have devices like Alexa understand sign language.
    • Sarah Drasner has created a cool demo of a system to autogenerate ALT text.

So why do all this in JavaScript? Data scientists will object that there are better languages!

  • easier learning curve makes it more accessible to more people
  • big ecosystem, lots of modules you can use
  • why not?!

What you can do in the browser:

  • Import an existing pre-trained model
  • Retrain an imported model (transfer learning)
  • Define, train and run models entirely in browser

Tools – tensorflow.js, keras.js, ML5, MagentaJS (music)…
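Importing a pre-trained model is the quickest way in – for example with ml5 (MobileNet ships with it; the image element is hypothetical, and nothing runs until the function is called):

```javascript
// Classify an image with a pre-trained model – no training needed.
// Assumes the ml5.js script is loaded in the page.
async function classifyImage(imgElement) {
  // 1. Import the existing pre-trained model (MobileNet).
  const classifier = await ml5.imageClassifier('MobileNet');
  // 2. Run it on an image it has never seen.
  const results = await classifier.classify(imgElement);
  // results is a ranked list of guesses,
  // e.g. [{ label: 'tabby cat', confidence: 0.9 }, ...]
  return results[0];
}
```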

Demos!

The Willy Detector
Charlie had to create something that allowed people to draw on a canvas, which meant she needed to stop people drawing willies. She had to train it with hand-drawn willies, and also hand-drawn things that were not willies (from Google Quickdraw doodles). The model only has 40 examples of willies so it’s not very accurate. The drawings then needed to be resized to match the dataset; and you use 80% of your dataset for training and 20% for testing. Process – a CNN using Keras… training, then parameter tuning, prediction. The code in JS is surprisingly short.
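The ‘surprisingly short’ JS side looks roughly like this in TensorFlow.js – the model path and element id are hypothetical, and the function is only a sketch of the steps described:

```javascript
// Load the trained model and predict whether a drawing is a willy.
// Assumes TensorFlow.js is loaded as `tf` in the page.
async function predictDrawing() {
  const model = await tf.loadLayersModel('/models/doodle/model.json');
  const img = document.getElementById('drawing'); // image saved from the canvas

  const input = tf.tidy(() =>
    tf.browser.fromPixels(img)         // read the pixel data
      .resizeNearestNeighbor([28, 28]) // shrink 200x200 down to 28x28
      .toFloat()
      .expandDims(0)                   // shape [1, 28, 28, 3]: batch, size, RGB
  );

  const score = (await model.predict(input).data())[0];
  return score > 0.5 ? 'willy' : 'not willy';
}
```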

Teachable Keyboard
Charlie was able to quickly train the camera to recognise left, right and enter based on the position of her head in the field of view; which meant she could type on a keyboard just by moving her head.
This one works by importing a pre-trained model (MobileNet), uses a KNN classifier algorithm; you define the number of classes (how many gestures), feed the examples into the model, then you can run predict, which returns left/right/enter/neutral.
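A sketch of that flow using the MobileNet and KNN classifier packages for TensorFlow.js (the element id is hypothetical; the calling code and webcam wiring are omitted):

```javascript
// Transfer learning: MobileNet turns each webcam frame into features,
// and a KNN classifier is trained on those features per gesture.
// Assumes @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier
// are loaded in the page.
const classes = ['left', 'right', 'enter', 'neutral'];

async function setupTeachableKeyboard() {
  const net = await mobilenet.load();
  const classifier = knnClassifier.create();
  const webcam = document.getElementById('webcam');

  return {
    // Call while holding a pose: store this frame as an example of the class.
    addExample(classIndex) {
      classifier.addExample(net.infer(webcam, true), classIndex);
    },
    // Ask which head position the current frame looks most like.
    async predict() {
      const result = await classifier.predictClass(net.infer(webcam, true));
      return classes[result.label];
    },
  };
}
```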

Face Pong
(playing pong by jumping up and down in front of the webcam) Taken directly from ML5 example using mobilenet.

Limits

  • you need a lot of data, unless you have a big pre-trained model
  • takes a lot of time to train your own model
  • think about the mobile experience – this code will chug on lower-powered processors
  • liability – some models are black boxes, you don’t really know what’s happening and how they work. So if you get sued over it you won’t even know how or why it did what it did.
  • bias and ethics – algorithms aren’t biased, we are. If you feed sexist data into an algorithm, you’ll get sexist results. A simple example is a translation tool which assigned gender-stereotyped jobs to men and women, and negative emotions to women rather than men.

How to get started

  • Start with ML5.js as they explain things so well
  • Start small
  • Follow a basic example – MNIST handwritten character recognition
  • try a few different classification algorithms
  • try to replicate a project – doesn’t matter if it has been done before, just do it to learn

Tips

  • Avoid training your model in the browser unless you have a really critical reason to do so
  • Retrain your model regularly (or your predictions will become wrong)
  • Be careful of overfitting (this is when your model gets too used to the training set – if the accuracy hits 1 it is probably wrong)
  • Think about mobile
  • Check quality of training data (a lot of bad data will create a bad result – beware the quality of open source content, if you don’t read and clean your data you will get crazy results)

Ideas

  • games
  • accessibility tools
  • design tools (font pairing, colours…)
  • OCR
  • sentiment recognition (eg. filtering nasty comments)
  • fraud detection
  • …so much more!

@devdevcharlie

(upbeat electronic music) (applause) – Cool.

Thanks, thanks Chris.

Yeah, so thanks everybody for being here this morning, and for watching my talk around machine learning for front-end developers.

So on with that, I just got introduced so I’m not gonna spend too much time on that, but yeah, so I’m basically just a developer, but what I should mention is that I am not an expert in machine learning.

So I know that this is probably a bit of a bad thing to say at the start of a talk about machine learning, but it’s good for two reasons.

I have built demos on purpose for this talk, so it means that if I can do it outside of my work at home just by prototyping stuff, it means that you can too. And it also means that I have made sure, I have tried to make sure, that the way I explain things would make sense to everybody and that you don’t have to be a machine learning expert or a data scientist to understand how it works and how to get started. So the slides are online already if you can’t see anything, I tried to make sure that the colours would be all right for everybody.

But also I’ll share them later on Twitter, ’cause I haven’t really started, so you don’t know if there’s anything interesting for you in there, but the link is there.

So, I don’t really know how much you know about machine learning, so if some of you know a little bit already, you might be a bit bored in that first section of the talk, but to make sure that everybody understands the rest of the talk, I needed to make sure that we were on the same page.

So what is machine learning? If you kind of Google around, you’ll find a basic definition that is using statistical techniques to give computers the ability to learn from data without being explicitly programmed.

So a definition is cool, but I learn better, or I understand better, with examples.

So I have just a bit of like, pseudo-code of example. If you want to recognise spam emails about Viagra, for example.

So if you have a traditional way of programming, you might have on the left here either a massive if statement, or a switch case, where you have like a file with a lot of entries about words that you don’t want to receive email about, and you could check, every time you get an email, you could check in that list and be like, does it have the word Viagra or is it with an exclamation mark, or something.

The thing is, when people wanna really spam you, they get really creative, and they write words in a lot of different ways. So it means that all the time you would have to have maybe somebody dedicated to thinking about ways of writing Viagra to add it to a spreadsheet. Like that would take forever, and it’s not really a job that anybody would want to have.

But on the right, this is pseudo-code about how you could use machine learning for that. So you would import a model that you would have trained before with examples of a lot of spam emails. And you would load that model, give it an email it’s never seen before, and it would understand the patterns of how spam emails are usually created, and it would be able to predict if an email is spam or not. So this is obviously not real code, but just to understand a little bit about how it works, let’s just get started with an example.

So, when you want to build a model, you’re gonna need algorithms.

And there’s a lot of different types of algorithms. And to start very, let’s say the, like, top level. So you have the different types of machine learning algorithms.

You have supervised learning, unsupervised learning, and reinforcement learning.

Reinforcement learning is also called semi-supervised learning, but in general, you won’t really have to use this one for most of the problems that you would encounter, either with stuff that you wanna build on your own time, or at work.

And also, I wouldn’t really know how to explain it properly, so we’re just gonna skip it today, and we’re gonna focus a bit more on the ones that I can understand, probably. So supervised learning and unsupervised learning. Honestly, you don’t want me to start going into explaining things I don’t really understand, ’cause then I mix myself up, so it’s just like, no. So supervised and unsupervised.

So if we start with supervised, so, we create a predictive model based on a set of features and labels, inputs and outputs. So there’s, it’s a very brief definition, but there’s a few problems here, especially if you’ve never done anything in machine learning.

It’s like, what the fuck is a predictive model, like what you talking about? When I didn’t really know anything about machine learning and people were talking about models, I was like, is it like a 3D model? Like, I don’t understand.

So we’re gonna talk about that later, but also, what are features, and what are labels. So if we start with labels, it’s kind of like, the name of how you would classify entries in your data. So if you wanted to know if something is a dog or a cat, not with images, just with descriptions.

So the labels would be dog and cat.

And then you would have features.

So it would be the characteristics of these entries in your data set.

So for example, you have like, cats are usually small where dogs can be big. Cats have whiskers, and dogs don’t really have whiskers. Dogs are more friendly than cats.

So you have your labels, dog and cat, and you have your characteristics, what makes, usually, a dog and a cat.

And then, when you feed all of that into an algorithm, you’re gonna create a model that is gonna be the mathematical representation of the outcome of your training.

So, you have like a huge dataset of descriptions of dogs and cats, and your model is gonna be, how would I represent mathematically what is a new dog or a new cat? So, yeah, I don’t know if that makes sense, but I tried. So it’s, that’s the thing.

If you play more and more with machine learning it might make a bit more sense, but it’s basically just like math trying to understand data that you give it. So supervised learning, an example of how you would use it is to predict the price of a house.

You have a huge data around houses and you know how many rooms they have, or how many windows, or if they have a carpet or not, or if they have two levels.

And you would know their price, and you could feed all of that into a predictive model to give you the new price of a house.

But inside supervised learning you have different types of problems that you can solve. So you have usually classification and regression. So classification is when you want to predict a certain class.

So in our dog and cat, that’s the classes.

You have either it’s a dog, or it’s a cat, and that’s it. In regression, you want more of a continuous response where, like, the price of a house is not really a class that you’re gonna spit out at the end, it’s more gonna be a prediction of an average price. It doesn’t really tell you, either this house is only 500 thousand or two million, it’s gonna take all the data of different houses and give you kind of like a price that it finds in-between. So it fits around the line where the classification is really gonna be either one class or the other. And then you have unsupervised learning.

And it’s the ability to create predictions based only on a set of features, and this is the part that is important and that is different than supervised is that, you don’t have the labels, you only have the features. And what it is good for usually is predicting customer behaviour.

And the way you do that is that you solve clustering problems.

It’s where you try to group similar things together. So, for example, if you work for, I don’t know, Woolies or Coles, and you have data from all of your customers. You know at what time they come, you know what they buy, you know whether they buy more at certain times of the year, you know what kind of products they buy, so you can put them into groups and you can try to understand patterns in the way they behave, so you can predict maybe what you would advertise to them in the future based on people that were in a similar cluster. And then even in these types of algorithms and problems, well, you have different types of algorithms. And so this is not the complete list, at all. But different algorithms are better at different things. Naive Bayes and K-nearest neighbour solve classification problems, usually, not really regression.

A linear regression, well, is like a regression algorithm. But the two at the bottom, convolutional neural network and long short-term memory are more around figuring out images, and speech recognition, and things like that.

So you wouldn’t use a convolutional neural network if you just want to know if a fruit is a banana or an apple.

That wouldn’t really, that would be overkill. So I’m gonna touch very quickly on neural networks because I think you can find resources online that explain it a lot better, and I put a link there to one of the videos that I think does it a lot better, and it explains how neural networks are used to understand images.

But it’s basically a network of nodes interconnected to each other with a lot of layers in-between and you don’t really know what’s happening, how it figures out, but you give it input, and it gives you an output.

That was a very basic definition, I told you I would not go into it.

It’s like, it, I could, but I think that could be an entire talk by itself. Choosing the right algorithm.

So, I already told you it depends on what type of problem that you are trying to fix. If it’s only, if you’re trying to just figure out classes, or if you’re trying to predict a price, but it is also, what is your data set made of? If you have a lot of images, I told you, you probably want a convolutional neural network, rather than a Naive Bayes.

I don’t think Naive Bayes could even understand images, it’s too much.

But if you have text, then Naive Bayes is actually pretty good at that, and if you have speech, then a long short term memory. So, before you start doing that, you can kind of already go into different directions instead of looking at every possible algorithm. So it’s more of an art than a science for different reasons. As I said, sometimes, so a few algorithms can do the same thing, but you don’t really know before trying them which one is gonna be the right one. And then after that, sometimes it gives you a prediction that’s not really accurate, and you might not know straightaway why.

So you probably will never stop at your first prediction. You do a prediction, and then maybe the accuracy is only 0.6, it’s between zero and one usually, and you wanna get as close to one as possible. But it might be your data that’s not really good, it might be the amount of data that’s not enough. It might be the parameters that you pass into your algorithms that are not the right ones. There’s what we call activation functions, that I could not explain, but there’s different types of them, and depending on the one that you’re using, it improves the accuracy as well.

So, it is a thing where you have to be a bit patient, and you have to try different things and see what works. So different applications, I’m sure that you kind of already can think of ways that you can apply machine learning in your day to day life with things that you can buy that use machine learning, but I wanted to show a bit of, a few different examples to show a bit like how much it can do.

So in home automation, you have devices that you can buy like the Google Home or the Alexa that uses machine learning by just like, speech recognition, and you can get information and stuff like that. But you can build your own little devices as well. So this is an opensource project that uses Raspberry Pi and a camera, and you train it to recognise your own gestures, and you can then connect it to the rest of your appliances in your house.

So I’m talking about this one because you can run JavaScript on a Raspberry Pi, so you can probably do all of this in JavaScript. And you don’t have to buy something in store, you can make your own.

Art, I don’t know if you follow the news, I had to show this one.

Usually I love the mix of art and technology so I used to show other ones, but I think this one is interesting.

If you haven’t followed the news, this piece of art was generated by a neural network, so an opensource piece of code, and it went for sale for 400 thousand dollars. So when you think that oh, you know, art is useless, well you can make some extra cash just by running a neural network.

And I think there was a big controversy around that because the people who made the money, so the people who got, the collective who got the money actually got a piece of code written by somebody else. So then you had this huge debate around, well, if it’s opensource, you can reuse it, or also, well, if it’s art made by a computer, is it really art, is it worth 400 thousand dollars, and things like that.

But if you, not talking about money, playing around with art and technology is usually a way where you can get better at training neural networks, as well, because it generates a new image all the time. So in a way it is a unique piece of art, ’cause even if you retrain the (muffled) with the same dataset, it would come up with something else.

That’s a classic one.

Optical character recognition, and it’s actually the data set that is available online, where you have thousands and thousands of numbers, and you can, usually this is how people get started using neural networks, where, because you can, so you know, as a human, that these are zero, one, two, three, four, five, six, seven, eight, nine.

But a computer could see the first zero and would not understand what the second zero is because of the way, like, computers, I mean, brains are much better.

But you can understand how to build your first neural network by using this, and then maybe build your own, like, depending on who you work for, maybe you do need that kind of thing where you would scan pieces of paper that are handwritten, and maybe make sense of the text that’s in it. Oh, this is one of my favourites.

So this is a prototype.

So this developer, I totally forgot his name, he was thinking about the fact that we say that speech interfaces are the future, but what about the people who can’t speak? And he used the camera of his laptop to train a network to understand certain custom gestures to be able to communicate with Alexa with sign language. So I think it only can understand a few words at the moment, as I said, it is a prototype, but it is interesting to know that you can actually use your camera, a TensorFlow in the browser, and then connect that to Alexa.

And this is like one of an example that I think was really useful, it’s Sarah Drasner’s experiment around generating, maybe this GIF is a bit too fast.

But basically you give it an image, it’s to write alt text, or for alt tags.

And it generates.

So it understands what’s in the picture and it generates text that you can then add in your tags in your HTML.

So of course here there’s a UI, but you could just run that in the background and just generate stuff for you very easily. So that’s a way that you can use machine learning in the browser.

So as we’re talking about the browser, let’s dive a bit deeper now in machine learning in JavaScript.

So, why? I know that if you do know data scientists, they will tell you like why, JavaScript for machine learning, you’re crazy, you cannot think, because yes, Python is better, it is usually the default language.

It runs faster, it can handle a lot more data crunching kind of thing, but I like the fact that we can do it in JavaScript because it is accessible, and it is an easier learning curve.

I think that JavaScript is an awesome community, because there’s a lot of things that we can do. We can make music, we can play with hardware. There’s a lot of things that can be done, and learning JavaScript is usually a bit easier than other languages.

So if it only, even for the fact that it means that people like us who do web stuff can actually get started and understand machine learning concepts, like I only like it even just for that.

Yes of course you might not go to production with TensorFlow.js, but you can still get started. You can check that your concept works and then potentially move back to Python.

But there are things that do work in the browser that could go to production.

Second, there is a big ecosystem – there are more and more node modules and JavaScript libraries that allow you to do machine learning in the browser.

So there’s always pros and cons.

Like it’s the same when you develop anything, sometimes Python is good at something and sometimes JavaScript at another.

But you have people building modules that you can use now that start to support different algorithms. Sometimes TensorFlow just doesn’t have everything, but you have modules that can cover different types of algorithms.

And finally, why not? Like, if it’s possible, why not do it? You don’t have to do it, but it’s fun, and it’s there. So yeah, data scientists will probably hate me for saying that, but yeah, you know.

JavaScript! So, yes, why not.

So what can you do with machine learning in the browser? So you can do three things, which are basically the main things.

So you can import an existing pre-trained model. So you just import it and then you write your code, but you basically don’t have to do much.

You just import it the way it is, and you don’t add any extra data to it.

It’s just, oh, it’s the perfect model for me and it’s already there, and you just import it, you run it, and that’s fine.

You can re-train an imported model, that’s what we call transfer learning, and that’s pretty fast as well.

It’s like if you have a trained model that has most of the stuff that you want, but you wanna add a few extra pieces of data to it, you can do that in the browser.

And finally you can do the whole thing.

You can create your model, train it, and run it, entirely in your browser.

So, that is really cool.

So the tools that you can use, there’s a few different frameworks.

So of course TensorFlow.js, you have Keras.js which is a port of Keras in Python, which is something on top of TensorFlow, it’s like, do-do-do-do.

And that is really cool as well, I haven’t used it, but I think people were pretty excited about seeing it in JavaScript.

You have ml5, which, they started not that long ago. So TensorFlow can be a bit complicated, if it’s your first time, so I would advise to check ml5, they try to create quite a few examples, and they make the code look pretty easy.

And then the one in the middle with like an M, it’s Magenta.js, it’s more for music machine learning. And then you can leverage the platforms that you know – Google, Amazon and Microsoft have offerings around that, because they do have pre-trained models, everything’s in the cloud, and you can leverage that, as well, to start doing your experiments.

(whimpers) Okay, so, demos. So as I said, I did build stuff on purpose. And it should work, because I tried it, but I took a risk, and I’m not sure that you’re gonna like it, that’s why I’m nervous.

So the first thing, so I wanted to try and cover the three different things that you can do in the browser. So let’s start with, I think I said, using a pre-trained model.

So. (exhales)

I made a willy detector. (laughter from audience)

Okay, so I’m not talking about real pictures, I’m not talking about real pictures.

You can stay.

So, there’s a reason behind that.

Might not be a good one, but there’s a reason. There’s another thing I wanted to build that involved allowing people to draw on a canvas, and display that image somewhere else.

And the thing is, I know that if I ask people, please don’t draw willies, they will.

Because, I’ll tell you later, like, 100%, people will. So I was like, okay, how do I just prevent them from drawing willies? And I was thinking well, you know, a convolutional network can understand images, and if you draw on a canvas you can download it as an image, okay, let’s try it. Okay, so I’m gonna show it, and then I’m gonna explain the process.

So when I tried it this morning, it didn’t work from here, but it worked locally. So, I’m gonna try.

So, if I draw, for example, a candle.

That’s not a willy, that’s a candle.

And it should tell me, not willy! Okay, so that’s good. (laughter from audience)

That’s good.

Wait, because now.

(grunts) I’m sorry, this is horrible.

(laughter from audience) Willy! Okay. (laughing)

(applause) But, of course, I totally rehearsed this example, because if I do stuff like that, it’s, oh, actually, oh, wait, so if I, what if I do that? Okay, see for example.

So there’s a reason why it misunderstood, and this is where, let’s go back into the, wait, boop, the steps of how I made this.

So, the first step in machine learning, (laughter from audience) I know, I’m gonna try and go fast, I don’t wanna make anybody uncomfortable.

So you have to do data collection.

And if you have tried the Google exper – I can’t believe I’m, okay.

So if you have seen the Google experiment called Quick Draw, they made it on purpose, so you were timed, and they were like oh, draw a blueberry, or a baguette, or a crocodile, and you had 10 seconds. And they actually used everybody’s drawing to create this massive dataset of 50 million doodles. So 50 million, right? The thing is, I have a job that does not involve drawing willies, so I only drew, I drew 40, but I had to draw 40 a few times because I fucked up a few times.

So I actually drew a few hundreds.

But in my dataset that I’m using in the model at the moment there’s only 40.

And the thing is, if I’m only using 40 willies, I have to use 40 of the other classes that I’m using. So I didn’t use the whole thing ’cause I didn’t wanna split the 50 million into 40, so I only used the candle.

That’s why, you know, I’m telling you, I’m an imposter. But it still works, so the accuracy of that model, on willy or candle, is 0.9, so it works right, but then there was an issue when I tried to add other classes, the accuracy was going down, because I do think that I need way more samples than 40. So when you, (groans).

After data processing, I mean, after data collection, you need to do data processing.

So when I draw a willy on my screen, the size of the canvas is 200 by 200.

But in the Quick Draw dataset, the size of the doodles are 28 by 28, because the more pixels that you have, the more inputs in your neural network you’re gonna have, and it’s gonna last forever, and I don’t have that much time.

So I just used, the model in the background, I wrote it in Python, but I’m not gonna show you the code ’cause we wanna talk about JavaScript.

But I basically wrote a few things using OpenCV to crop the image and resize it down to 28 by 28 so I could just use it with the Quick Draw dataset. After that you have to do splitting.

Oh, okay, shit, I was using baseballs before. Okay, well I should have replaced that with candles, but. So you have to do a splitting.

So when you have your dataset, 100% of it, you have to use 80% for training and 20% for testing. So with the 80% you’re gonna load the images, feed them into your algorithm, and it’s gonna generate a model, and then what you’re gonna do is you’re gonna use the 20% of the data that is already labelled, you already know what is a candle and what is a willy and whatever, and you’re gonna test your model against that to check the accuracy.

And then, because if, that allows you to kind of know in advance the accuracy of it, because you already know your data.
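The split described here could be sketched like this (the quick random sort is fine for a demo, though it isn’t a perfectly uniform shuffle):

```javascript
// Shuffle the labelled dataset, train on 80%, hold 20% back for testing.
function splitDataset(data, trainRatio = 0.8) {
  const shuffled = [...data].sort(() => Math.random() - 0.5);
  const cut = Math.floor(shuffled.length * trainRatio);
  return {
    training: shuffled.slice(0, cut), // fed into the algorithm
    testing: shuffled.slice(cut),     // already labelled, used to check accuracy
  };
}
```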

So once you do that, there’s a few extra steps. You have to choose your algorithm, and as I was working with images I used a convolutional neural network using, so Keras in Python, I didn’t use TensorFlow. And then you have to train it, and you have to do parameter tuning, what we even call I think hyper-parameter tuning. That was what I was talking to you about where you have to change your, maybe your number of layers, your activation function, and things that I can’t explain.

And then you run your prediction.

And these kind of go in loop, if your prediction is not good enough then you change the parameters, and you train and you train and train.

So this is cool, but in the end you have a model. But in JavaScript, this is basically all you need. So I didn’t add the code for a canvas, so this code is more, if I was downloading the image that I created, the candle, so I download it, and then I display it on the page.

And you know, I’m asking, willy or not willy. So if we go there, so I, this is my model that I created.

So I can just give, like, the path.

And I load it, so this is with TensorFlow.js, you load your model from the path that you gave, and you just refer to the div that contains the image that you wanna predict.

And then you have these three different functions here. TensorFlow is gonna look at the data from the image div that you gave it, and then it’s gonna resize it to 28 by 28.

So the one that, like the new one that I drew, so that it’s still 200 by 200 at that time, and it’s gonna resize it down to 28 by 28 to fit in my model.

And the third line, I don’t remember.

But it has to be there.

And finally, in the end, you just predict from the model, and what’s inside the predict function is just the shape of the data that my model is expecting. So it’s not resizing, it’s just that it has to be, and I forgot why, I forgot what the first number is, but I know 28 by 28 is the size, and three is RGB, so the channels.

So it is expecting a certain shape of data. It could have been different if I changed my model, but I just didn’t.

And then you get your prediction label; you just run that and it should give you the label that you're expecting.

So usually it's either zero or one, and you know from your model whether zero means willy or candle. If you're just using an image in your browser, this is all you need, like really, all you need. Mine had a bit more because I had the canvas code, but that's it. So that was the first one.
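
The browser-side flow she describes can be sketched roughly like this with TensorFlow.js. The model path, the element id, and the `div(255)` normalisation step (a plausible candidate for the "third line" she couldn't remember) are assumptions for illustration, not her actual code:

```js
import * as tf from '@tensorflow/tfjs';

async function willyOrCandle() {
  // Load the model converted from Keras, from wherever you host it.
  const model = await tf.loadLayersModel('model/model.json');

  // The element displaying the downloaded 200x200 drawing.
  const image = document.getElementById('drawing');

  const input = tf.browser.fromPixels(image)  // read the pixels into a tensor
    .resizeNearestNeighbor([28, 28])          // shrink 200x200 down to 28x28
    .toFloat()
    .div(255)                                 // assumed: scale pixel values to 0..1
    .expandDims(0);                           // shape [1, 28, 28, 3]: batch, height, width, RGB

  // predict() gets data in exactly the shape the model was trained on.
  const scores = await model.predict(input).data();
  return scores[0] > 0.5 ? 'candle' : 'willy'; // which index means what depends on your training
}
```

The shape `[1, 28, 28, 3]` matches what she describes: the first number is the batch size, 28 by 28 is the image size, and three is the RGB channels.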

And then the second one that I built is not as stupid, it’s a teachable keyboard.

So this one is using a pre-trained model that I'm going to do transfer learning on.

So, I wanted to be able to write with my head. So.

I’m here, you see it okay? So if I start training, I finished that last night, so this, mm, okay, (squeaks) ooh, Jesus.

Okay, so there’s four classes, but you can add as many as you want.

I just wanted to be able to do right, left, enter, and neutral.

So I’m gonna go to the right, okay, and then I’m gonna go to the left.

And then I’m gonna go down.

And then I’m gonna do neutral.

So that's not many examples, but I can test them. Okay, neutral; and if I go down, and left, and right, that's pretty okay.

So then, ooh, okay.

So I can train on top of it but I’m just gonna stop there. And then if I go there, whee.

I can go down, yay.

And, well, I'm not writing a word, but I could have. Also the keyboard is awful and there's no space or enter key, but.

(laughter from audience) Well, I, again, I do have a job.

So basically, so that’s it for that demo.

So you might just be like, well, I don’t care because I can type.

But I'm getting more and more interested in assistive technology, and in leveraging new technologies to make experiences better for absolutely everybody.

So this is why: you probably won't use it at home, but maybe it's not about you. Maybe some people could actually use stuff like that, where in a few nights you can help somebody write something with their movements.

Of course it could be improved, but it means that in a few lines of code you can help someone write a message without a keyboard. There are people who just don't have arms; I've watched videos where they sit in a chair and the way they write is Morse code, tapping a sensor for the dots and dashes.

And if instead they could just use their camera, that's an application I think is a bit more useful than the drawings.

So the way this one works is a bit different. You import a model that is already pre-trained, MobileNet, which has been trained on a lot of images already. You import TensorFlow.js, and the algorithm picked for this one is KNN, a k-nearest-neighbour classifier.

I'm not exactly sure why, but I took that from one of the examples that they were showing. You define your number of classes; I have four because I have four gestures, but you could have six, you could have two, whatever. The image size, from their example, must be 227, and I wonder if that's because the input size of their model is 227, so the canvas of the video can't be bigger than that. And TOPK is 10; I don't know.
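
For what it's worth, the k in k-nearest neighbours (and very likely that TOPK parameter) is just how many of the closest stored examples get a vote. A toy version in plain JavaScript, with invented 2D features standing in for image activations:

```javascript
// Toy k-nearest-neighbour classifier, no TensorFlow needed.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

function knnPredict(examples, sample, k) {
  // Sort the stored examples by distance to the new sample...
  const neighbours = examples
    .map(({ features, label }) => ({ label, dist: euclidean(features, sample) }))
    .sort((a, b) => a.dist - b.dist)
    .slice(0, k); // ...and keep only the k closest (the "TOPK").

  // Majority vote among those neighbours' labels.
  const votes = {};
  for (const { label } of neighbours) votes[label] = (votes[label] || 0) + 1;
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}

// Invented 2D features standing in for MobileNet activations.
const gestures = [
  { features: [0, 0], label: 'left' },
  { features: [0, 1], label: 'left' },
  { features: [5, 5], label: 'right' },
  { features: [5, 6], label: 'right' },
];

console.log(knnPredict(gestures, [0.2, 0.4], 3)); // → 'left'
```

The real classifier does the same thing, except the "features" are activations that MobileNet produces from each video frame.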

So these are my classes here: right, left, down, neutral; you can write whatever you want. And then the steps; this is not all the code, but it's just to show you that it's actually not much more than that.

So you create an instance of your classifier, and you wait, you wait.

You load the MobileNet module, and you get the pixels from the video frame, again. Yeah, I don't know about that.

I mean, I know what it does, but I couldn't explain it; is "infer" even a verb? Anyway, I think you feed your new samples into the MobileNet module, and you add all of those examples to the classifier. I'll have to read the docs a bit more; I may be confused about this example.

But this is the code that you have to use.

And then you await for it to predict classes, and with my array of classes the index comes back between zero and three, so it can tell you if it's right, left, and so on.

And then once you're done, it disposes of the images. It does not save your camera feed into Google's MobileNet module.

It just removes the new samples.

So their model stays the same, but your additional classes disappear, along with all your frames.
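
Put together, the flow she walks through looks roughly like this with the published tfjs-models packages. The class ids and the k of 10 follow their example; the rest is a sketch from the packages' documented API, not her exact code:

```js
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classes = ['right', 'left', 'down', 'neutral'];
const classifier = knnClassifier.create();

async function setup(video) {
  const net = await mobilenet.load();

  // Training: "infer" pushes the frame through MobileNet and returns the
  // activation of an internal layer, which becomes one stored example.
  const addExample = (classId) => {
    const activation = net.infer(tf.browser.fromPixels(video), true);
    classifier.addExample(activation, classId);
  };

  // Prediction: compare a new frame's activation against the stored ones.
  const predict = async () => {
    const activation = net.infer(tf.browser.fromPixels(video), true);
    const result = await classifier.predictClass(activation, 10); // k = 10, the TOPK
    activation.dispose(); // clean up the tensor; nothing is sent to Google
    return classes[result.classIndex]; // index comes back between 0 and 3
  };

  return { addExample, predict };
}
```

This is what makes transfer learning cheap here: MobileNet never changes, and your handful of examples only live in the KNN classifier in your browser.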

Okay.

And then this is my third demo, and I made it last night, so I don’t know if it’s really gonna work.

(groans) Okay. So for this one I didn't use TensorFlow.js, I used ml5, because I wanted to see how much simpler their code is, and it actually is; I didn't have to touch anything. I basically plugged it into one of my old projects. I didn't make the game last night, I made it a couple of years ago: you can play Pong with browser windows.

So you have browser windows popping up as the paddles, and a browser window as the ball in the middle. And I'm gonna try and play that with my head. It's gonna train in the browser; it still uses a pre-trained model, but the extra training happens in the browser.

So yeah, so the part on the left is ugly because this is what I did last night.

So the model is loaded; it's using MobileNet as well. And the video is ready, that's cool, okay. The UI is really stolen from their example page, so it doesn't look great.

So I’m in the middle, and I’m gonna add samples, let’s say I’m gonna add 10.

Ah, I shouldn’t have done like that.

What if I, so I’m gonna be here, and I’m gonna do that. And then I’m gonna be here, and I’m gonna do that.

Okay, so we train. Oh, the font is not great, but as you can see it was training, and you want the loss to be as small as possible; that means it's being trained properly. And if I, yeah, that was expected, I forgot. All right, so…

Okay, so it’s popping up, and ooh, okay, yay, yah, ahh, I’m losing! There’s also a bug in my game, I don’t know which way I’m supposed to go.

Oh, it’s always inverted, right.

That was gonna be a problem.

So the bug in the game is that I think I fucked up somewhere, and as you go, (laughter from audience) as you go, it’s going faster.

And, ahh, see! Usually there's a sound, but the sound is not working anymore; I tried to fix it this morning. And I think the game stops when I'm at 15 points, but I won't go that far because I'm tired. No.

I don’t know what I’m doing! Okay.

So, see? See? It’s going faster, it’s like ahh, woo! Oh, actually, wait, I’m gonna lose, so let me just, should I lose? I’m gonna lose.

I don’t know what happens, I’ve forgotten what I did when I lose.

(breathing hard) All right, just, just, just I’m tired now.

But that was it, so.

(applause) What am I doing? Hello? Oh, okay. Yes, that’s a good sample.

So this one, I’m not gonna go very much in detail into it because it is exactly the piece of code that’s on ml5.js, but basically you have a few variables that you declare in ml5.

It's using P5.js, which is kind of based on Processing, and that makes all of this easier: you have a setup function that runs once, and a draw function that keeps running. So you load your MobileNet model, feed the video into it, and keep feeding it in the draw function, and you run a prediction. Honestly, I have to keep going because I don't have enough time; I did that last night, so I knew it would be a bit too much.
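
The setup/draw structure she mentions can be sketched roughly like this. The callback shapes follow ml5's older featureExtractor example from memory, and `moveWindow` stands in for her game code; treat all of it as an illustration, not her source:

```js
// ml5.js + p5.js sketch, loosely following their transfer-learning example.
let classifier, video;

function setup() {                 // p5: runs once
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  const featureExtractor = ml5.featureExtractor('MobileNet', () => console.log('Model loaded'));
  classifier = featureExtractor.classification(video, () => console.log('Video ready'));
}

function draw() {                  // p5: keeps running
  image(video, 0, 0);              // keep feeding frames to the canvas
}

// Wire these to buttons: collect samples for a label, then train.
function addSample(label) { classifier.addImage(label); }
function train() {
  classifier.train(loss => {
    if (loss == null) classify();  // loss goes null when training is done
    else console.log('loss:', loss);
  });
}

// After training, classify frame after frame and drive the game.
function classify() {
  classifier.classify((err, results) => {
    if (!err) moveWindow(results[0].label); // moveWindow is hypothetical game code
    classify();
  });
}
```

The point she's making holds either way: with ml5 you declare a few variables, add images, train, classify, and everything else is handled for you.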

Limits.

So: you need a large amount of data, except if you use a pre-trained model.

As I told you, depending on what you want to do, you might not find a model that is pre-trained, so you might have to generate your own data. And as you could see with the Quick Draw dataset, they have 50 million images.

So you might work in a company that has a lot of data about your customers or about things like that, and you might be able to use that easily to experiment. But otherwise if you are dealing with images, like, that’s a lot of them.

It takes a lot of time to train your own model. In the browser here it was fast because I used pre-trained models, but even in Python it can take a while: the more data you have, the longer it's gonna take. So you might not want to train it all in the browser.

So just be careful about that.

Think about the mobile experience.

If you're doing something in JavaScript on a website, people will run it on their mobile as well. So think about the type of experiments you want to run; this is where using a pre-trained model is useful, because it won't add any lag to the user's experience. But if you want to train something, really ask yourself: do I have to train my model in the browser? Liability; this one applies to machine learning in general. Neural networks especially: the models that you create are black boxes, and you don't really know what's happening inside. You don't really know how they predicted what they predicted.

So depending on what you want to predict, you want to make sure that you know why. Because if, for example, you use it for some kind of detection, and you falsely accuse somebody and they sue you, and you don't know why the algorithm flagged them, that's not great; you're gonna be sued anyway. So make sure that you cover yourself, a little bit.

And finally, bias and ethics, because I do think that that should be mentioned in every machine learning talk.

Because algorithms are not biased; we are. When you see articles online telling you this algorithm is sexist, it's not the algorithm. The algorithm is just a piece of maths.

As people, we are sexist and we are racist, and what we build will be as well.

The machine is not gonna magically save society. If you feed in data that is sexist, then your algorithm is gonna be sexist as well.

And this is why we shouldn’t push the responsibility on a piece of math or on a piece of code.

It should be on us to make sure of that: if we are biased as a society, then the algorithms are gonna be as well.

And one of the examples here, which is probably too small to read, is Google Translate. Some languages don't use gendered pronouns.

In Turkish, I don't know Turkish, but I believe certain professions and the like don't carry a gender; a job is just a job.

And in the English translation that Google Translate produced, it came out as "she is a cook", "he is an engineer".

And of course some are a little bit better, but then there was one: "he is hardworking, she is lazy".

I was like, are you serious! (laughter) But that's the thing: it's not the algorithm that is sexist, it's that Google Translate uses data it finds on the Internet, and that's stuff that we wrote. The topic of bias and ethics is much larger than that; there's stuff done with image recognition that is just ridiculous.

So it’s a really interesting topic that you should look into if you’re interested, I think I did a few links at the end.

But it’s definitely not to be forgotten, I think it’s something that we should definitely think about when we are building, no matter what type of algorithms we are writing. So how to get started.

I tried to put these steps in order. I would check ml5.js first because, as I said, they try to explain things pretty simply and they give you the piece of code, even if it might feel a little bit like magic.

So if you want to do something a bit more specific you might have to fall back to something like TensorFlow.js or Keras.js, but to get started and to understand the concepts, starting with ml5.js is pretty good. Start small: there is nothing wrong with just trying to recognise an orange versus a banana based on features like colour and shape; doing really small stuff to understand the concepts.

If you're more interested in images, you could follow the classic example of handwritten character recognition, which has a lot of tutorials around it, and you'll start to understand the basics of what a neural network is, how you deal with the layers, the weights between neurons, and so on. But I would also advise trying a few different classification algorithms.

Naive Bayes and k-nearest neighbour are ones that I've used quite a lot, and they can actually solve quite a lot of problems. You don't have to go to a neural network straightaway just because it sounds fancy.

And try to replicate a project.

A lot of the times when I talk to people and I try to motivate them to get started with stuff that they don’t know, they’re like, oh, but I don’t have a new idea, it’s been done before, it doesn’t matter if it’s been done before or not.

If you haven’t done it, then you don’t know what you have to learn from there, and then you can use that knowledge to create new things if you want later.

But there is nothing wrong at all with trying to replicate a project.

A few tips.

So, I would say avoid training your model in the browser if possible, because it's really slow; if you don't have a real reason to, maybe don't, at least for now. As the tooling improves, that may change. Don't forget to retrain your model regularly. It depends on what you're trying to build, but say you wanted to predict the price of apples in a store: you have data on how much they cost over the years and seasons, and that's cool. But events happen in the world that push the price up at a certain time, and if you don't feed that new data to your algorithm, your next predictions are gonna be wrong. So again, depending on what you're trying to predict, retrain your model regularly.

Be careful of what we call overfitting, and this is a term that you’ll get used to seeing if you start getting interested in that.

But basically overfitting is when your model gets too used to your training set.

When I said that with accuracy you try to get as close to one as possible: if you get exactly one, usually something is wrong.

Because the model knows your dataset too well, it's not able to predict new things correctly. You want to get close, like 0.9 or 0.95, but if you're overfitting, something is wrong with your algorithm.
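
A toy illustration of why a perfect score is suspicious: a 1-nearest-neighbour model always scores 100% on its own training set, because every training point's closest neighbour is itself, yet it can still fail on data it hasn't seen. The numbers here are invented:

```javascript
// Why an accuracy of exactly 1 on your own training set can be a red flag:
// a 1-nearest-neighbour model "memorises" the training data.
const nearest = (data, x) =>
  data.reduce((a, b) => (Math.abs(b.x - x) < Math.abs(a.x - x) ? b : a)).label;

// Toy rule: x < 5 is 'A', x >= 5 is 'B', with one mislabelled training point.
const trainSet = [
  { x: 1, label: 'A' }, { x: 2, label: 'A' },
  { x: 4.9, label: 'B' }, // noise: by the rule this should be 'A'
  { x: 7, label: 'B' }, { x: 9, label: 'B' },
];
const testSet = [{ x: 4.5, label: 'A' }, { x: 8, label: 'B' }];

const accuracy = (data) =>
  data.filter(d => nearest(trainSet, d.x) === d.label).length / data.length;

console.log(accuracy(trainSet)); // 1 — it "knows the dataset too much"
console.log(accuracy(testSet));  // 0.5 — it fails on data it hasn't seen
```

The gap between training accuracy and held-out accuracy is exactly what overfitting looks like, which is why you always evaluate on data the model hasn't trained on.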

Think about mobile, I mentioned it before, but I even forget it myself at some point.

Yes, if you make a website, people are gonna check it on mobile, so make sure it doesn't lag or ruin the experience. And check the quality of your training data. When I said you need a lot of data: a lot of shit data will make a shit algorithm. At some point I was prototyping something where we wanted to do sentiment analysis on pieces of text.

And we used an open-source dataset of tweets: you had the tweet and you had the sentiment. It was already labelled, so we knew it would be a supervised learning, classification problem. But what we didn't realise, because we didn't read the 14,000 rows of tweets, is that it was really terrible.

The way it was labelled was really bad. If somebody was tweeting about depression, the label would be "happy"; like, no, obviously that's not happy. The thing is, if you don't read your data and you don't clean it, then you will not get anywhere. It's not only about the algorithm, it's also about checking the data that you have. Now, a few different ideas, if you don't really know what to build.
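
A cheap sanity check like the one they skipped can even be automated before training. This sketch is purely illustrative: the keyword list, labels, and rows are invented, standing in for the 14,000-row tweet dataset:

```javascript
// Flag suspicious rows before training: a 'happy' label on a tweet full of
// obviously negative words deserves a human look.
const negativeWords = ['depressed', 'depression', 'awful', 'terrible', 'sad'];

function suspiciousRows(dataset) {
  return dataset.filter(({ text, label }) =>
    label === 'happy' &&
    negativeWords.some(w => text.toLowerCase().includes(w)));
}

// Invented rows standing in for the real tweet dataset.
const rows = [
  { text: 'Best day ever!', label: 'happy' },
  { text: 'Struggling with depression lately', label: 'happy' }, // mislabelled
  { text: 'This is awful', label: 'sad' },
];

console.log(suspiciousRows(rows)); // → only the mislabelled row
```

It won't catch everything, but even a crude filter like this surfaces the kind of mislabelling that would otherwise silently poison the model.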

So of course there are games, as I showed. Maybe not useful in themselves, but if you're in advertising, machine learning and games in the browser could definitely work for campaigns.

Accessibility tools, as I said: you can use a camera, or, since it's JavaScript, it can run on hardware, so with wearables you could make your own custom experiences or gestures, and allow interaction with the browser for people who can't type or can't move as easily as we can.

Design tools: I saw a few examples where they use machine learning for font pairings, so you don't have to do it yourself.

They ran through a lot of fonts and a lot of articles and learned what worked well together, and even how colours work together as well.

OCR, optical character recognition, which I mentioned a few times.

Sentiment analysis: if you're working with pieces of text, comments on a site for example, and you want to make sure that hateful comments are not displayed, that's a thing you can do. Fraud detection: people do that with pictures. I don't really know how it works, but I think when you submit a claim you usually have to attach pictures, and models can learn the patterns of fraudulent claims.

And the dot dot dot means there's a lot more, obviously. That was just where I stopped thinking of examples.

So very quickly, ’cause I have no idea how long I’ve talked for, resources.

So, as I told you, there's quite a big ecosystem: you have a lot of tools you can use that are written in JavaScript.

Not all of them are written in JavaScript; Jupyter Notebooks are at the end just because they're quite famous when you do Python. If you want to try Python, you don't have to install the whole stack locally; you can use a Jupyter notebook and run your algorithms in the browser.

So this is quite nice.

Useful links is just a few courses and a few sites where I found inspiration.

And there’s also the really good one around the Ethics and Governance of Artificial Intelligence, which is a series of videos from MIT that I found really fascinating.

They bring up problems that you wouldn't even think of, but then when you think about it, you go: oh, well, yeah, we definitely need to do something about ethics. And datasets.

What's really cool is that you can find open-source datasets that you can use to get started.

Object detection is a cool one as well, I haven’t played with it but I’ve seen a few experiments. And you have pre-trained models for TensorFlow.js that you can reuse, like I’ve reused one, but there’s a few others, as well.

So that was it for the day, I think I spoke really fast, I’m really thirsty.

But yes.

So, if you have any questions, I'll be around. I have to go back to work for an hour, but I'll be back. Otherwise I'm always on Twitter, devdevcharlie, if you have any questions or any feedback.

That’s the first time I gave that talk, so if you think I should never give it again, you can tell me.

Thank you very much.

(applause) (upbeat music)
