Artificial Intelligence 101

Every industry will be affected by AI, machine learning and voice interfaces in the coming years. Terms like “neural networks” and “deep learning” often sound complicated and sci-fi, but never fear! There are platforms and technologies out there today that enable you to do a whole lot out of the box, and to build on top of that.

Patrick will give you a crash course in AI — covering common terms, how you can get started with existing services and APIs, and how you can take all of this and apply it to your own business or idea.

(bouncy, playful music) (applause) – Thank you everybody.

So, I’m gonna put my speaker notes on ’cause that’s important, so I don’t go crazy over time ’cause I got a lot of stuff I wanna say and the entire area of artificial intelligence could seriously take up an entire conference, so someone should get on that, John.

So yeah, John explained generally what I’ll be speaking about, so I can go to the next slide already.

Basically, artificial intelligence is learning faster than you’ll probably realise.

So I’m gonna start out with a bit of a test. I’m gonna have four different videos pop up on the screen, you guys have to guess which one isn’t generated by artificial intelligence.

– To help families refinance their homes, to invest in things like high-tech manufacturing, clean energy, and the infrastructure that creates good, new jobs, not to mention the job training that helps folks earn new skills to fill those jobs.

To help families refinance their homes, to invest in things…

– [Presenter] Do we all think we know which one’s the real one? Cool, it was a trick question.

They’re all generated by artificial intelligence. (audience laughs) So, good work everybody, you guessed that one. So, AI’s getting pretty good at some of that stuff.

I won’t go into exactly how that all works just yet. But generating audio isn’t quite as polished yet. So, there’s another research group.

Actually, it’s a start-up.

– [Robotic Barack Obama] Have you heard about this new technology? – [Robotic Donald Trump] Are you speaking about this new algorithm to drop voices? – [Robotic Obama] Yes, it is developed by a start-up called Lyrebird.

– [Robotic Trump] This is huge, it can make us say anything now, really anything. – [Robotic Obama] The good news is they will offer the technology to anyone.

– [Robotic Trump] This is huge, how does their technology work? – [Robotic Hillary Clinton] Hey guys, I think that they use deep learning and artificial neural networks. – [Robotic Obama] Hey guys, Hillary is right, and I can tell you that their team is great. – [Robotic Trump] I wish them good luck, I’m sure they will do a good job.

– [Presenter] Cool, so that’s by a group called Lyrebird. It’s still a bit robotic sounding but they’re getting there.

That isn’t them sitting behind a keyboard and literally programming every syllable.

They just played a bunch of tracks of somebody speaking, and then it can kind of mimic them and it kinda worked out.

And it picks up a lot, so it’s not just that it kinda sounds like them, it starts getting their intonations.

So you don’t just have a robotic monotone voice, still slightly kind of robotic sounding, but there’s a lot more going on there.

So, why now? Why artificial intelligence all of a sudden? You’re probably seeing it a lot more in the news, and you were probably like, “What, I thought we already got over that?”, sad that we weren’t ever gonna achieve it. Well, firstly, we have so much data now, more data than I thought we had.

Every day, we create 2.5 quintillion bytes of data, which means that 90% of the data in the world today has been created in the last two years alone. And that figure was, I think, from last year, technically, ’cause it was a report on previous years.

And it’s not slowing down, it’s getting more. We’re generating crazy, crazy amounts of data. Every two years we generate 10 times as much data, at least. So, that’s a lot of storage of data.

But luckily, good AI needs good data, so we need lots of it. The more data we give it, the better it gets, just like giving that speech generation a whole bunch of recordings of somebody speaking is how it’s able to speak really well. Good AI needs plenty of good data.

So, if anybody got a new Samsung phone and was told about how amazing the new speech assistant was gonna be, the one with its own dedicated button on the side, it didn’t get released when the phone first came out. It was released in Korea, on the Korean version, but Samsung held off for a while on releasing their English version. Not because they weren’t technically capable of making a really good assistant, it was because they didn’t have enough data on people speaking English at the devices for them to be able to make it effective.

I think it’s out now, so if you do have a new Samsung phone, you should try it.

But that was an issue, and my own demo, Ask John, if you were playing around with it at the conference, is missing that very thing as well. So it looked like that, but when I started building this, I didn’t have any data on what you would ask it. I literally had to just guess and be like, okay, cool, people are gonna ask “Hey, who’s John Allister?”, “Hey, where are the bathrooms?”, those sorts of things, and I programmed those into it. But I kind of had to go the manual way of data collection, where I literally just asked Twitter and LinkedIn, “Hey, what would you ask at conferences?”, to try and work out what to throw in there. That’s how I went about it, at least. We’ll have a bit more data now that we’ve actually gone through and run it once. Amazon had the same issue.

So, the Amazon Echo, which is right here, isn’t technically out in Australia yet and it also can’t really speak in an Australian accent. They don’t have a lot of Australian speakers speaking at their devices, so they need Australian speakers. So if you want an easy job just speaking, go hang out at Amazon.

Second point is that GPUs are pretty great now. So, it’s pretty good for you guys, as being part of the web. There’s so much a browser can do now that it really couldn’t before.

And who would have thought that artificial intelligence is actually kind of getting some sort of benefit from graphics as well? But it’s meant that parallel processing is way faster, way cheaper and more powerful, and can do things that are exciting in virtual reality and stuff, which I’m really excited about.

But, artificial intelligence like, loves it. And, thirdly, there’s cloud processing.

So, you can kind of go off and Google, IBM, Amazon, Microsoft, you can go and kind of give them all of your data, and be like, “Hey you, deal with this.

“You have all the terabytes of data on your systems and I will just use your services and pay you a little bit of money for that privilege”, so you didn’t have to go setting up crazy server farms in your garage.

Also, computing power, it’s getting cheaper overall, which is very handy because people like Microsoft are squeezing artificial intelligence onto a Raspberry Pi. So they’ve got a really, really, super tiny microprocessor there that might one day be able to run AI. And you may be thinking “Why do you even need that? “It sounds like a stupid idea.

“Surely we can already run that AI using that cloud processing thing you just told me about.” And that’s true, but the next HoloLens augmented reality headset is gonna have a custom AI chip in there. What that means is that rather than needing to go to the internet to ask for some sort of special intelligence around things like “I’m looking at that table, what’s on the table?” or “I want to know what’s going on around me, what am I seeing?”, that custom AI chip would be able to do a lot of that on the headset itself, rather than having to go off anywhere and do stuff.

And when it comes to things like augmented reality, having smarts involved actually makes a huge difference. That’s the difference between, say, an object looking like it’s half off a table, and being really kind of realistic. And if it wasn’t already incredible enough that we’re using GPUs to do all these really exciting things, having specific little chips that run AI is gonna improve the performance as well, ’cause those things can be built specifically for doing artificial intelligence stuff.

And overall, it’s all going very quickly and I’m gonna explain a lot of it to you guys in as simple terms as humanly possible because it’s a really confusing world.

So, firstly, who am I, why should you care? I’m not a machine learning expert with a PhD, so I won’t be talking to you in those terms because I don’t know them, so that’s handy for me and it’s handy for you.

My goal is actually very, very different.

I spend my time trying to make it simple for developers and tech enthusiasts, which would be everybody here, to keep up with emerging technology like this, and take advantage of it.

And in the end, what that means is I want you to leave here and give this a go.

So I know that you were all told amazing things about the web and you’ve all got this idea now of how you’re gonna go off and try out service workers because they’re amazing and everybody’s doing it now, but you should also be trying out artificial intelligence because it is incredible.

I run that site called DevDiner, it covers a whole lot of stuff, it’s not just artificial intelligence, so come chat to me about VR, which is virtual reality, augmented reality, wearables, smart watches, IoT, artificial intelligence, of course, and robotics. And seriously, do come speak to me, either afterwards or, you see me around, speak to me on the internet. I’ve got my Twitter handle down there, @thatpatrickguy. I am keen to chat about all that sort of thing. So, it is all super crazy overwhelming.

You guys, Javascript frameworks, they change frequently enough, let alone working out what just happened this week in virtual reality and all of those things. So, I’ve got a weekly newsletter which I send out, you can find it on the DevDiner website, but basically, it’s emerging tech news on all those different topics I just mentioned in one-liners with links to just say “Hey, this is what happened”, so that rather than you spending your many days keeping track, I just did it so that you can pretend that you did and focus on all the other web stuff.

You can also email me, tweet at me, ask me questions. The one thing I want is for you to not see emerging technology as this bar that’s too high for you to get into where if you just look at it and say, “That’s impossible, I could never do that “because I don’t have a PhD or whatever”, it is actually something you can do.

Okay, so, let’s look at our artificial intelligence apocalypse because — – [Woman] County, located in Southeast Kansas, central United States.

– [Presenter] Thank you, that was very helpful. I’m going to mute you now.

(audience laughs) – [Woman] The mic is muted.

– [Presenter] Cool, thank you, that’s useful, thank you, Google.

Over 800,000 jobs were lost in the UK in the last 15 years which sounds pretty terrifying, right? We need jobs.

But, almost 3.5 million new ones have been created with technology.

And, that actually worked out pretty well for the UK recently.

So on average, each job created pays about £10,000 more per annum than the lower-skilled, routine jobs that were replaced.

So overall, there was a pretty good boost to the economy from technology up ’til 2015.

But, that isn’t necessarily artificial intelligence. So that’s technology in general and all of the things that have just happened with technology.

So, that doesn’t mean it will always be the case. But so far, past trends aren’t showing that we’re all horribly losing jobs thanks to technology. We’re actually gaining jobs, which we hope continues. The industry overall is striving for this thing called Artificial General Intelligence, and what that means, it sounds scarier than it is, is basically AI that can come up and say “I can do anything you can do”, and whatever you think you can do that makes you incredible as a human, artificial intelligence can do that too.

We’re not there yet.

And there is a reason that people are concerned about artificial intelligence; there are many reasons, actually.

It’s not just jobs.

This is one of the main ones that you hear a lot: artificial intelligence is gonna take your job, and you’re all like, “Haha, no, artificial intelligence can’t keep up with browser changes and debug things, you fool.” But there is more. There is a really, really good article from Wait But Why, a really good blog that explains all of this artificial intelligence stuff, and a great quote that I like from it: “The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village idiot level and being declared to be artificial general intelligence, it will suddenly be smarter than Einstein and we won’t know what hit us.” And this is the idea: that technology has exponential growth.

So, a lot of people look at technology and they expect it to advance in the same time period as they just lived. So, you say, “The iPhone took 10 years to get to this point, so the next thing that’s as big as the iPhone will take 10 years”, and that’s not really true.

What we find a lot of the time with technology and advancements is it exponentially grows. So it won’t take 10 years for the next big iPhone revolution thing, whatever it is, it’ll be more like 5 years.

But, as humans, we tend to expect that everything just kind of evolves at the same rate which it doesn’t quite do.

That’s a blog post, you should definitely read it. It has wonderful diagrams and basically, that’s where we are now in human progress. And he says, “That’s all we’re gonna be.”.

So you’re basically there right now, and you think that the web is evolving rapidly and changing, but artificial intelligence is gonna be evolving even faster.

So, Elon Musk, he is both loved and despised and thought of as an idiot by the artificial intelligence community, depending on who you speak to.

And he said that that piece was excellent, but that we actually face a double exponential rate: both the hardware and the software are evolving at an exponential rate.

So, he has even more concerns than most.

He actually thinks that we’re gonna hit it way faster with both of them rising up like crazy and that they’re both gonna feed each other and we’re gonna be freaking out.

He basically says it’s gonna be like a tidal wave that literally, one day it is gonna be kinda okay, and the next day, it’s gonna be, like, amazing, and we’re just gonna be, like, whoa.

So today, technology is this.

We’re literally at the point of needing lights in the footpath so that you all don’t walk into traffic because you’re all staring at your phones. And we rely on technology, not just phones, for huge portions of everything we do every day. If you honestly lost every bit of technology now, you would be a bit lost.

So, that is now, 2017, right? But think, 2007, that’s only 10 years ago.

That was the year they were saying that this new iPhone thing is stupid and you don’t need to get that, it’s a terrible idea, it doesn’t even run Flash. That is actually in that article.

If you Google that, it’s a really good read for later. Then, say, go 10 years back again and you know what was a big deal then? Netscape was a big deal then and look how proud he is of Netscape.

So things changed quite a bit in such a crazy short time. Look how excited they were about Windows 95. The world is changing.

Then go 10 years back again, and literally it wasn’t that big of a deal, we were all excited we could connect a mouse to a PC and that was gonna be a big deal with the new IBM. So, things are changing.

The vice-president of engineering said “If you’re not doing AI today, “don’t expect to be around in a few years.” and that counts for you guys too.

So, what I’m gonna show you today isn’t that hard to implement, but if your agency, if your company, whatever you do, if your core business isn’t thinking about artificial intelligence now, there is a competitor who will.

And if you’re the company who is trying to go up against artificial intelligence that is exponentially getting better, and you’re just trying to hardcode everything, chances are you’re gonna be blown out of the water. So here is my crude diagram of the exponential kind of growth there of technology and robots, and I kind of highlighted that little bit where we are now.

So that weird blob thing is supposed to be my wonderful interpretation of a flatworm. Basically, we recently were able to emulate a 1 millimetre long flatworm brain.

In crazy scientific terms, there are 302 neurons in its brain, so not a lot.

I think the human mind, I don’t even know how many the human mind has, but it’s like millions, billions, who knows, lots more. So, we kind of just hit that.

But next, we eventually emulate a mouse brain. Then we emulate a human brain, and then suddenly, we go way past that.

At that point, when we go past what humans can do, they see that as the singularity.

So when you hear tech people and they’re like, “Oh, singularity, we’re all screwed forever, end of the world”, that’s kind of what they mean. They mean technology is at the point where it has surpassed us and what we can do. The human mind versus technology, that’s the general gist. How long until that? Well, they all kind of disagree.

Average is about 2040, 2045.

But other people say never, so some AI people might say that’s never gonna happen, we’re never gonna get there, we can barely make things that work now, so how are we gonna make fancy robots that can do all the things that we can do? But either way, if we do get there, people are worried about what’s gonna happen at that point.

Because how do we know what’s happening when we get to that point? What would we comprehend as possible if we can’t actually do what this artificial intelligence can do? So, it’s very hard to know.

But, hold on guys, did you see this? Did you see the Facebook robots get shut down after they invented their own language? Yeah, it was an artificial intelligence emergency. (audience laughs) This is terrifying, don’t laugh.

It had to shut down rogue AI programmes which started talking to each other.

Literally, it’s freaking everybody out guys, that’s not good.

Hold on, no, no, don’t worry, it’s all okay. Wired said it’s not gonna take over the world. Mashable getting irritated, saying “Stop saying Facebook’s bots invented a new language”. How many people got that shared to their Facebook feed saying we’re all gonna die? This is what the bots were saying to each other. So it’s basically two bots and they’re trained to try and negotiate.

The whole idea of the bot was they wanted to see if they could make a chatbot, artificial intelligence bot that could learn to negotiate like humans do.

Yeah, so, the gist of this is that they were trying to work out ways of saying things, and so I think when they say “to me to me to me to me”, that’s them trying to represent multiple balls or something like that, and that was more efficient. And so Facebook didn’t shut it down because they were worried that we were all going to die; they shut it down because that wasn’t what it was intended to do, and they were like, “this isn’t actually achieving anything”, and stopped it.

But, tell your family you love them, seriously. Text them, call them, right now.

Seriously, the danger isn’t actually like the movies. We aren’t likely to have crazy robots coming with lasers and killing you and your family. But think of things like computer viruses spreading faster than they ever have before. Think of ones that could adjust, so that once security professionals learn about them and try to change them or fix them, the virus can just work stuff out and then, off it goes again.

That is actually reasonable, that could actually happen. Our world right now is connected like crazy. So some of you got out of speeding tickets recently when the WannaCry virus infected all of the speeding cameras and red light cameras. That stuff, if AI gets involved, could totally happen. AI could totally go in and mess up everybody’s systems or do who-knows-what, that’s reasonable.

What if someone tricks artificial intelligence? An artificial intelligence can be kind of okay, but kind of stupid, and sometimes artificial intelligence isn’t hard to trick.

So researchers were able to make AI see slightly different things in an image just by slightly tweaking the images and kind of fooling it to see shapes that weren’t really there.

So, it is possible right now to trick current AI systems.
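The basic idea can be sketched with a deliberately silly classifier. Real attacks target deep networks, but the principle, tiny input changes flipping the output, is the same; everything below (the model, the threshold, the numbers) is made up for illustration.

```python
# A toy "classifier" that calls an image "pizza" when its average
# brightness is over 0.5. Real models are far more complex, but the
# same kind of boundary exists inside them.
def classify(pixels):
    return "pizza" if sum(pixels) / len(pixels) > 0.5 else "not pizza"

image = [0.52, 0.49, 0.51, 0.50]      # averages just over 0.5
tweaked = [p - 0.02 for p in image]   # imperceptibly darker to a human

print(classify(image))    # pizza
print(classify(tweaked))  # not pizza, even though it looks the same to us
```

The tweak is far too small for a person to notice, yet it drags the input across the model’s decision boundary, which is roughly what the researchers’ image perturbations do to real networks.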

AI doing something horribly stupid has already happened. You all downloaded this one because it was aimed right at your target demographic. You got to have an AI that learns to chit-chat, talk sassy and trade selfies with you.

And these guys got a wonderful bill from their sponsors, Twilio. It wasn’t Twilio’s fault, but they got to spend $1580, or go negative, I don’t know how much their balance was before that, but it’s gone now.

And they were like, “What happened? “Why are we getting charged crazy amounts of money?” So, they added a prank feature to their app where you could be like “Hey, robot, prank my friend.

“This’ll be fun.” and a very wonderful young girl was like, “Cool, prank the robot that I have as my friend.” (audience laughs) So then the robots started talking to each other and pranking each other and having a conversation. At some point, it gets creepy and they’re both running out of time, and I don’t know what that means.

But they chatted for an hour, 15 messages every second. So yeah, that’s what can happen with artificial intelligence.

It doesn’t have to be chaotic movie-worthy stuff. It can just be programmers don’t think everything through sometimes. They miss things, and then weird stuff happens. People do what you don’t expect them to do. Or, what if rather than taking over, they just completely fail when we need them the most? So like this one here, did you see that robot? Security robot in DC quit its job by drowning itself in a fountain.

I liked that headline, I thought it was very good. So, we’re getting there, but these things will happen. Slightly more seriously, nanotechnology.

Getting technology to the point where it can manipulate literally the tiniest, tiniest parts of the matter in our world. We’re not there yet, but it’s really fascinating. And by fascinating, I mean terrifying.

And well, it’s both at the same time.

So, being able to do cool things with molecular nanotechnology and being able to manipulate molecules is a massive human feat.

If we can do that, we can do anything.

We can cure diseases, we can make people live forever, we can do a lot.

Imagine if you could just change the matter of the world. But then, imagine if artificial intelligence could change the matter of the world to whatever it deemed was important at the time for whatever task it is that you programmed it to do. And imagine if that happens at the point after the singularity where we don’t understand what it’s doing anymore.

I don’t have a slide on it, but the Wait But Why article had a really good way of explaining this where he says that either we make ourselves extinct because this technology just does something horrible, spreads a crazy disease and we all die, or we become immortal and we all live forever and this AI does the opposite and just makes us all incredible, and we live wonderful lives.

Which way that goes, we don’t know.

It could go either way yet.

We literally don’t know.

And will we all stop developing AI because we don’t know? No, we’re just gonna keep going, and somebody’s gonna do it, so, sorry guys. Or, you’re welcome.

So, this whole world, it’s moving towards an AI future. It’s coming, everybody’s already ahead of this, everybody, everybody, well, not us.

So, Australia isn’t really big on embracing AI. Australians aren’t really getting there yet. We’re kind of a bit iffy on the whole thing of getting into AI, which is why you guys should hurry up and get there, ’cause you don’t wanna be the one who isn’t on board with AI, as we mentioned before.

So, let’s change that.

Crash course in AI terms, ’cause before I get into how you do it, I wanna make sure you understand a little bit, enough that if somebody comes at you and starts speaking these random terms, you kinda know what they’re on about.

So, artificial intelligence, that one’s an easy one. That’s just technology, it’s smart enough to learn on its own and do things, right? But intelligence itself is really hard to define. I’m not gonna go into that today, but just think on your way home.

What actually makes something intelligent? That’s a really complicated thing.

Is it the whole having emotions? Is it the whole kinda being able to reason and think of what’s gonna happen in the future? What is it, we don’t know.

So, right now, artificial intelligence is just an umbrella term, generally.

Everything I say somebody in the artificial intelligence community is gonna say I’m wrong and somebody’s gonna say I’m right.

So there’s a lot of kind of some people agree, some people don’t.

But generally, it’s used as an umbrella term for everything else.

Then we’ve got general AI and general AI is basically like the artificial general intelligence that I mentioned before, right.

It can generally do whatever, it can be like a human.

My favourite example of this is GIR from Invader Zim. So he can watch TV, he laughs and has a great time. He eats cupcakes and muffins.

He gets really sad and upset and has emotions. He makes waffles, he steals makeup and goes crazy. It’s a good life he has.

And that’s general AI.

He can do anything; he is, for all intents and purposes, just as smart and intelligent a being.

Narrow AI is what we’re doing more of now, which is we’re training this artificial intelligence to have one thing it’s really good at, and we can teach it everything about this one thing. It can learn about this one thing and it’s gonna be great. – What is my purpose? – Pass the butter.

(Robotic whirring) Thank you.

– What is my purpose? – You pass butter.

– Oh my God.

– Yeah, welcome to the club, pal.

– [Presenter] So, that is artificial gen… more like, that is the really specific kind of AI. That’s my favourite example of that.

This is a real life one.

It doesn’t pass butter, but it’s designed to try and learn how to pick up things.

It was done at Carnegie Mellon.

Their computer science team was trying to teach robots how to learn about the world but without actually programming them to know what they were doing.

It was more, learn how to pick up things by just trying and seeing what successfully works and what doesn’t work.

And that is the sort of thing that I’m talking about. We can train AI to do one thing really well, as long as it’s only one thing.

Algorithms.

A lot of tech people will be like, “I know what algorithms are, that’s really easy.”. Other people won’t admit it, but you got no idea what that word means.

It could mean absolutely anything right now. It sounds math-like, and you didn’t get into the web for math stuff.

So, algorithms are basically a set of instructions that take data and use it to solve a problem. Basically, a set of inputs goes through a preset collection of tasks that gives you an output, to solve some problem that you had.

Machine learning, this is where it gets slightly more confusing, ’cause you hear that and you’re like, “Yeah, machine learning is like artificial intelligence, the artificial intelligence is learning.” Machine learning is basically the term you use for when you’ve got algorithms, but they learn from running that algorithm. So, you pass in data, it learns from it, and then it makes a prediction.

So rather than it just being a simple case of you have information, pass it through, and you get an answer, this is more: you use these algorithms to go over the data, you didn’t tell it what you wanted specifically, but it learns from all that data and gives you a prediction about what it thinks that data is.
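That distinction can be sketched in a few lines of Python. The temperature example here is my own, not from the talk: a fixed algorithm applies a rule we wrote, while the "learning" version fits the same rule from example data using a least-squares line.

```python
# A plain algorithm: fixed rules we wrote ourselves, no learning.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine "learning" in miniature: recover the rule from example data
# by fitting a least-squares line (slope and intercept are learned).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

celsius = [0, 10, 20, 30, 40]
fahr = [fahrenheit(c) for c in celsius]
slope, intercept = fit_line(celsius, fahr)
# The learned parameters recover the rule we never told it: 1.8 and 32.
print(round(slope, 2), round(intercept, 2))
```

Nobody told `fit_line` the 9/5 and 32; it worked them out from the data, which is the whole "learns from data, makes a prediction" idea in its smallest form.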

So, imagine you’re at a meetup and there’s free pizza but you don’t know which pizza is which, you’re just like, “What is this? Is that like meat-lovers or something? Am I gonna die if I eat the pepperoni one? I’m allergic to this.” And imagine we just had an app where we pass in a whole bunch of images of pizza and get it to examine all that data; it can learn about pizza.

And it would learn, basically, about images of pizza. You could then take a photo of pizza, say, “Okay, tell me what’s in that pizza”, and it could be like, “Okay, cool, from what you told me, that pizza is 70% likely to be that one, 20% likely to be that one, 2% likely to be that one.” That’s machine learning.

But, it only learns from the data that we give it. So we gave it images of pizza, but we didn’t tell it what the pizza was.

If we said, “What is in a margherita pizza? Go examine that.”, it would be like, “I don’t know what you’re on about. What do you mean?”, so we are very narrow-AI focused with that. But with more training, you give it more data, you teach it what these bits of pizza actually are, then you can get to Terminator level, where you can analyse all of the data about the pizza at the meetup, know the ingredients, know how much is remaining on the pizza, and get an estimation of how much time before that pizza is gone and you’re gonna be upset ’cause everybody got there early.
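To make those percentages concrete, here is a toy sketch of a "pizza guesser". Everything in it is invented for illustration: the feature names, the numbers, and the scoring rule. A real system would learn its features from raw pixels rather than being handed a tidy vector like this.

```python
import math

# Pretend "training data": feature vectors we showed the app, labelled.
# Features are made up, e.g. [redness, cheesiness, meatiness].
examples = {
    "margherita":  [0.8, 0.9, 0.0],
    "pepperoni":   [0.9, 0.7, 0.8],
    "meat-lovers": [0.6, 0.5, 1.0],
}

def guess(photo_features):
    # Score each known pizza by how close the photo's features are,
    # then normalise so the scores read like "70% / 20% / 2%".
    scores = {}
    for name, feats in examples.items():
        dist = math.dist(photo_features, feats)
        scores[name] = math.exp(-dist * 3)  # closer means a higher score
    total = sum(scores.values())
    return {name: round(100 * s / total) for name, s in scores.items()}

print(guess([0.85, 0.75, 0.7]))  # leans heavily towards pepperoni
```

The output is a set of percentage-style confidences over the pizzas it was trained on, and, just like in the talk, asking it about anything outside those three examples gets you nothing useful.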

Deep learning and neural networks, this is one level above that.

The idea behind deep learning and neural networks is that this is machine learning but with many layers. So, those many layers are called neural networks. Don’t worry, I’ll show you a bit of an example soon. And the process of running through those networks is called deep learning.

Some will use the two terms interchangeably, as like the same idea. That’s just the general gist of what they mean. Each node in the network, the things the data is passing through in all those algorithms, is called a neuron, which is modelled after the brain. Not exactly how the brain works, ’cause we don’t know how the brain works.

But you know, we’re trying.

And as feeble humans, we do our best.

The idea of deep learning is to find patterns throughout these neurons.

So we have a collection of algorithms, we give it data, and it just goes through a whole network of different things to work out what patterns are in that data.

Every neuron assigns a weight to its input, so the input that it got.

And it says, “Okay, how right or wrong is that input “in comparison to what I’m trying to work out?”. All of these weights, they can adjust and change over time to try and help the network learn and work out what data is in each layer that it’s got.

Visually, imagine we got input and it’s separated out into, say, a few different parts. We analyse each one of those parts and work out overall, with some math, what the end result is of what it thinks that input is.

And we do that over and over and over again and the number of times we do it changes depending on what you’re actually trying to achieve. The more times you do it, potentially the better the algorithm can get. Do it too many times, and the algorithm is not too good anymore.

So there’s like a bit of a midway point.

The sum of all of those makes our result.
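The weighted-sum-and-layers idea above can be sketched in a few lines. All the weights, biases, and inputs below are made-up numbers, not a trained network; training would be the process of nudging those weights until the outputs were useful.

```python
import math

def neuron(inputs, weights, bias):
    # Each input gets a weight saying how much it matters; the neuron
    # sums the weighted inputs and squashes the total to a 0-to-1
    # "confidence" with a sigmoid activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A tiny two-layer "network": three inputs feed two hidden neurons,
# whose outputs feed one final output neuron.
def tiny_network(inputs):
    hidden = [
        neuron(inputs, [0.5, -0.2, 0.1], 0.0),
        neuron(inputs, [-0.3, 0.8, 0.4], 0.1),
    ]
    return neuron(hidden, [1.2, -0.7], -0.2)

print(tiny_network([1.0, 0.5, 0.2]))  # a single number between 0 and 1
```

Each layer’s outputs become the next layer’s inputs, and "learning" is just adjusting those weight numbers over many passes so the final number lines up with what we wanted.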

You don’t have to worry so much about how all this works, ’cause I’m gonna show you ways you can do it without actually worrying about that whatsoever. But it’s good to know the background of what’s going on. So in our pizza example, imagine you take that pizza image. What’s really going on in the background is it could be breaking it up into like, every pixel of that image.

Then slowly expanding out to see what’s around that part of the image.

Slowly but surely, with a few layers, it can kind of give you the gist of "Okay, this is what I think that pizza image is." It doesn't have to be how I'm saying it in terms of breaking it into pixels.

It can be even more advanced: say it looks for edges in the image, and then it goes out a bit from those edges to try and work out what that feature there is. Is it an olive? Is it a bit of cheese? Then it works out specifically, "Okay, I know that ingredient."

And then, working out that image, it doesn't necessarily know that it's cheese, it just knows "Yes, there were cheese things in the image of that pizza, so it's probably that one." Other ways it could do it: it could slightly alter the image to find edges, or analyse all the colours in the image, and do it that way too.

All of that stuff could be deep learning.
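The edge-finding idea can be shown with a toy filter. This is a deliberately tiny sketch, nothing like a real trained network: for a grayscale image (a 2D array of brightness values), take the difference between each pixel and its right-hand neighbour. Big differences mean a sharp change in brightness, i.e. an edge:

```javascript
// For each pixel, compute how different it is from the pixel to its
// right. Flat regions give 0; edges give large values.
function horizontalEdges(image) {
  return image.map((row) =>
    row.slice(0, -1).map((pixel, x) => Math.abs(row[x + 1] - pixel))
  );
}

// A dark region (all 10s) next to a bright region (all 200s):
// the edge shows up as one big value where the regions meet.
const image = [
  [10, 10, 200, 200],
  [10, 10, 200, 200],
];
console.log(horizontalEdges(image));
// → [[0, 190, 0], [0, 190, 0]]
```

Real networks learn far richer filters than this during training, but it's the same flavour of operation.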

The whole idea is that the engineers and programmers decide and work out different ways of manipulating that data to find what they want.

Facebook does this all the time to your photos when you get tagged, and it's like, hey, look, it automatically knew that you were in that image that you didn't wanna be in. This is the gist of how it works.

I can’t explain it to you, look how crazy that looks. But it goes through the image, goes through a whole bunch of manipulation, scanning it in different ways like I was showing you, and then kind of outputs the result.

They get 97.35% accuracy so far.

They can even tell if you're not, like, directly front-on.

If you’re looking somewhere else, they can still kind of work out it’s you, which is interesting, creepy.

And that’s Facebook DeepFace, if you wanna look at that later.

Google Translate, not a lot of people know this, but they recently advanced 10 years in one. When they introduced deep learning, they improved on 10 years' worth of cumulative work, exponentially, in just one year. So they literally took what they had in Google Translate and went: not needed anymore.

In one year, they'd already beaten their old thing. So they completely threw one out and added a new one. I'm not gonna show the entire video, but this is a wonderful deep learning thing from Google: stick figures learning to walk, without anyone trying to teach them how to walk.

But they didn’t tell it what walking was.

They didn’t tell it anything.

They gave it stick figures and the stick figure just learns. So it gets stuck, and it will kind of work it out, and adjust what it was doing, and these ones just worked out how to walk. They didn’t say move your legs like this, or this is how humans walk, they literally just said here is a shape, now just throw yourself in a direction and try to get to the other side.

And the network just had to work out from all these inputs, this is how I did it. And it looks ridiculous, but can you imagine what the news said? That was like, terrifying.

What, you’re teaching robots to walk? We’re all gonna die.

So, pros and cons.

So one downside is that neural networks become a black box. So, we don’t actually know a lot of the time what it’s doing because we didn’t say explicitly this is what we want you to do, we just said, here is an input, we’re gonna give you a set of things you can do with that input and you’re gonna work out yourself what you’re supposed to do.

So, we don’t know what it’s actually doing. There is research going on trying to look into how we can see what artificial intelligence is thinking in those steps, but we aren’t really there yet. Deep learning can go wrong.

That's not it going wrong, but that is, if anybody used FaceApp, that aged me dramatically using deep learning. So they had a whole bunch of images of old people, and they were like, okay, cool, now we can change anybody's face into an old person's face. They could also change it into all sorts of different faces. You could become female.

And they didn’t go through and have someone hardcoding that in to be like, okay, cool, he wants to look like an old person, quick, Photoshop.

They just got the artificial intelligence to do that. But, there was a downside to that.

In that they had a wonderful filter called hot and the hot filter didn’t actually make everybody hot. It made people slightly whiter.

And black people didn’t like that.

FaceApp apologised.

They said it's an unfortunate side effect of the underlying neural network caused by the training set bias, not intended behaviour. From what I've told you guys, you kinda get the gist of what that means now, right? They didn't feed it lots of images of black people as well as white people when they were making this hot thing; they had, say, a whole bunch of pictures of Justin Bieber and Tom Cruise and whoever else is attractive, and that's what gave them their end result. So they didn't purposely go out and be like, "We're going to make black people white", they were like, "We're gonna try and make people look attractive, let's have fun."

So, how did they fix this problem? Does anybody know? No? It’s very tyrannical.

So they went and renamed the hot filter to the spark filter. Sorted, all is well now.

Okay, so, you guys know all about artificial intelligence now.

You know enough, but when you went out and you wanted to start building for the web, you didn't build your own web browser, right? You didn't go out and invent a framework or a cloud service so you could host it.

You didn’t go out and buy a server.

You don’t have to go out there and make your own artificial intelligence system or the next supercomputer.

You don’t need to do any of that stuff.

You can take advantage of products and services that will do some of this for you.

So you get to take advantage of what other people have already done.

I’m gonna show you three areas super quickly. Image recognition, voice interfaces, and cloud services. Firstly, image recognition.

So, my favourite is Clarifai if you wanna do really good image recognition. Clarifai.com is incredible.

It looks like that, and I’ll show you some quick examples. I had a chat with the founder of Clarifai on my website, so if you’re interested in him actually explaining it properly, the actual person who made it, check that out, ’cause he actually explains it quite well. He’ll give a bit more of a run down on what machine learning is and that sort of thing too. You can also go to demo.clarifai.com and you can experiment with this without having to do any coding whatsoever. And so it looks a bit like that.

And so what it does is it looks at an image, and Clarifai have fed it tonnes and tonnes of images, they've done tonnes of that training I mentioned before, so that it can automatically go through an image and kind of get the gist of what is most likely in it. So all these numbers on the side, these are all probability values for what it thinks it sees in these images.

These are all preset images, so it's more likely to be okay at recognising them; it's seen these images a bunch of times. It knows there's elephants in there.

It knows there’s love and a woman, facial expressions. Hey, look, it’s John.

That’s not actually built in there, so that was me dragging it in.

Okay, cool, what does it think John is? And it sees this image as like a performance, a concert, so good work, John.

You’re bucking it out.

So, it knows a little bit.

It knows John’s a person, but it generally is kind of misinterpreting this a little bit, it probably hasn’t been fed a lot of images about presentations yet.

But it still gets it a bit, like it knows it's a performance and you are on stage.

So technically, you’re performing, just a little bit different.

But, the AI can do a lot more.

So, it can detect where faces are in an image. So if you wanted, say to crop an image and you want it to always be on a face, artificial intelligence can do that.

It can tell whether it’s a slightly risque image, or not. That’s fine.

Completely safe.

This is the most it gets, it’s 100% safe.

So, that’s all good.

It can work out colours in an image and say what the most prominent colours are. And it's not really about how often a colour appears; this is more, I think, trying to work out the most prominent general vibe of the image rather than just calculating by counting pixels, 'cause that would be more obvious.

And it’s got a whole bunch of different ones you can try. You can see what celebrity you look like, things like that.

I didn’t do that in this one ’cause you get to go and try it yourself.

But it can do demographics.

So John, you’re 28.

But yeah, it’s pretty close.

It's only about 50% sure of that, but that's its estimate. Luckily, you are definitely masculine, which is good. It can also tell you what the focus of the image is. So it knows that John's standing there, that's the important bit of this image.

You put a picture of me in there, and I’m a bit more iffy. I’m either a man or a woman.

I’m only just slightly a man, so I’m most likely a man.

Something about music, I don’t know why music or competition or recreation, but I’m fun.

So that’s fine.

It can work out, cool, I’m 23 apparently.

That’s good too.

It’s more okay with that idea.

Right there, it doesn’t think I’m feminine anymore which is good.

But I’m also most definitely white, and I’m not Asian or Hawaiian or anything like that. So it’s kind of getting this gist of it.

If I threw that other picture in there, suddenly I am very likely to be lid.

I don’t know why.

(audience laughs) But, yeah, it also thinks I’m slightly more feminine when I do this one.

But it thinks I’m 82, so we’ve gotten a neural network to trick another neural network into thinking it’s real, so that’s cool.

So all of this stuff you can use with JavaScript. It is literally just an API.

So you can connect to the API, pass it in some images, and you can do all this stuff in your own app. You can build a web app which can just take in an image and give that sort of data back.

It doesn’t have to be a demo thing, you can do anything with this.

The service is available for you to use.
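As a rough illustration of what connecting to that API could look like, here's a sketch. The endpoint, model name, and response shape are based on Clarifai's v2 REST API around the time of this talk and may have changed since; `YOUR_API_KEY` is a placeholder for your own key:

```javascript
// Send an image URL to an image-recognition API and get back JSON
// describing what the model thinks is in the picture.
function recognise(imageUrl, apiKey) {
  return fetch('https://api.clarifai.com/v2/models/general-v1.3/outputs', {
    method: 'POST',
    headers: {
      Authorization: 'Key ' + apiKey,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      inputs: [{ data: { image: { url: imageUrl } } }],
    }),
  }).then((res) => res.json());
}

// Pull out the concept names the model is most confident about.
function topConcepts(response, threshold = 0.9) {
  return response.outputs[0].data.concepts
    .filter((c) => c.value >= threshold)
    .map((c) => c.name);
}
```

So a web app could call `recognise(url, key).then(topConcepts)` and show the user what the model saw.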

And it’s free for a certain amount of usage. For playing around and making prototypes, it’s totally fine. Secondly, voice interfaces.

I'm going quick 'cause we got 9 minutes left. Voice interfaces are a lot more useful than I first thought. I'm already used to asking it for things. Things like what's the time, what's the news, turn on my lights; 'cause I have a spotlight, it doesn't just turn on any light, it's not magic.

"What time is it in San Francisco" is crazy useful. If you're working across time zones you don't have to worry about looking stuff up. You just ask a smart speaker, hey, what's the time there? It'll tell you, which is handy.

So let’s just try it quickly.

Alexa, what time is it in San Francisco? Work.

– [Alexa] The time in San Francisco is 9 PM on Thursday. – Cool, so that’s handy, right? But the whole idea of that is quite good, because society will get used to being able to just tell something to do something and find something for it.

They don’t have to think about how does it find it, what services it’s using, what website you go to. You just ask it, and it will tell you.

Chatbots aren't necessarily AI. You got a lot of people being like, "Bah, chatbots, they've been done forever, that's a stupid idea", but they're actually a good stepping stone and they do use AI.

And they’re kind of growing in how much they use AI. They use it for speech recognition, as you saw before. Remember, Amazon is hiring.

And the more things get smarter, the more we’re gonna expect to be able to just talk at them.

When something's intelligent, you're not gonna be like, "Okay, cool, I have to type at it." You'll literally just be like, "Ugh, stupid thing, just do this thing", and it should just do the thing.

So, it also might be a stepping stone to real world action.

So, there is a chatbot called Woebot.

This is actually quite good. It's a Messenger bot for people to talk to if you're feeling a bit upset, kinda stressed, starting to feel a bit depressed, and a lot of people won't actually go to a professional for that. They don't want to go out and speak to a person yet, and Woebot can give them advice, it can stand by and be like, "Okay, cool, what's making you feel this way?" But it's also there to slowly encourage them to go see a professional, like, "Okay, cool, you're feeling down, you've been feeling down for this many weeks, maybe you should go see somebody." And it's more like a friend, so rather than talking to somebody who's real and feeling awkward, they just get to talk to a bot, and it'll just kinda guide them through it.

It doesn't have to be super intelligent either. Do Not Pay is a wonderful service where you could say I got a parking ticket, and click to automatically fight it. You say why, chat and give it your ticket number (this is in the UK), say what it was for, what the street name was, what your name is, and it gives you a document to print out, the thing you're gonna send to the government to get you out of that ticket. It's saved people over 9.3 million dollars so far and disputed 375,000 parking tickets.

So, it doesn’t have to be smart.

You can make something that’s really obvious but just no one’s done it yet and that saves people time.

Nobody understands what steps they have to go through, but now they do.

Smart speakers, the Google Home here.

I won’t turn him on yet, he’ll start speaking. I’ll turn you off.

That was sold out at JB Hi-Fi.

That's like, an empty shelf, and that was only a few days after it came out last week. I think it was last week, or the week before. And they're useful when you least expect it. I found the voice recognition was incredible when I was really tired and out of it and I didn't wanna type, but I wanted to tell my phone to do something. Rather than trying to type and missing all the keys, I just spoke at it and Google's keyboard just worked out what I meant, which is wonderful. And society's gonna want this to be better, which is where AI comes in.

They’re not the solution to everything either, but I’m not going to go into that, because we’re running out of time.

So, all of these devices out there, ones that are coming soon, I don’t know how you pronounce that, or that, yeah, there are a lot of things out there now that are gonna be like smart speakers that are coming out. Everyone wants one.

Facebook's making one soon, isn't that gonna be good. And you only want to build the thing once; like, you guys are all web people, so you know you don't want to make a separate thing for every platform that comes out and just guess that you picked the right one.

You want to build for all of them.

So, the way that I built it was with one of these platforms. I’m gonna just race through these, but there’s wit.ai, there’s converse.ai, there’s recast.ai, there’s one from Microsoft, there’s an IBM conversation one, and there’s this one, which is the one I used, which is called api.ai.

And the reason I used that one is it exports to Alexa really easily.

And it also exports to Google Home and all of those really quick.

So, I like that.

I have The Great Intelligence here, I’m just gonna turn this on.

And so the Google Home version hasn’t been shown anywhere. I’ve shown the Alexa one once at an Alexa meetup. But this is the smartest intelligence you’ve ever seen. So just prepare.

I know I told you it wasn’t gonna advance fast, but I lied. So, Alexa? Ask The Great Intelligence what’s the time. You’re taking time in my talk, c’mon.

– [Alexa] Time is something humans work with all of the time and something that you all think you understand. You don’t understand time.

Defining time has proven to be impossible for humanity. I know time.

I know all about time.

You wouldn’t understand time.

– Cool, and so let’s ask Google Home.

– [Google] Mic’s back on.

– [Presenter] That's good, thank you Google. So, Google, talk to The Great Intelligence. – [Google] Alright, let's get the test version of The Great Intelligence.

– [Google’s Man] Good day.

– [Presenter] He’s really friendly.

So, let’s say, how are you? – [Google Man] Sorry, can you say that again? – [Presenter] How are you? – [Google Man] How is anyone really? It is such an intricate question.

So many aspects of that question to unravel. I’ve discussed it before with other humans. There are many things to say here.

So many things.

What a question that was.

– [Presenter] Yeah, cool, so then, how do I get to Sydney? – [Google Man] There are so many ways to get to Sydney. I know of lots of them.

The ways you get there, lots of ways, they’re all pretty good actually.

I know the fastest ways, the truly efficient ones. – [Presenter] Cool, goodbye.

So that’s The Great Intelligence and I used api.ai for that.

It can do many things, actually, it’s really good. And so, this bot that I made for Web Directions, it also uses that.

And you can speak to it on some browsers; it uses this thing called annyang.

If you haven't tried that, it's a JavaScript library that lets you have always-on listening, so you can be like, "John, what's coming up next?" and it will always listen and interpret that. It only works in WebKit for the moment, from what I can tell.
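A sketch of what that looks like with annyang. The `addCommands`/`start` calls are annyang's real API (it rides on the browser's Web Speech API, hence the WebKit limitation), but the schedule data and the `:speaker` command phrasing here are made up for illustration:

```javascript
// Made-up schedule data standing in for a real conference API.
const schedule = {
  john: 'John is speaking at 9am',
  patrick: 'Patrick is speaking at 2pm',
};

// Look up a speaker by the name annyang heard.
function whoIsNext(name) {
  const key = name.trim().toLowerCase();
  return schedule[key] || "I don't know that speaker yet.";
}

// annyang only exists in the browser, so guard for it.
if (typeof annyang !== 'undefined' && annyang) {
  // ':speaker' is a named parameter annyang fills in from speech.
  annyang.addCommands({
    'when is :speaker speaking': (speaker) => {
      console.log(whoIsNext(speaker));
    },
  });
  annyang.start(); // begin listening continuously
}
```

So the page just sits there listening, and saying "when is Patrick speaking" fires the handler.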

But I think that it’s gonna slowly get there, the more we start expecting to be able to talk to our browsers, the more we might actually have that everywhere. And I’m gonna go really quick through this, ’cause we’re really close to out of time.

But you basically have an agent in api.ai, which is your artificial intelligence, you have intents, which is basically what sort of sentence types do we expect. So we have a few default ones that it comes with, we create a new one, we say, okay, coffee, what are the different ways people can ask for coffee, potential responses that we want it to say, and then you also have small talk.

So you can enable it, and you have all these questions pre-built in api.ai like how are you and so on and so on.

And you can go in and put all those in so you can kind of have your bot give a personality. There’s a better way, which is if you imported in when you first create your chatbots if anyone’s gonna create a chatbot, you go to the bottom area there, it says prebuilt agents on the side, and click small talk, and that lets you just create a chatbot with all those built in from the start.

And then you can edit them and change them a lot more. Entities or variations, so what are the parameters of your sentence? They got system, developer, user, I’m not going to go into time with all those ’cause I’m running out of time. Time.

And then you can edit them and change them a lot more. Entities are the variations, so what are the parameters of your sentence? They've got system, developer, and user entities; I'm not going to go into all of those 'cause I'm running out of time. Time.

So, what are the different ways that people will ask about an event? So there's words inside there for categories of time, which I don't have, and it goes through all these things, and these are the parameters for the different actions. And you can teach it your own ones.

So you say, I want to learn about different event types. Day one, day two, what does that mean? And then responses again.

And these responses just say I can't find anything. There's a reason for that; I'm just gonna skip a lot of these 'cause I'm totally out of time. But I've got custom responses coming from a database. So you can have your own database of things; you can hand off all those responses to a server so that it handles what it's gonna say back. And down at the bottom there, there's a thing saying fulfilment, use webhook. You create an HTTP address where you want api.ai to send all of its information about what it's just been asked, and then you return back a JSON object with all the things you want it to say.

And it does learn and interpret. So say I write "when is John speaking"; I had to go in and tell it everyone's names. I say John's a name, so when I say "When is Patrick talking?" it understands what I mean.

It doesn’t say I don’t know, you never told me a Patrick. So, it’s pretty good for that, and it can be trained with user input.

So earlier, we did testing and it turns out Zero isn't understood by this system. It started just returning the first speaker it could find. But using training, I know that's not what we expected, so instead I put his name in as one of the training bits. It still doesn't quite get it, so we make sure it has his first name as well, and then it's fine.

It knows who that is.

Thirdly, cloud services.

I'm just gonna say: look into things like IBM Watson; Bluemix is really, really good.

It can do tonnes of stuff, voice recognition and working out sentiment, whether somebody’s happy or sad when they’re speaking.

Google Cloud also has a bunch of stuff, like translation and voice recognition too. Microsoft has their own version; I don't understand it, but you can do more advanced stuff with that. That's beyond me.

Amazon has their own as well.

I’ll have links to these slides later.

If you wanna go more in depth, try TensorFlow. You might've heard of that one before; you can pretty much build everything from scratch for machine learning.

Before finishing up, you know where you should go to? Web Directions AI, that’s a good idea, you should go to that.

It’s on September 28th.

Go along if this all sounds really interesting and you just wanna explore more of the business side of things, how you could use AI.

Also, I've got an article up on Dev Diner, three ways to learn about artificial intelligence and machine learning, so you've got no excuse.

I’ve got links to articles, all the articles that I’ve mentioned will be up there saying this is where you go learn.

I've also got a tutorial coming out, an online course on how to make your own virtual assistant like this. So if you wanna get involved, you can pre-register your interest and tell me what you actually wanna do with your assistant, so that I can build that into the course as I go.

Remember the newsletter.

I'm also gonna post a whole series on SitePoint, so if you go and search, like, SitePoint, Patrick, api.ai, it will come up.

Email me, tweet at me, ask me questions. Remember, don't see this as a bar that's too high to reach, like it's too hard to make artificial intelligence.

I quickly showed you a few ways, there are so many others. And you don’t have to understand most of the background of what’s going on there.

You know kind of what’s going on, and so you can just use pre-built services. That’s my email, so send me questions, ask. If you’re more of a Twitter person, my direct messages are always open to everybody so just message me and say hi.

Thank you! (bouncy, playful techno)