(upbeat music) - Thanks for having me.

I started a company called Ikkiworks and, as you can see, it's a clinical social robot startup.

And a bit about myself.

I'm an industrial designer, so I'm sort of not across the details of AI, and I'm working with it for the first time on this project here.

So, I guess what I have to show you guys today is a run-through of our introductory video.

It shows you what we're about and what we're trying to achieve, and introduces you to the team.

So there's two other guys and myself. And then, from a really nuts-and-bolts industrial design perspective, I'll cover the journey to date, starting from what you'll see is a very crude beginning, and outline how, from that first self-funded prototype, we're now reassessing and re-evaluating where we are in order to commercialise the product, cross-referencing the viability and the technical package, and bringing AI into the fold to move forward. Let's see how we go with the video, but this is our general introductory video. (sweeping tones) (robot squeaking) (uplifting music) - [Man] Ikki is actually a very important forerunner of a really exciting digital revolution in medicine, so this is a very exciting project.

At any time, we have about 150 children on treatment. The treatment programmes run for maybe six months to two years.

We spend most of our time supporting the parents and working out what doses of chemotherapy to give the patients and all the technical aspects of it. But the children, the patients themselves seem to somehow fall through the cracks in all of that, and so we realised that Ikki could play a very valuable role in helping the patients and giving the patients some power, some empowerment.

Ikki is small, portable, very cheap, and can go home with the child and stay with that child all the way through treatment. And we will be able to download clinically-relevant information from Ikki when the child comes in with Ikki to the clinic. The Ikki team have actually delivered something quite remarkable.

Ikki has a number of pretty amazing capabilities. It reminds the family that the medication's due each day. It can actually register that it's the correct medicine, and if it's not the correct medicine, it'll flash a red light and warn the family: no, that's not right.

The other one: it warns the family if the child's getting a temperature. We need to know right away if any of our patients get a temperature, because that can be the start of a life-threatening episode of septicemia, which can be fatal.

The parent can read a story or sing a song into it, and Ikki will sing it to the child on cue.

It talks back to the child. If the child talks to Ikki, Ikki will talk back in Ikki language, which is very funny, but it's also very satisfying for the child to have a conversation with Ikki. The child can smack Ikki and it'll go "ow", and we think that's a really clever outlet for the child to let off some steam and some frustration about the situation they find themselves in.

The potential for this concept is just huge. We're very happy and proud to be clinical partners in this development, and we wish Ikki every success. - [Nurses] Bye.

- I'm gonna be back.

(nurses laughing) Well I hope I have an Ikki when I finally get wheeled into a nursing home.

I want one of these with me, please.

(audience applauds) - Yeah, so big credit to Michael Stevens at Westmead and the guys out there for their input. I think I've lost the.

I might need some technical help here.

Here we go.

I think we're back on track.

So, basically, Ikki started out as a lunchbox with some ping pong balls and shopping bags. That's credit to Clive, one of the founding guys. The partnership with Westmead, and the vision to try to improve patient outcomes for kids through a broader clinical interface, was Colin's credit. That's how we came to be where we are now, or the beginnings of it. In the early days we didn't really know what Ikki was going to be. We obviously started with a lunchbox, and this is some of the initial stuff we talked to Westmead about, trying to tease out of them what would be of value and of use to them, and to introduce them to the idea of a robot and all of these other aspects. So it kinda goes back to what someone else was saying before about having the domain experience or that knowledge, engaging with it thoroughly, and letting the innovation come out of the real-life need. So then, part of that, and the nuts and bolts now from an industrial design point of view, was trying to work out how to take that lunchbox, which had been designed under the premise that nothing could cost more than $5.

So: how to turn a $5 lunchbox into something that could go out and help a number of people. The pretty early go-tos are things like the Furby or the Tamagotchi, and basically a teddy bear. That's sort of in their domain of familiar objects, and the age range for Ikki at the moment, in a paediatric setting, is two to seven.

Initially we didn't know what it was, and we've kinda been fumbling around finding out, so stuff like infant products was on the radar as well, looking at things like materiality, cleanability, personality and a character of sorts. Then, coming up to now, or a year or two ago, we looked at how these other products that are in the ecosystem of everyone's everyday life look and feel from a design point of view, and then more relevant stuff again in that sort of setting, like baby monitors.

So, from a design point of view, there are a lot of cues around how it can look and feel that I've picked up from these product examples. Based on that, early on there were a number of sketches: hard-shell, egg-style-looking things borrowing the Tamagotchi metaphor, and then everything from embracing the plastic bags, to expanding on cactusy, tree-like things, to something like a penguin or a platypus or some other kind of unique character.

I think what we did early on was try to create a physical character that was unique and new and thoroughly embraced both the clinical setting and the robotic setting, so it wasn't literally a humanoid robot.

It wasn't a teddy bear with some stuff in it. It was a new thing.

And then this is a really early, quick, first-pass 3D model or sketch, exploring the idea of the touch points around the belly area being in a different, softer material.

So there's a physical tactility to the product, and a visual, dynamic aspect in colour and finish. From there, we jumped into the more nuts-and-bolts end of product design, in another programme, turning it into plastic shells and putting in off-the-shelf componentry such as a Raspberry Pi in this generation. So, raising the startup flag: we've built it, pulled it apart, broken it, redone it, done it again, done it differently, and we're constantly evolving and changing how it's made, but this is a bit of a snapshot of some of the steps along the way.

And then, based on the prior image, having another look at what it looks like in the flesh, in CAD renders, taking that back to Westmead, talking with them.

We did a few form studies around how big the product was, tuning it larger, smaller, skinnier, fatter, and then this is mapping out, in crude terms, the user functionality of how the different functions work. From my perspective, being at the early stage from a design point of view, this is sort of a go-to way to map out what's going on and how. Whether that turned directly into the code, I honestly don't know 100%, but this is the high-level guideline for how we want it to function, and then glossy images of how the app can work and stuff like that, bouncing that off the new guys and the app guys to move it forwards. So then, literally making it.

So laser cutting bits and pieces, 3D printing different components, casting silicons and rubbers.

It's been fairly hands-on and low-budget, exceptionally low-budget.

So, yeah, 3D printing fairly low-res stuff, bolting it together, and seeing how it works and breaks, and then taking it through Westmead, engaging Chelsea here on the right.

That's been really useful for getting feedback, and it's been kind of a weird situation, because you spend a lot of time designing and making these things, and then, when you get feedback, just the fact that it's a physical thing that moves, that you can throw around and hold, has such value in that entire context, which is kind of humbling to see, I guess.

So, yeah, moving to the next gen, in terms of what we're looking to do and the things we're doing throughout: we've been looking at social robots and the marketplace for them. At the moment we're looking to find exactly where we land, and I've kind of loosely categorised them into the toys, the static ones, the floor-moving robots, and the dynamic robots.

I think this is sort of where science fiction leads capitalism astray sometimes, and some companies don't make it and some do, and vice versa. But, yeah, we're finding where Ikki needs to fit, moving into other markets after paediatrics to become a viable business. And then, yeah, this is typical knock-down, rebuild-style stuff of working out what the core and the guts of the product are.

So, looking in the top left at the different functions of the product as a medical device, and at how AI can still engage and interact, because it's a regulated, compliance-driven industry. It's important to make sure that what we're delivering is reliable and predictable, so understanding how we can frame and integrate AI has been an interesting curve, but I think we've narrowed down the way we can do that. Moving forwards, we're looking to engage AI further, specifically to develop the language. We started off with a pretty limited set of sounds and squeaks, and that, in effect, starts to layer up our value proposition, in that it develops a unique personality for Ikki, which then drives engagement for the child. It's a new product, and it's hard to just force things on people; especially in paediatric oncology, everything's forced on the child.

They don't choose to go and get a bone marrow transplant. They have to, so by engaging the child, we can then give them a sense of empowerment and responsibility over their own care.

So I think, for us, the way we're looking at AI is to try and add value to the improved outcome at the end of the day, so that it doesn't get left on the shelf.

It becomes a real personality and something they wanna enjoy, similar to a pet or a Tamagotchi, that can live with them and go through that treatment with them as a friend.

So I think now I'm handing over to Alistair to give you a bit of an idea of what AI looks like in this context, how we're approaching this next-gen product, and the way we're going to develop it forward. So thanks for your time, and that's a bit of the background.

(audience clapping) - Afternoon, everyone.

Thanks, Seaton.

That was fantastic.

So, just to give you a little bit of background on ourselves: I'm one of the founders of Remi AI, an artificial intelligence firm established about five years ago in Sydney, and over in San Fran as well, with a heavy research focus and a strong focus on reinforcement learning as a sub-branch or sub-domain of artificial intelligence. It was really exciting, probably about a year ago, to have Seaton drop me an email saying, hey, would you like to catch up and chat about this project? To start with, it's probably one of the most meaningful projects we've had in our company history. I've got a team of about 15, and we had a meeting only about two weeks ago where the entire team raised the fact that each of them wanted to work on it, and could we establish a roster.

(audience laughing) Yeah, if we give everyone a week each, it carries us a quarter of the way through next year. Not to go too deep into the research, but just to give you a quick overview: it really is a researcher's dream, because it's a project dealing with real-world problems and really trying to make a difference.

It's also dealing with really interesting data sets and inputs that are being collected from the child and the user, and it's also really pushing into some cool AI research around language and how to engage.

So, Ikki and Seaton came to us with the first brief of developing an onboard language AI: a language that is unique to Ikki and not anything like Siri or Cortana, no human-recognisable language, but something that still becomes very familiar and very engaging for the child. And there are some very interesting constraints around the project, too.

The standard NLP functionality and the neural networks trained up by the likes of Google are not very good at recognising children and children's language, so we have basically a whole new data set that we're working with, a whole lot of really interesting inputs, and a whole lot of interesting noises that are creating very different models.

I don't know how many of you are familiar with, say, the likes of Appen, which is an Australian company doing quite well. They supply the speech data sets to the big guys like Google, Facebook and the others. Appen literally pay people: they give them a script, ask them to go drive around the countryside of Ireland or the middle of the Outback in Australia and read the script, and then they have a transcript of what was said and the voice recording. They sell that on to the likes of Google, Microsoft and Amazon, who use it to train up their neural networks. But Appen and the other competitors in that space are not doing anything with children, (laughing) so it is a very new and interesting domain. The other thing that's quite complex is that you've got children's privacy to deal with here, so everything really has to be onboard processing.

We can't be pushing it off to a server.

We can't be saving it.

It needs to come in, let the AI react to it, and then it's gone.
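To make that constraint concrete, here's a minimal Python sketch of the process-in-memory-then-discard pattern being described; all the function names are hypothetical stand-ins rather than Ikki's actual code.

```python
# Hypothetical sketch of the "process on board, never persist" constraint.
# extract_features and choose_response are illustrative names, not Ikki's real code.
import numpy as np

def extract_features(audio: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Reduce raw audio to a small feature vector (here: simple energy bands)."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, 8)
    return np.array([band.mean() for band in bands])

def choose_response(features: np.ndarray) -> str:
    """Stand-in for the onboard model: map features to an Ikki-style squeak."""
    return "squeak-high" if features.mean() > 0.1 else "squeak-low"

def handle_utterance(audio: np.ndarray) -> str:
    """Process a single utterance entirely in memory and return a response.

    Nothing is written to disk and nothing leaves the device; the raw audio
    goes out of scope as soon as this function returns.
    """
    features = extract_features(audio)
    return choose_response(features)

# Simulated one-second microphone buffer standing in for a real capture.
audio_buffer = np.random.randn(16000) * 0.05
print(handle_utterance(audio_buffer))
```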

So, some of the signals we're pulling into our AI model come through the microphone. As Seaton said, there have been a couple of different iterations and a couple of different microphones, but the current version we're working with is a phone, 'cause it just has a lot of built-in features. But yeah, we're utilising speech and all the other noises a child might make.

We're also using the accelerometer and gyroscope, so we can understand if Ikki's being knocked around or held upside down, and pairing that with speech has been a really interesting process so far. We also pick up hard impacts on Ikki, still through the same accelerometer and gyroscope, so we can detect a hard whack, and we've been having a bit of fun in that space, letting the AI try different responses. I think taking that into production is going to be really interesting, in terms of identifying when the child is actually in distress and upset; some children, I think, might whack Ikki out of a sense of frustration, trying to calm themselves down. Then we also have the child's temperature and some of the other readouts as well. So it's been a very interesting learning curve for us, getting into the medical device space and realising all the different approvals, which we are very much steering clear of, because it's an AI system, and you really don't want AI governing the medical device side of things.

You want that to be a very much hard-coded solution, but we can still take the information derived from the medical device side of Ikki and pull it into our AI, to help us encourage certain behaviours within the AI.
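As a rough illustration of that split, under assumed thresholds and with made-up function names: the safety-critical rule stays as plain deterministic code, and only a derived, non-diagnostic signal is handed to the AI as context.

```python
# Hypothetical sketch: keep the medical-device logic hard coded and deterministic,
# and only pass derived context (not raw clinical decisions) into the AI layer.
FEVER_THRESHOLD_C = 38.0   # assumed threshold, for illustration only

def check_temperature(temp_c: float) -> bool:
    """Hard-coded medical rule: alert the family if the reading looks like a fever.
    No learned component is involved in this decision."""
    return temp_c >= FEVER_THRESHOLD_C

def build_ai_context(temp_c: float, accel_magnitude: float) -> dict:
    """Derived, non-clinical signals the behaviour AI may use to shape responses,
    e.g. be gentler on a feverish day, or notice rough handling."""
    return {
        "fever_today": check_temperature(temp_c),
        "rough_handling": accel_magnitude > 3.0,   # assumed g-force threshold
    }

reading = 38.4
if check_temperature(reading):
    print("ALERT: temperature", reading, "- notify family (deterministic path)")
print("AI context:", build_ai_context(reading, accel_magnitude=1.2))
```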

And so, on the general approach: we're very much a research-focused company, and we do a lot of what is called reinforcement learning.

Reinforcement learning essentially takes its inspiration from the dopamine system, the mammalian dopamine system, the human dopamine system. It's about motivating an agent through rewards, and then letting that agent, or that mammal or human, navigate through an environment, through their space, essentially trying to increase the total reward and avoid punishment. It's sort of like the idea of getting up out of bed and going and having breakfast: hunger motivating you to go and eat.
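For anyone who hasn't met reinforcement learning before, here's a minimal, generic sketch of that reward-driven loop; it's plain tabular Q-learning on a toy corridor, not anything Ikki-specific.

```python
# Minimal tabular Q-learning: an agent picks actions, the environment hands back
# rewards, and the agent learns to prefer actions that increase total reward.
import random

N_STATES, N_ACTIONS = 5, 2          # tiny corridor: move left (0) or right (1)
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: +1 reward for reaching the right-hand end of the corridor."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print("Learned preference for 'move right' in each state:",
      [round(q[s][1] - q[s][0], 2) for s in range(N_STATES)])
```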

And so it's been really exciting, from a research perspective, to be applying a whole lot of reinforcement learning in this language module. A lot of the current production-level AIs like Siri and Cortana and Alexa have aspects of reinforcement learning, but they don't have reinforcement learning governing the system. It's really, really cool, because we're dealing with a much less developed language, so we essentially have the freedom to use RL and let the reinforcement learner make mistakes. Adults are not so forgiving when Siri responds in a really stupid way.

And so we're taking, essentially, noise, or words where we can identify them, as inputs, and feeding that into a couple of different models, actually, including a sequence-to-sequence model, which then outputs what it thinks is the most likely and most suitable response given that input; I'll show a tiny sketch of that idea in a moment. And just to go a little bit further into reinforcement learning: it's really about putting the agent in an environment that it initially knows nothing about. If you look at the guys up on the top left, up right there.

There's a little lidar, which is the red line you'll occasionally see, and the reinforcement learner is motivated, essentially, to move from left to right. In this context, the reinforcement learner is also allowed to modify its own legs, its lidar, and the size of its body, in order to help it find an optimal design to get across obstacles. You can see, on the top left, that little agent is the standard agent, and he's doing really well. He's already learned how to walk.

Initially he had no idea how to walk, and he's now reasonably capable. Then we let him modify himself, and the guy on the right has basically granted himself one giant leg.

We don't understand why, but it's very well suited to the obstacles he's facing, and he's able to clear a lot of the taller stuff with a lot less effort. And then the bottom guy, we reward him for using the least amount of energy. All the obstacles have been cleared out of the course, but you can see he's designed himself tiny legs, and he's very efficiently walking across that space. But what's really, really cool about reinforcement learning is that it can be applied to so many different problems. It can be applied to web design and optimising web page layout.

It can be applied to trading cryptocurrencies. It can be applied to making these silly dudes, and it can be applied to language development. It's still very early days for language development within reinforcement learning, but again, I think that's why this Ikki project is so exciting: it's a very, very elementary language we're dealing with as input, and it's providing very encouraging results in a very encouraging research domain.
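And since I promised a sketch of that noise-in, response-out mapping: the toy below is a deliberately tiny stand-in, using a hand-written lookup table in place of the real learned sequence-to-sequence model, just to show the shape of the interface. The sound tokens and syllables are invented for illustration.

```python
# Toy stand-in for the noise-in / response-out mapping. The real system uses
# learned models (including a sequence-to-sequence model); here a hand-rolled
# table plays that role so the input/output shape is easy to see.
import random

# Hypothetical vocabulary of detected sound tokens and Ikki syllables.
RESPONSES = {
    ("babble", "babble"): ["iki", "iki", "boo"],
    ("shout",):           ["wee", "oh"],
    ("giggle",):          ["iki", "hee", "hee"],
}

def respond(sound_tokens: list) -> list:
    """Map a sequence of detected sounds to a sequence of Ikki syllables.
    Unknown inputs fall back to a random short squeak."""
    return RESPONSES.get(tuple(sound_tokens),
                         random.choices(["iki", "boo", "wee"], k=2))

print(respond(["babble", "babble"]))   # -> ['iki', 'iki', 'boo']
print(respond(["sneeze"]))             # -> fallback squeak
```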

So, within this context, the three motivating factors we've been focusing on, and these won't be the only three, but they're a great starting point, are around engaging, encouraging and motivating. The first is rewarding the AI for engaging with the child. Having a multi-stage conversation is still a very complex AI challenge, even for the likes of Amazon, but this is about rewarding the AI if it responds to the child: the child makes a noise, Ikki responds with a certain noise, and then the child responds again, and that second response received is rewarded, especially if there's some physical movement as well. If we get another one and another one, we keep increasing the reward. It's very early days, and this has only been tested with adults, as in, we're testing with ourselves, but we find it very engaging when Ikki makes quite different tonal changes at each conversational step, and that's been a really fascinating thing to look at.
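A rough sketch of how that escalating engagement reward could be scored over one exchange, with made-up weights; the real reward shaping will presumably be more subtle than this.

```python
# Hypothetical scoring of a back-and-forth exchange: each time the child responds
# again after Ikki does, the reward grows, with a small bonus if the child also
# moved Ikki around. The numbers are illustrative only.
def engagement_reward(turns: list) -> float:
    """`turns` is the sequence of child responses that followed Ikki's responses.
    Each dict notes whether there was accompanying physical movement."""
    total = 0.0
    for i, turn in enumerate(turns):
        total += 1.0 + 0.5 * i                 # later turns in the exchange are worth more
        if turn.get("movement"):
            total += 0.25                      # small bonus for physical play
    return total

exchange = [
    {"movement": False},   # child answered Ikki once
    {"movement": True},    # answered again, while waving Ikki about
    {"movement": False},   # and a third time
]
print(engagement_reward(exchange))   # 1.0 + 1.75 + 2.0 = 4.75
```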

The second one is being unobtrusive, and this is really complex; we have not solved this in the slightest.

It came out of our lead researcher over in the U.S., who's got a two-year-old child. The child was given a really obnoxious train toy by a friend, and the train toy is essentially programmed, hard coded, to attract as much attention as possible: if it senses that the child has been playing with it and then leaves it alone, it starts calling out at 15-second intervals, like, play with me, play with me, play with me. Our lead researcher got so frustrated he's thrown the toy out, because his child is still learning word associations. The child was playing with the train toy about four weeks ago and stopped, and Shahrad, the lead researcher, said, could you please turn that off, and the child's only association with turning off was the light switch, so he went over and turned off the lights, and Shahrad was left in the dark with this stupid train toy still going. We've been trying to take that on board: enough is enough. You do wanna encourage engagement, but you don't wanna be annoying, and I think we've got a long way to go there. And then the last one is rewarding around developing regular expressions. This is more Pavlovian in concept, but it's the idea that if a child is starting to make really regular noises or really regular statements to Ikki that seem to be quite Ikki-focused, let's develop a call-and-response. That's been very successful so far, and it's a really interesting domain.
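Here's a very small sketch of that call-and-response idea, assuming the child's utterances can be bucketed into rough sound labels: once a label has recurred often enough, Ikki commits to a fixed reply for it. The threshold and syllables are invented.

```python
# Hypothetical call-and-response learner: if the child keeps making the same
# recognisable sound at Ikki, fix a consistent reply to it (Pavlovian pairing).
from collections import Counter
import random

SYLLABLES = ["iki", "boo", "wee", "hee"]
REPEAT_THRESHOLD = 3          # assumed: how often a sound must recur first

counts = Counter()
fixed_replies = {}

def hear(sound_label: str) -> list:
    """Register one child utterance and return Ikki's reply syllables."""
    counts[sound_label] += 1
    if sound_label not in fixed_replies and counts[sound_label] >= REPEAT_THRESHOLD:
        # Commit to a stable reply so the pairing becomes recognisable.
        fixed_replies[sound_label] = random.sample(SYLLABLES, k=2)
    return fixed_replies.get(sound_label, [random.choice(SYLLABLES)])

for _ in range(4):
    print(hear("ikki-call"))   # after the third call, the reply stops changing
```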

And just to wrap things up, one of the early learnings we've been picking up is that timing is a fascinating problem in language development. In a lot of the standard conversational systems there's basically a long pause, and all of that is programmed in, whereas with the RL system we're letting the AI decide when it should respond. That's one of the things that's self-governing, and it's proving to be quite an interesting challenge: it's essentially a whole AI determining when the best time to respond is, and then seeing how timing affects response rates from the user. The other learning, which is something we've done quite a bit of research into in the past, across a couple of research projects, is that language within an RL context, sorry, a reinforcement learning context, is a continually evolving, continually changing experience.
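One way to picture that timing question, as a hedged sketch: treat the pause before replying as the action in a small bandit, and reward pauses after which the child actually responds. The delays, probabilities and epsilon-greedy choice here are all invented; the real self-governing system will be richer.

```python
# Toy epsilon-greedy bandit over response delays: the learner's "action" is how
# long to wait before replying, and it is rewarded when the child responds.
import random

DELAYS = [0.5, 1.0, 2.0, 4.0]          # candidate pauses in seconds (assumed)
value = {d: 0.0 for d in DELAYS}       # running estimate of each delay's payoff
count = {d: 0 for d in DELAYS}

def child_responds(delay: float) -> bool:
    """Stand-in for the real world: pretend the child likes roughly 1-second pauses."""
    return random.random() < max(0.1, 1.0 - abs(delay - 1.0) * 0.4)

for _ in range(2000):
    # Epsilon-greedy choice of pause length.
    if random.random() < 0.1:
        delay = random.choice(DELAYS)
    else:
        delay = max(DELAYS, key=lambda d: value[d])
    reward = 1.0 if child_responds(delay) else 0.0
    count[delay] += 1
    value[delay] += (reward - value[delay]) / count[delay]   # incremental mean

print({d: round(value[d], 2) for d in DELAYS})   # the ~1.0 s pause should score highest
```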

Coming back to that evolving-language point: one of our earlier projects, probably about three and a bit years ago, was around putting AI agents in an environment where there was a food source and there were also threats. We were essentially giving a collective reward: if one of the friends was attacked by a predator, all the agents would be punished, and if everyone found food, then everyone would get a little bit more reward. Mainly, if an agent found food by itself, it got the most reward for that, just like we do, but bringing in a more social, collective reward made for a very interesting research project. We gave the agents the ability to move around the environment, the ability to eat, and the ability to run, but we also gave them a wide range of syllables, and they very quickly developed their own language system to communicate with each other. Bringing exactly the same concepts across, this is where I think Ikki will end up being quite heavily personalised: children and these reinforcement learning systems, when engaging with each other, will really start to develop their own unique personality and their own unique form of communication, and that, I think, is gonna be quite special to watch as it goes along.
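For what it's worth, a rough sketch of the collective reward shaping from that earlier project, with invented weights: an agent earns most for finding its own food, everyone shares a small bonus when all of them eat, and everyone is docked when a friend gets hurt.

```python
# Hypothetical collective reward for one timestep, echoing the earlier project:
# individual food counts most, shared success adds a bonus, a hurt friend costs everyone.
def collective_reward(agent: str, found_food: set, hurt: set, all_agents: set) -> float:
    reward = 0.0
    if agent in found_food:
        reward += 1.0                      # biggest payoff: found food yourself
    if found_food == all_agents:
        reward += 0.3                      # small shared bonus: everyone ate
    reward -= 0.5 * len(hurt)              # shared punishment per hurt friend
    return reward

agents = {"a1", "a2", "a3"}
print(collective_reward("a1", found_food={"a1", "a2", "a3"}, hurt=set(), all_agents=agents))  # 1.3
print(collective_reward("a2", found_food={"a1"}, hurt={"a3"}, all_agents=agents))             # -0.5
```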

So, yeah, massive thanks for having us, Jon, and to the Web Directions team, and.

- [Man] That's awesome.

A round of applause, please.

(audience clapping) (upbeat music)