(energetic music) (applause) - Good morning everyone.

My talk, as was just said, is a bit higher level. We were gonna have Amy in this morning, who was gonna talk about conversational UI and the real nitty-gritty of that interaction, and my talk is sort of a layer up from that, to the broader experience of how we interact with those intelligent agents.

So, the things that I wanna cover today, I'm gonna whip through them, but because, unfortunately, Amy's sick, I do have a bit of extra time, so I've stuck a few video clips in there as well.

But I wanna talk about the inspiration for these types of devices.

I wanna talk about the initiation that we all went through at some point of potentially interacting with these devices. The interaction that we do have and, as I said, the broader user experience of the interactions that we have with these things. The identity of these agents themselves, and then the impact that they're starting to have, looking more at use cases or anecdotes from a broader audience than we might usually expect. And then the imminent future, what's actually happening in the short term that should be of interest to us.

So, the inspiration.

You know, for a long time in the history of science fiction, we've looked at how people talk to computers. So this is the Star Trek computer from the 1960s, for those of you who might not be Trekkies.

So, visualisation of conversation and screen interaction. Of course, we all know HAL from 2001: A Space Odyssey.

And in the '80s, we sort of started to put the pieces together with actual computers.

It wasn't necessarily science fiction, but we were still taking that technology and looking at its opportunities.

(classical music) (computer beeps) - [Computer] You have three messages.

Your graduate research team in Guatemala just checking in, Robert Jordan, a second semester junior requesting a second extension on his term paper and your mother reminding you about your fa-- - Surprise birthday party next Sunday.

- [Computer] Today, you have a faculty lunch at 12 o'clock, you need to take Kathy to the airport by 2:00, you have a lecture at 4:15 on deforestation in the Amazon rainforest.

- Right.

Let me see the lecture notes from last semester. No, that's not enough.

I need to review more recent literature.

Pull up all the new articles I haven't read. - Now, we won't watch all of that in detail, but that was a concept video that Apple put out in the '80s, called the Knowledge Navigator, looking at this fiction of the future.

And what we do is we create these design fictions, and we've done some of these for Elasien and for Optus and so on, where we look at what a potential future could be like, and we extend our knowledge of the ways that we want to interact, or expect that we'll want to interact.

So you can see that that was quite some time ago, but that speech and interacting with these agents was something that we've sort of wanted to do for a while. And then, of course, we've gone down that science fiction route in more recent times as well. The movie Her.

Who's seen Her? I don't think I need to explain it to this audience, yep, great. But, essentially, that earpiece in Joaquin Phoenix's ear is his agent, and he ends up falling in love with his intelligent agent and then finds out that his intelligent agent is polyamorous.

So, that was quite interesting, and then there was Westworld as well.

- Results are, well, confusing.

- Tell me, what happened to your program? - When we are born, we cry that we are come to this great stage of fools.

- That is enough.

Tell me, do you have access to your previous configuration? - Yes.

- Access that please.

What is your name? - Mr. Peter Abernathy.

- Mr. Abernathy-- - Maybe we need something like this for Caroline's systems so we can interrogate them more easily.

But this is the type of thing where we have, in that case, what we might call a social robot, essentially, and if you know Westworld there are all sorts of aspects to that, but you can interrogate the underlying system of these intelligent agents and go back in time and see what drives them. And, in fact, during the show they also use these tablet devices where they can look at the conversational flow and how that artificial intelligence is interacting with them, and they can actually change the metrics of that individual's personality to create different experiences.

So, that's sort of driven a lot of stuff, and for most of us that has remained the realm of science fiction. Then, quite some time ago now, back in 2011, Siri was the one that first really brought it to the mainstream in terms of people interacting with a device using conversation to achieve tasks, as an agent acting on behalf of that individual, getting the device to do things.

So that was back in October 2011, and since then, and I've structured this so that, for those of you who might not recognise them, we've got Siri at the top, then Google Assistant, Alexa and Cortana, and I'll talk a bit more about those shortly.

The iPhone 4S, and then the Google Nexus came out with Google Now, then the iPad introduced Siri as well, then CarPlay, and then Android Wear and the third-party watches, then Cortana. So you can see, back when Windows Mobile was a thing, that the three major mobile platforms at the time, I'm not showing BlackBerry here, were all starting to play with this in the space of a couple of years.

Then, halfway through the timeline, really, we got the Amazon Echo, and this is where things really started to ratchet up and people integrated them into their home in a different way. And then Android Auto, the Apple Watch, of course with Siri, Apple TV with the Siri remote, and then the Dot, and we start to get different kinds of devices. So HoloLens, the Microsoft augmented reality experience, has a way of interacting with Cortana so that you can talk to it at the same time as you might be doing gestures or interacting with things within an AR experience.

Then Google updated to Google Assistant, which enabled a lot more functionality and so on, and brought out Google Home. Then we had the AirPods, then we had all these devices that started to have cameras in them, the Look, the Show and the Spot from Amazon, and the Pixel Buds that could translate, with Google Assistant built in, similar to the AirPods.

Of course, the Mini, and then, I was trying to steer clear of third-party ones because if you include third party it just explodes, but I did want to show Cortana in the home as well, through a smart speaker from Harman Kardon. And then the Google Home Max, the HomePod and Google's version of the Show.

And then, of course, for those of you who watch this space, last month Amazon announced 12 products or updates to their range of devices, right through to a wall clock that has Alexa built in, and a microwave, so you can tell it to heat up water for 20 seconds, or you can reorder your popcorn through it using an Amazon Dash-style integration, and so on. So, they've really exploded, trying to be in every place with Alexa.

The interesting thing is that they're not a first-party client on any of the mobile devices or watches that people have with them most of the time, though they are in the car now with Android Auto. And what's of real interest to us is the incorporation of these devices into a whole experience for that customer. So Google, Apple and Amazon all now have these ecosystems of devices where people should be able to interact with them almost anywhere they are: in your home, on your wrist, on your phone, in your car, et cetera.

So, all of these essentially have a couple of things in common.

Of course there's speech input, a microphone. That then gets processed by the computer, which runs an operating system, some of which we've mentioned, and of course it needs a network as well, to get new information and to update those information services, and then it outputs speech as well on the devices that we've talked about.
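To make that common pipeline a little more concrete, here is a minimal conceptual sketch in Swift. It is not any vendor's real API, just the shape of the flow: microphone audio in, interpretation by the operating system and its network-backed services, and synthesised speech out.

```swift
// Conceptual sketch only (not any vendor's real API) of the shared pipeline:
// microphone -> on-device processing by the OS -> network-backed services -> spoken output.
import Foundation

protocol SpeechInput  { func capture() -> Data }                  // microphone audio
protocol AgentBackend { func interpret(_ audio: Data) -> String } // OS plus cloud services over the network
protocol SpeechOutput { func speak(_ response: String) }          // synthesised speech

struct IntelligentAgentPipeline {
    let input: SpeechInput
    let backend: AgentBackend
    let output: SpeechOutput

    func handleUtterance() {
        let audio = input.capture()
        let response = backend.interpret(audio)  // usually a network round trip
        output.speak(response)
    }
}
```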

The ones that we've got here, Siri and the rest, are the four main ones that we see out there in the market and, again, they're from the larger companies.

There is a whole raft of third-party or additional intelligent agents from other companies, and we'll touch on some of them a bit later. Who uses what? So, and we'll frame it as the last three months, who has used Siri in the last three months? A couple of hands, so maybe a third of the audience. Google Assistant? Whoa, there you go.

That's probably 3/4, 4/5 of the audience.

Alexa? Who's got Alexa devices, yeah? Probably about a third.

And Cortana? Any Microsoft users? No? (audience laughs) One thing I didn't put on the device slide was computers, and Cortana is in Windows 10, so it comes more from a computing platform. Its strength is there.

So, one of the first take-aways is, I suppose: really understand that intelligent agent ecosystem, what is out there, and what your customers, or the people interacting with your services, are likely to be using across their whole lives, and we'll talk about multi-device shortly.

Then there's the interaction. I won't go into conversational UI in particular, but we talked about the microphone.

Well, now we can also use the camera on some devices. We can also use touch or gestures to initiate either interaction or to respond to requests from the device.

And, of course, on the output side, we also have screens that display the information that we might request, and also things like haptics that give us feedback as we interact with these systems.

There's a whole bunch of aspects that would be a whole talk in itself around the actual interaction.

One of the big issues with these types of systems is how do people know that they're there? How do they know what's available and which ones are available? So, what can start that interaction with the assistant? And then, as I mentioned before, for this initiation of the interaction it is, in my view, key to be a first-party client.

I don't wanna open my mobile then go and find an app then start that app and then be able to start that interaction.

I usually wanna press something or just say something to the device, and a lot of platforms, well, iOS in particular, won't allow a third-party service to act as a first-party client in that way. And then there's engagement, so the syntax, which is quite different. When you are conversing with these systems, what you tend to find is that Alexa skills are a bit more rigid, a bit more command-line-like in the way you need to deliver the request, whereas the Assistant and Siri can be a bit more flexible. And then the other question that we've got to ask, moving forward, is: do we expect that our intelligent agents will act on our behalf? Will I ask Siri what the latest Optus prepaid plan is, or do I wanna have a conversation with Optus as its own brand directly? And so framing that in terms of how the customer ends up experiencing it.
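As a rough illustration of that difference in syntax, here is a small conceptual Swift sketch, not Amazon's or Apple's actual parsing: a rigid, skill-style grammar that expects an exact "ask ... for ..." shape, versus a looser keyword match that tolerates more natural phrasing. The Optus plan example is just a stand-in.

```swift
import Foundation

// Rigid, skill-style grammar: the utterance must follow "ask <skill name> for <request>".
func matchesRigidSkillGrammar(_ utterance: String) -> Bool {
    let pattern = #"^ask [a-z ]+ for [a-z ]+$"#
    return utterance.lowercased().range(of: pattern, options: .regularExpression) != nil
}

// Flexible matching: any phrasing containing the key concepts is accepted.
func matchesFlexiblePlanIntent(_ utterance: String) -> Bool {
    let keywords = ["optus", "prepaid", "plan"]
    let lowered = utterance.lowercased()
    return keywords.allSatisfy { lowered.contains($0) }
}

// matchesRigidSkillGrammar("ask optus prepaid for the latest plan")   // true: fits the command shape
// matchesRigidSkillGrammar("what's the latest optus prepaid plan")    // false: wrong shape
// matchesFlexiblePlanIntent("what's the latest optus prepaid plan?")  // true: keywords are enough
```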

And then continued conversation is an aspect where I don't need to keep reinitiating that experience. With Google Assistant, for example, you can set it so that you might ask a question like, "What's the weather today?" and then for around eight seconds it will remain active, so you don't have to say, "Okay, Google," again. You can go straight into your next query, so it continues the conversation, and it uses the previous information as a reference so you can build on questions.

So, you could say, "Who is the Prime Minister of Australia?" and then the second question could be, "And when were they born?" so you don't need to reiterate, "When was the Australian Prime Minister born?" And then there's a whole bunch of issues that we experience with these conversational UIs as well, around error correction: often it won't hear you right, or midway through the sentence you'll realise that you haven't phrased it in a way the system can accommodate, so you get this error state and then you go back through.
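That follow-up example only works because the agent keeps some context between turns. Here is a minimal conceptual sketch, again not Google's actual implementation, where the subject of the previous answer is stored so that "they" in the next query can be resolved; the answer strings are placeholders.

```swift
import Foundation

struct ConversationContext {
    var lastSubject: String?   // the entity the previous answer was about
}

func answer(_ query: String, context: inout ConversationContext) -> String {
    let q = query.lowercased()
    if q.contains("prime minister of australia") {
        let subject = "the current Prime Minister of Australia"  // placeholder; a real agent queries a knowledge service
        context.lastSubject = subject
        return "Here's what I found about \(subject)."
    }
    if q.contains("when were they born"), let subject = context.lastSubject {
        return "Looking up the date of birth for \(subject)."    // "they" resolved via the stored context
    }
    return "Sorry, I didn't catch that."
}

// var ctx = ConversationContext()
// answer("Who is the Prime Minister of Australia?", context: &ctx)
// answer("And when were they born?", context: &ctx)
```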

For those of you who might use Siri, you can press and hold to engage, but you can also just start talking, so, obviously, "Hey," and I won't say the key word, "do this for me," and in some cases it will respond with a beep, and some people will wait for the beep, but you don't need to.

You can just ask that question straight through. There are also devices out there now like the Sonos, which is obviously a speaker system, that run multiple of these platforms: they've announced that they currently support Alexa and that they will be, by the end of the year, apparently, supporting Google Assistant as well.

So, in that sort of situation, do you actually have two accounts? Are you engaging with different music sets via different services in different ways, and do you have to frame your query in different ways when you're interacting with these systems? So, while these systems might seem easy, in terms of I just have to talk to it, they can actually add cognitive load in terms of the framing of it, articulating which service I wanna use and so on. And then, of course, Facebook the other day announced Portal, which is their new assistant device for the home and has a camera on it.

They've gone to great lengths to state that you can turn off the camera and the audio feed, and in fact, they give you a little plastic cover so you can put it over the camera.

But they've also introduced some really interesting functionality in that the camera can follow you around the room, so it knows, using computer vision, where you are in the room and can adjust accordingly. If you turn on a blender, in their ad, it will isolate that noise and just focus on your voice as well, so they're getting smarter in the way that they can identify who the person is that they're conversing with, and if you're having a video conference call, then following you around the kitchen or around the home is really important.

But, of course, the question around the internet is what else can be tracked and do we trust Facebook with this information? Is there anyone from Facebook here? Okay.

So, my second take-away is really think about how the audience wants to interact with your service, through what types of devices, and what capabilities those devices have. Here are the phrases that you use to initiate with the devices.

Interestingly, Google have two and...

If you took that one out, it's really quite simple. Most of them are "Hey" whatever, with the exception of Alexa which I'll come back to shortly. OK Google. OK Google.

It's just quite a complex thing to say, and these are the little nuances that we have to think about.

Even, I've noticed, having multiple of these set up, it's much easier to say, "Alexa," than it is to say, "Hey, Siri," right? It's just in the number of words.

It's really straightforward.

And Alexa was actually chosen as the name because the X provides a hard consonant. And there are other types of interactions, so one of the ones that Amazon have just introduced is the whisper, which is a great idea, right? If you're doing things like turning off the lights. We have Alexa in the bedroom and I use it to turn off the lights, so when I'm coming to bed, my wife might be asleep. I don't wanna go, "Hey, Alexa, turn off the lights." I can whisper to it and it will whisper back. In the case of the lights, it doesn't really need to respond because, obviously, the lights are off.

And the interesting thing to me, and I'd never thought about whispering like this, is that whispering is talking without using your vocal cords. Play around with it yourself, but I'd never realised it: when you whisper, you don't use your vocal cords, but when you talk, you do.

And then you have the watch, the Apple Watch. And this is really interesting because there are actually now three ways to interact with the one device.

So, I can push the button on the side and hold it, just to initiate that and then I can release it and say the command. I can say, "Hey, Siri," with it in a raised mode. And recently, with the latest watch, you can actually just raise it and speak, and you don't even need to say, "Hey, Siri." So we're starting to get faster at that interaction. Unfortunately, I've found that interaction a bit problematic.

It doesn't always work.

There are a lot of issues around it. And I've focused in on the Apple ecosystem just to highlight some differences between the devices and the ways that you interact with them.

So there we've got, obviously, the AirPods, the Watch, the Apple TV remote, iPhone, iPad, Mac, CarPlay and the HomePod.

Now, all of them except the HomePod have a button that you can press to initiate the interaction.

With the AirPods, it's a double tap, sorry, a double tap on the right side. With the Watch, it's a bit of a press and hold. With the remote, you actually have to press and hold the whole time you're giving the command, so you hold the button down throughout.

With the iPhone, the iPad and CarPlay it's simply a tap, and the same with the Mac, but the HomePod you don't interact with via touch. And, interestingly, one of the other impacts of a multi-device ecosystem is the relationship between devices and how they allocate responsibility.

So when I'm at home, most of our devices are Apple devices, so I might say, "Hey, Siri," when I'm there in the kitchen, and my iPhone will light up, and my iPad will light up, and my HomePod will light up, and they all have a little conversation between them, they sort it out, and the command goes to the HomePod.

So the HomePod essentially takes over that interaction. The problem is that the HomePod is really limited in its functionality at the moment, so it can't do a bunch of stuff that my iPhone and my iPad can do. So you get this situation where I want to achieve a task, I say the right thing, but the task gets allocated to the wrong device. I expect that's something that will resolve over time, but it's certainly disconcerting at the moment. Also, sticking with Apple for a moment, they've just come out with Siri Shortcuts in iOS 12, so there are ways that you can set up, essentially, voice macros.

So, I say, "Hey, Siri, bus home," and it comes up immediately with my route home and the next bus.

You can set up a whole range of these, where you talk to it and it will go and execute a number of commands. So whether that's turning the lights on, playing the morning news, starting your music, whatever it might be, you can sequence these interactions.

So, not only does it show it, it says it. - [Siri] The next 393 bus to Little Bay departs in 30 minutes, followed by services approximately every 22 minutes.

- So I can do that through my AirPods.

I don't need a screen.

I can just listen to that if I'm walking down to the bus, or I can see it on my watch, whatever it might be. There are a lot of issues with Shortcuts at the moment. It's a very complex interface from a consumer perspective, and the functionality is quite limited as people start to integrate it, but it's gonna be interesting to watch.
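For a sense of what the integration looks like in practice, here is a minimal sketch of the NSUserActivity route to Siri Shortcuts in iOS 12. The "bus home" feature, identifiers and phrases are hypothetical; apps can also define custom intents for richer interactions, including spoken responses like the bus example above.

```swift
import UIKit
import Intents

// Minimal sketch: exposing a hypothetical "bus home" feature as a Siri Shortcut
// by donating an NSUserActivity (iOS 12+). Identifiers and titles are made up.
func makeBusHomeActivity() -> NSUserActivity {
    let activity = NSUserActivity(activityType: "com.example.bus-home")
    activity.title = "Bus home"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true           // lets the system suggest it as a shortcut
    activity.suggestedInvocationPhrase = "Bus home"   // suggested phrase when the user records the shortcut
    activity.persistentIdentifier = NSUserActivityPersistentIdentifier("bus-home")
    return activity
}

// In the view controller that shows the route, assign the activity so the system donates it:
// userActivity = makeBusHomeActivity()
```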

So, take-away three: can you integrate Siri Shortcuts or similar services into yours? And then there's the identity of these agents.

So, you know, there's a whole range of aspects to this, and we'll talk about gender and physicality, but even things like culture and Power Distance Theory, all of Hofstede's work on how people relate to each other in different cultures.

So whether it's a more individualistic culture or a more collectivist culture, what is the language that they use when they talk to these systems? A colleague of mine has been doing some work with Indigenous Australians out west, where there's a lot of use of a shared device, so how does an intelligent agent cope in those sorts of situations? Is it identifying an individual and responding to particular things, in which case you need to set up multiple services and so on?

Accent can have a huge impact on our experience. It was really fascinating: we bought the Amazon Echo quite early from the US and brought it over, and then at one point Amazon said, "Hey, you have to move to an Australian account." And just that switch in the voice, from what was an American accent, well, a British accent, actually, 'cause they're available in Britain, so I'd changed it to a British accent, to that Australian accent, really threw my son for a while when he was asking to listen to music and things. And the services behind them matter as well, you know: what services are enabled currently, what functionality is enabled, can I listen to all the music I need to, can I look at my calendar, and, obviously, the privacy aspect.

And one of the things I thought of from Caroline's talk this morning, which Cortana says it does, and I haven't investigated it in depth, is that it has a Notebook feature.

And so the Notebook feature in Cortana actually tells you about the information that it has captured about you, and you can go in and edit that and change it if you feel it's appropriate.

So at least you're a bit more aware of some of the information that the agent is capturing on your behalf.

These are the logos for all of them.

Interesting, they're all round.

There are very different levels of animation, which sort of bring them to life and give them their own little personality. The names: Siri came out of an organisation called SRI International, before Apple bought them. Google Assistant I think is pretty obvious and, in fact, it's kind of a bit dry and clinical if we compare it to the others.

Alexa, as I mentioned, has that hard consonant, so it's much more likely to be recognised by the device, and that's why you can just say "Alexa" versus a "Hey" whatever.

And Cortana, which we'll come to shortly.

Interestingly, only Siri offers a male and a female version. All the rest are female only, and Microsoft have talked a bit about that. There's been a bunch of research, for instance at Indiana University in 2008, which found that in human-computer interactions, a female voice is perceived as warmer.

And then Stanford did a bunch of research quite early on, in the late '90s, finding that a female voice is preferred when hearing about love and relationships, but a male voice is preferred when learning about computers. What does that mean? I think we need to update that research and get a bit more information around it, because why would you restrict it? I can't understand why you would restrict it entirely by gender.

Cortana is interesting in that it's the only one that really has a physical representation, and there's more of a historical aspect to it that's probably only relevant to, or known by, a subset of the community, but who plays Halo? Yep.

So, Microsoft had a game called Halo, by Bungie, and the main character was Master Chief, and the AI for Master Chief was Cortana, so she already has a physical character.

And when they were doing the intelligent agent, they actually used Cortana more as a placeholder name, just because it was from Halo, and then they were gonna change it, but there was a big petition online to keep using Cortana. Unfortunately, it does bring that character with it, and I wouldn't want to show a full picture of Cortana because, you know, she's a lady with not much clothing on. So that has certain associations and connotations and potential interactions as well.

I was over at Mobile HCI last month in Barcelona and there was a great talk by a guy called Matthew Aylett, who runs CereProc, and he's written a wonderful paper, which we had on the industry panel, about how he designed two very different intelligent agent services: one for the Sony Xperia platform, these bits that fit in your ears, and one for the Oakley Radar Pace glasses, which have a voice coach, essentially, for training you when you're running and so on. And it was fascinating to hear the two approaches by the two different companies, from a financial perspective, from a personality perspective, from an emotional perspective, that drove these two products, which are quite similar but have very different purposes. Well worth checking out.

So, what's the identity of your service and brand, how do you wanna flavour that, and how do you convey it to the customer and integrate it into the product? And then there's the impact.

So, my Saturday morning is wandering around the house looking in the fridge and the cupboard and going, "Hey, Siri, add Corn Flakes to the shopping list," or whatever it might be.

My five-year-old son takes great delight in now saying, "Add LEGO to the shopping list," so I have to go through and take out all the versions of LEGO that he adds to my shopping list, and yoghurt, and whatever he feels like at the time. So kids are finding this a really interesting interaction. We've had Google Home, Alexa and Siri, and it's been interesting to see how he's responded to each of them.

We bought a Google Home in the States and had it set up in my brother-in-law's while we were staying there for a period of time, and he became quite familiar with it 'cause we just happened to have it in the room that he was in.

He could request things, music and things like that, and then when he came home I took it to the office, and he sort of missed it.

And then we had Alexa and then we introduced Siri at home, and this sort of, it's been interesting to watch the relationship.

Google Home has this service where you can tell it that you're reading a particular Disney book and, as you read it, it will fill in around you with music or sound effects for that particular part of the story as you're reading to your child.

So it's a soundtrack to the book that you're reading along with. And Amazon also have a kid mode, and in kid mode it will give you a little reward and show appreciation if you say please and thank you, because one of the things that's fascinating is how the structure of these commands can teach kids grammar. But is that a good way of phrasing things? Especially if they're then extrapolating that grammar to other conversations with people.

I was talking to the guys at Amazon, and one of the things that they found fascinating is something they've recently introduced for the New Zealand Blind Society: instead of having to have a DAISY reader, which is quite an expensive device, those people can now have an Echo or a Dot or whatever it might be, and they can go through and find out what magazines are available and have those magazines or books read to them.

They can find book titles by author, so they have a whole realm of information available to them.

Last year at Web Directions, I ran into a friend of mine who goes by ATTRIB, and many of you might know Steven, and we were talking about this sort of stuff and he mentioned that his brother-in-law had recently moved in with him.

His brother-in-law, unfortunately, has lost his sight over the last 15 years. He's in his early 50s and has an intellectual impairment, a vision impairment now, and a speech impediment because of his deafness as well, so he has this combination of factors.

Just before Web Directions last year he'd moved in with Steven, and Steven mentioned at the conference that his brother-in-law had been living further south with his parents, and the time had come for him to move in with Steven and his wife.

And one of the things that Craig did was, now that he had these assistants, both Google Home and Siri on his phone, he was able to ask about the temperature back in the place that he used to live, so there was this little connection back to where he used to live.

He'd check the temperature for where he lives now and for where he used to live.

I caught up with Steven the other day and we had a great conversation about how Craig's usage has actually evolved and really changed his life. One of the things he's able to do now, even though he's ninety-something percent blind, he can still see some light and make his way through the house, is turn the lights on himself.

Okay, he doesn't have to find the switch.

He can do it using his voice.

He's also able to, you know, if the family's watching a TV show and he doesn't enjoy it, he might go to the bedroom and listen to his own music, and now, of course, he has access to whatever music he wants through his devices. He listens to a lot of country music, apparently, and one of the impacts it's had is that he now also knows, through these voice services, when bands are coming to play in Canberra, where they live. So, the end result is that Steven, his wife and his child actually go out to these live music events now with Craig, and it's changed their perspective, 'cause they weren't going to live music events much before, so it's really changed things around there. So that's just a brief touch on how these devices can really accommodate a whole range of people with different needs and abilities and interests. And then there's the imminent future, so what's coming up.

This came out the other day: a patent by Amazon looking at the conversation and being able to detect if you've got a cold or the flu. I don't know whether that's the level of interaction I want with my Echo, and how they'd put it together. Mycroft is an open-source AI project that's quite interesting.

They've got their own devices now.

They're looking at how they can expand the platform. They had a bit of a Kickstarter thing, now they've got a bit of funding.

But how can they...

To me, is it problematic that they can never be a first party on something like an iPhone or an Android device? Or maybe they can with Android. And then there are social robots.

So, this is Jibo.

So, Jibo was launched by Cynthia Breazeal and her team at MIT, and this is the next level, where we give these intelligent agents physicality.

Interestingly, Jibo just went under because it wasn't supporting the services that people now expect from their intelligent agents. The music services weren't there, those other aspects just weren't there, so customers now have an expectation of what these things can do.

But you can certainly see a personality coming through. And we're doing a lot of work here.

There's the social robotics work at the Magic Lab at UTS. People like Meg Tompkin are doing some fascinating work there on how it changes people's interaction with an agent if it has a physical form.

I don't know whether you've been to the US, but they have these security robots at Westfield that just motor around, and each one looks like a big bullet-shaped thing that's red and white.

And it's kind of scary.

It's sort of odd, and how do we relate to those things? At CHI last year, they had a lot of telepresence robots, and on the first day you went into the conference and there were about seven of them all charging up against the wall, and you sort of went, "Oh, yeah, they're great," and then they started to move around amongst the attendees. By the second day, everybody must have turned up and started personalising them with the country they represented or who that individual was, so it added this extra social element to those robots, which was fascinating.

And then there's Mica.

So, most of you are probably aware of Magic Leap. Mica is this interesting experience, so let's go and have a quick look at Mica. Hopefully.

Well, I've gotta do the survey, of course.

What do we want? None of the above, let's go for that.

- I wanna share a project at Magic Leap that started a few years ago when early hardware prototypes were emerging. We knew we had to push both in engineering and interactions and content.

It's a bit of a chicken and egg situation but both developments inform each other.

Our focus was to see how far we could push systems to create digital human representations.

Mica is our prototype.

Our benchmark for fidelity is an offline, high-resolution representation.

Lower-res versions attempt to achieve this level even when run in real-time.

Nearly two years ago, we had interesting progress in Mica's development.

After focusing on realistic eye movement and gaze, we set up Mica on our current prototype.

AI components were then added to track the-- - [Oliver] Now, what I'll show you just, and I think it'll be at the start-- - I wanna share a project at Ma-- - No.

That's the animation side of Mica, but the other aspect, because this is an augmented reality experience where Mica can be in the room, is presence.

When does Mica turn up as the agent that you want to converse with, especially if you've got other people that you're chatting to?

When does she appear, when does she disappear, what do you ask her, those sorts of things. So that's sort of the next level, I suppose, on those aspects.

So, the take-aways I've listed out there.

Understand the Intelligent Agent ecosystem, so within the ecosystem and between the different ecosystems.

Are you expecting your customers to be across multiple ecosystems? How do people want to interact with you, your brand, the service, the device? Certainly, after all those years working in mobile, I think sometimes we get too focused on our little aspect of the thing that we're designing, forgetting about the larger context of use in that user experience.

What can you already do with Android and iOS, with Siri Shortcuts? Then there's the identity of the brand.

And how do we broaden the range of things that these services might interact with? So, thank you for your time this morning.

(applause) (energetic music)