If 2016 was the year that apps became conversational, it’s been a long time coming. As messaging apps eclipse conventional social networks and our environment is increasingly hooked up with the internet, new opportunities arise to create increasingly personalised experiences for individuals, groups, and teams, as well as lessons to take from the slow demise of the PC. Additionally, several new messaging and voice platforms have emerged, putting us on the precipice overlooking a broad shift in how technology is designed and serves people. We’re seeing new hardware and embedded technologies emerge that spell new paradigms for user experience, voice experience, and conversation experience.
With all of these changes seemingly happening at once, there are considerable questions confronting businesses about how, when, why, and even if they should get involved. Join Chris Messina as he guides you through why this is happening now and how to participate and evaluate whether joining the conversational product revolution makes sense for your business.
The death of the PC fits into a much broader cultural context. It’s a chance to leave behind some old ideas that have outlived their time, a chance to forget about things that hold us back.
First – how do you define a personal computer? How would you describe a computer to someone who’d never seen one? In 1983 Steve Jobs talked about a new machine with electrons moving around instead of pistons moving around… using the combustion engine to explain the new idea to a room of designers.
Computers deal in pure information, allowing calculation at great speed – far higher than any machine that had gone before. But as the era of the electron comes to a close, many of our assumptions are wrong about what comes next.
In modern computing it’s less about the environment and more about the apps. Our experience changes.
Look at the impact of TV on sport. The instant replay turned sport broadcasting into a form of analysis, not just a linear broadcast. Time was no longer linear and unreachable. If you missed a moment, you could look up and see it – from different angles.
In the same era (the 60s), the cold war created the notion of an information war. You didn’t just need bombs, you needed to know where to drop them. Spy planes gathered a great deal of information, which needed to be shared between military sites. ARPAnet was the precursor to the web.
This is the era in which Doug Engelbart delivered The Mother Of All Demos. His report was entitled “Augmenting Human Intellect” – making humans more capable of dealing with large amounts of data quickly. The information and systems were designed for highly-trained military and academic specialists. Most computers were thin clients connecting to a large, shared server or mainframe.
It was the 70s and 80s that brought in the idea of self-contained/independent computers for the general public – or at least in business. Many common UI paradigms like “folders” persist from this push to normalise computers in a paper-based workplace. This persisted for an unusually long time.
It was 2007 when Apple popularised the next big leap in UI – the small touch-screen device… the iPhone. It broke away from other smartphones with their tiny fixed keyboards; and went back to the old idea of a variable screen and a pointing device. A screen, and your finger. Ten years later this is ubiquitous. Children learn it during infancy.
The massive rise of social media and messaging came along at this point.
Now in 2017, with the Next Billion coming online, what will the first device be for people as they get online? A PC? A mobile? Ambient voice-controlled devices?
We need to look at apps. Apps pushed businesses to create APIs, to understand their own business in terms of API endpoints. Instead of connecting a thin client back to a mainframe, we connect a heavier client to a data endpoint.
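That shift can be caricatured in a few lines of code: the hard part of a conversational product is mapping a loosely phrased request onto an API endpoint the business already exposes. A minimal sketch follows; the intents, keywords, and endpoint paths are all invented for illustration, and a real system would use an NLU model rather than keyword matching.

```python
# Hypothetical sketch of a conversational front end over a business API.
# All intents, keywords, and endpoint paths here are invented.

def parse_intent(utterance: str) -> dict:
    """Naive keyword matcher standing in for a real NLU model."""
    text = utterance.lower()
    if "batteries" in text:
        return {"intent": "order", "product": "rechargeable batteries"}
    if "weather" in text:
        return {"intent": "query", "topic": "weather"}
    return {"intent": "unknown"}

# Each recognised intent maps onto an endpoint the business already exposes.
ROUTES = {
    "order": "POST /v1/orders",
    "query": "GET /v1/info",
}

def route(utterance: str) -> str:
    """Return the endpoint that would serve a spoken request."""
    intent = parse_intent(utterance)
    return ROUTES.get(intent["intent"], "501 Not Implemented")
```

The point isn’t the toy matcher; it’s that once a business has API endpoints like these, a voice platform can wire itself to them without the business shipping its own client.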
Voice control and voice processing are on the rise. Computers will have better speech processing capability than humans.
A side effect of voice search is that Alexa users are not specifying brands, or going to Google to do their searching. It’s not “Alexa, Energizer rechargeable batteries” it’s “Alexa, rechargeable batteries”.
The importance now is to have a device present at the moment of desire, the moment someone wants something. This is why we have Apple AirPods, Pixel Buds, etc. Companies are trying to literally get into our heads.
Steve Jobs once described the need for different Apple devices as “a difference of emphasis”. Cars, trucks and sports cars share many similarities (engine, wheels, seats) but their purpose gives them a different emphasis.
We have devices now that we touch and talk to; or they listen and watch us. The Amazon Echo Look takes a selfie and analyses your outfit…
Now let’s take a little leap, forget a few things, blur our vision a moment.
Children have greater access to mobile devices and spend more time on them than ever before. What does that do to us? …so how many of you have seen the movie “Her”? (plays a clip from Her where the protagonist’s niece meets his virtual girlfriend)
Kids are growing up experiencing relatives through 2D screens (video chat). They are used to talking and interacting to people through screens. So… what does this mean for human connection? When AI gets sufficiently advanced, how do people truly know that grandma is real?
Kids are growing up with all of this and they’re adapting to it. They communicate in ways we don’t really get.
We have the rise of virtual celebrities. Miquela is big on Instagram, despite not being real. The people commenting don’t seem to know or care that she doesn’t exist. Even if we find it odd at first, this sort of tech will also create new art forms like emoji karaoke.
So what about the responsibility of creators? Making CGI so good it’s undetectable means we can’t trust any image. How do we convince someone an image is real when they know it can be easily faked?
In the early years of the web, we tended to believe that simply connecting everyone in the world would create understanding and harmony. It turns out that’s not true.
Are we really prepared for what comes next?
“We shape our tools and thereafter our tools shape us.” – Marshall McLuhan
The next generation is much more open, more malleable, more plastic. They bring fewer assumptions, they are more open to what comes next.
There is hope for us yet.
(light music) – I’m very excited to be here today.
It’s been about five years since I’ve been in Sydney. It is a long flight, for those of you who’ve done it. And it’s been quite a whirlwind.
I mean, everything that John said brings back a lot of nostalgia for me.
And it’s funny ’cause today, I’m obviously gonna be talking about the future as well as the past.
But it, I think, ties into some of the themes that John brought up around the opportunity and the privilege and the responsibility that we as makers of the future have to determine where this goes next.
And so, you know, the death of the PC, I think, really requires an understanding of how it fits into a broader cultural narrative and context, and that provides us with an opportunity to leave behind outmoded ideas, ideas that have gone stale. It gives us the opportunity to think fresh and think new. In some ways, in how I’ve thought about ideas being adopted, it turns out that it’s easy to change technology. It’s harder to change behaviour and people’s minds. You can kind of wait for a generation to die off, you can nudge them in a direction as best you can, or you can rely on the next generation of kids to be the standard bearers for what you think the world should look like.
So in as much as the death of the PC provides, at least in a narrative form, a point for us to look at and say, “Okay.
“What’s come before? What happens next?” It also gives us an opportunity to forget about a lot of things that maybe are holding us back through convention and a lack of creative inspiration. So, to start off, I think it’s really important for us to actually define what it is that we’re talking about. And so if you were tasked with describing a computer to someone that’s never seen one, heard of one, or interacted with one before, how might you do that? Well, if it was 1983 and you’re Steve Jobs, this is how you might describe it.
– [Steve] Let’s start off with what is a computer? What is a computer? It’s really simple. It’s just a simple machine. But it is a new type of machine.
The gears and pistons have been replaced with electrons. How many of you have seen an electron? That’s the problem with computers: you can’t get your hands on the actual things that are moving around. – Alright, how many of you guys have seen an electron? (audience laughs) Yeah, me neither.
And so, this is in 1983.
He’s at the Aspen Design Conference.
He’s speaking to a room full of designers who have never seen a computer before and trying to help them understand what this contraption is that he’s bringing to the world.
And he’s trying to make the idea relatable by relating it to a physical concept, specifically the idea of the combustion engine, right? The combustion engine, of course, has driven generations of innovation, enabling mankind to distribute the abundance that it’s been able to produce through mastery over the engine.
It allowed us to sprawl out, to explore the world, to move resources from one place to another efficiently, and to expand the physical capabilities of the human body in ways that had never been done before.
Now, in contrast, the PC of course is powered by electrons, electrons that are moving and whizzing around at who knows what ridiculous speeds.
Pure information going from one place to another, allowing millions of calculations to be performed per second.
Now, if you contrast these two frameworks and you think about the way that they relate, there are profound implications for how we think of ourselves in reference to the rest of the world and how we shape our environment around us. So essentially, in the era of the combustion engine, we were able to minimise the amount of physical, repetitive tasks that we needed to do, which changed most people’s everyday experiences. In the era of the electron, we’ve been learning to use the computer to process, understand, and manipulate vast amounts of information at a speed never before possible.
But as we conclude a chapter in the era of mastery over the electron, our relationship to the media environment and therefore to each other is changing, faster than a lot of us can even anticipate and understand. These changes have been afoot actually for quite a while. How many of you guys are familiar with…
This is actually my copy.
If you guys know this book, Understanding Media… Okay, if you haven’t read it or you haven’t looked into it, I recommend you do.
It was written by Marshall McLuhan.
He’s a Canadian media theorist, thinker, writer, speaker. He’s very quippy, so he’s very readable.
But he’s thought a lot about the media environment in which man finds himself.
You may know him as the guy who gave us this quote: “The medium is the message,” which apparently was originally mistyped as “the medium is the mess age,” which is true. Now, he was active in the late ’60s and ’70s, mostly thinking about radio and television. And he thought a lot about the ways in which this environment affected us.
So I’m gonna play a clip from the ’70s that was actually recorded here in Sydney at the ABC, I think, radio national show, where he’s responding to an audience question about this concept.
– If the medium is the message, and it doesn’t matter what we say on TV, why are we all here tonight, and why am I asking this question? (laughter) – I didn’t say it didn’t matter what you say on TV. I said that the effect of TV, the message of TV, is quite independent of the programme.
That is, there’s a huge technology involved in TV, which surrounds you physically, and the effect of the huge service environment on you, personally, is vast.
The effect of the programme is incidental.
– Now, if you were to relate that to today, it’s less about the programme, and it’s more about the apps. The apps define a lot of the ways in which we think and experience the world.
So he believed that television, this new media format, affected the shape of our thinking and therefore determined our experience of consciousness. And to give you an example of that, consider how television changed sports and time. Prior to the instant replay, everyone experienced time as a linear sequence, as a progression, as in, you wake up in the morning and you go to bed at night, and there’s no way to go back in time.
The instant replay on television changed that. Suddenly now, you could actually experience time in multiple directions.
It wasn’t just a linear sequence.
Or if it was, it wasn’t one that had to go forward; it could also go backwards.
Now, as a result of this, sports became analytical rather than experiential.
You didn’t just have to go to the game to experience it. There was now a whole raft of commentators and other people who were talking about the specific plays and how they played out, which allowed people to think very, very differently about sports.
In this era, a lot of the media companies were broadcast based.
Essentially, they published content from one to many. What that allowed for was these receivers, televisions and radios, that would get this information; people would receive it, and then they would propagate that information to their friends.
This allowed for a much more coherent media environment, where everyone knew roughly the same amount of information or the same kinds of information.
And it’s important to keep this in mind, as we consider the context in which computing grew up. This is an era where the future of everything, frankly, was being considered and reconsidered, where the foundations of the free world were being questioned, whether it was gonna be capitalism or communism. This was, of course, the era of the ’60s.
Nixon was in office.
This is the height of the Cold War.
This is a very tense and confusing time.
This is the first time in human history where it wasn’t just the brawn of your bombs that would determine whether or not you actually had strategic advantage, but whether or not you had information that the enemy didn’t know that you had, or you had information that they didn’t have.
So this is the sort of dawn of the information warfare era, and you can see now, in this period, in this year even, an escalation of the technology involved in that. So that period explains why we had the OXCART and the other spy plane programmes. Essentially, these aeroplanes could fly above Soviet radar, spy on our enemies, and return information back to the United States.
What was important was that when that information was returned, it could be shared and disseminated in a way that allowed the military practitioners to learn from and use that information for advantage. In order to do that, they needed a network that could withstand an atom bomb, and that technology was the ARPANET, the predecessor to the Internet.
Now, the computers at the time were designed to be used by experts.
So essentially, experts were trained on how to use these machines.
They were complex.
They required a lot of electrical engineering knowledge. But there was one person, or actually probably a team of people at SRI. Do you guys know who this person is? Okay, as people who live on the web, you guys should. He’s very important.
His name is Doug Engelbart.
He is the guy that gave us things like the mouse. But back in this era, he was thinking about the ways in which the computer could actually augment human intellect, and wrote a report on this.
Now, I think one thing that’s really interesting about this is that this reveals something.
Do you guys know who the first VC in Silicon Valley was? Okay, if you look closely at this, you’ll see that there’s a “CONTRACT AF.” It was the Air Force.
The Air Force was the first venture capitalist in Silicon Valley.
They were the ones who were paying for the technology that allows us to do the work that we do today. So after producing this report, Engelbart needed a way of sharing these ideas with the world and produced an event at Stanford on December 9th, 1968, which became known as the mother of all demos. And if you go to YouTube, you can actually find the entire thing.
It’s about an hour long, and definitely worth watching.
But what’s important is that in this demo, most, if not all, of the technologies that we take for granted today as part of our computer experience were actually invented or demonstrated back then. Some of today’s technologies still haven’t even eclipsed what was produced back then. Now, Engelbart and his crew were of course designing for experts, like his colleague Bill English.
Now, you’ll notice something interesting about this computer, right? You see the mouse, a keyboard, and then you see this other weird thing over here. That’s what’s called the chorded keyboard.
So if you’ve ever seen an organ, let’s say, in a church, where you can push down foot pedals, well, that’s essentially what this was doing. And so you could now use multiple inputs at once.
The task bar is probably a poor man’s example of this technology today, right? So this is an era where highly trained technicians, who really understood how to take these computers apart and put them back together, were using this stuff.
And oftentimes, programming them directly.
This is in a service of military and academic purposes. Now this started to change in the late ’70s when this guy, this sexy man came along (laughs).
I don’t know how his PR people let him take this photo. But he wanted to take these advances that had been happening in those worlds and bring them into the office environment, which basically looked like this.
Now, the amazing thing about Windows and Microsoft products in general was that they, too, lived in a context where they had to absorb the language of the era to make the power of computing accessible.
So a lot of metaphors that are still with us today came out of the office environment.
Your folders, right? I don’t know if you guys have ever lived in a world where you actually had to put paper in these things? Well, that’s what they used to have to do back in those days. A desktop is literally like the top of a desk. You had programme groups and executables, which are sort of like executives, except maybe you execute them.
Anyways, so if you can imagine the way in which television changed sports and the experience of time, just imagine the way in which computing changed the processing of information and the type of work that people did day to day. Now, that device that I just showed you was sort of considered a personal computer. But that was only relative to the previous era where, of course, there was time sharing, and everyone shared large room-sized computers. It wasn’t really until 2007, of course, with the advent of the iPhone, that we really got a personal device.
Now think about that.
This was 10 years ago. 10 years ago.
This was in June, right? And in August, I proposed the hashtag.
That’s how long it’s been.
Now, I know you guys have seen the launch video for the iPhone, but it’s pretty remarkable, because what it does is set up the next, I guess now for us, the last 10 years of innovation. This clip’s a little long, but I’m gonna play it. Just pay attention.
– What we want to do is make a leapfrog product that is way smarter than any mobile device has ever been, and super-easy to use.
This is what iPhone is, okay? Why do we need a revolutionary user interface? I mean, here’s four smartphones, right? Motorola Q, BlackBerry, Palm Treo, Nokia E62, the usual suspects.
And what’s wrong with their user interfaces? Well, the problem with them is really sort of in the bottom 40 there.
It’s this stuff right here.
They all have these keyboards that are there whether or not you need them to be there. And they all have these control buttons that are fixed in plastic and are the same for every application.
Well, every application wants a slightly different user interface.
A slightly optimised set of buttons just for it. Well, how do you solve this? Hmm.
It turns out we have solved it.
We solved it in computers 20 years ago.
We solved it with a bitmap screen that could display anything we want, put any user interface up, and a pointing device. We solved it with the mouse, right? We solved this problem, so how are we gonna take this to a mobile device? What we’re gonna do is get rid of all these buttons and just make a giant screen.
A giant screen.
(audience applause) Now, how are we gonna communicate with this? We don’t wanna carry around a mouse, right? So what are we gonna do? Oh, a stylus, right? We’re gonna use a stylus.
(audience laughs) Who wants a stylus? You have to get them and put them away, and you lose them. Yuck! Nobody wants a stylus, so let’s not use a stylus. We’re gonna use the best pointing device in the world. We’re gonna use the pointing device that we’re all born with.
We’re born with ten of them.
We’re gonna use our fingers.
We’re gonna touch this with our fingers.
And we have invented a new technology called Multi-Touch, which is phenomenal.
It works like magic.
– I love that these jokes still work 10 years later. This is so profound and so important, because what this did was break that tradition from the past, where essentially, you had an expert device like this, and replaced it with a novice-accessible device like this. And what this allows for is a sea change in the type of user that can suddenly experience computing. I don’t know if you guys have ever tried to pinch-to-zoom in a magazine, but it doesn’t work.
(audience laughs) But if you’re one year old and you have an iPad, it works, right? So as it turns out, this device combined the best of the personal computing innovations: entertainment, music, movies, games, communication, and of course a sensibility model. When the iPhone first came out, there was no App Store.
The web was the platform for innovation, and suddenly, that went away quickly with the App Store. But it did lead to an explosion in adoption. Now, there’s one thing that was missing from this device, and of course, that was the social network. So after the iPhone came out, this really gave an opportunity for Zuckerberg to come along and create a declarative model of interaction, where users could literally just tap buttons and connect with people and things without having to really understand the system that much. You can literally just go around and smash keys, and you’ll eventually get advertising that’s personalised to you.
It’s a great deal, right? So you take an interface like this, and you contrast it with where computing had been, and you start to understand how it was that Facebook was able to grow to over two billion people over the last 10 years.
And it turned out that people were using these devices not just for entertainment and not just for apps but for communication, for talking to each other, to connect with each other.
So now, this is something that’s sort of interesting. You guys may have seen this, but as of 2015, conventional feed-based networks, which are largely the way in which people consume social media content on the desktop, started to, I guess, pale relative to the use of messaging apps.
Suddenly, messaging was the killer feature that was driving mobile usage.
The reason why this is significant is not just, of course, because it affects our work and led to bots and things like that, the good bots, but because when you pull back and think about the total number of people who are on planet Earth, there’s more than the two billion already online. In fact, there’s another four billion, if not more than that, that are still getting online and having their first computing experiences.
So this is a map of the distribution of the Internet. The dark blue is where there is 67% penetration. You look at Africa and parts of Asia, and there’s a lot more people yet to come online, which really raises the question: what kind of computing device will be there first? Will it look like the PC from the ’80s? Will it look like the smartphone that we’ve had for the last 10 years? Or will it look something more like this? A kind of ambient room computer that lives with you?
Well, how did we get here? Of course, as John mentioned, we went through a period where there was an intense battle over ownership of this platform real estate: the Browser Wars.
Back then, the idea was just getting businesses to get online and publish anything at all through these brochureware sites.
Mostly informational, mostly taking stuff that they already had in their office and putting it on the web.
But with the App Store, brochureware suddenly didn’t make sense anymore.
Now you needed to have something that was interactive, that did something that was useful.
And in order for apps to be built, they needed APIs. So these businesses had to learn how to open themselves up to outside integrations. And these APIs allowed apps to flourish, which is leading to what I think of as Act III of this revolution: API-driven, auto-configured conversational services.
Essentially, because businesses now just have APIs, you can ask them to do things whether or not you knew about them beforehand. And I think the best demo of this is actually seen in this clip.
– Okay, so what do you need? Besides a miracle.
– Guns. Lots of guns.
– Now, of course this is a little bit off-colour in this day and age, with all the issues around gun control, so that’s probably the reason why there’s no Amazon guns, for example.
But this might be what it looked like if we were living in the Matrix.
You might think of Tank as Alexa’s great-grandfather. Since he was actually an AI, he was a voice assistant, and so essentially, Neo’s like, “Hey, Alexa. I need some guns.” (audience laughs) Anyways (laughs), I don’t know if this is in like 20 or 40 years or something; it’s gonna be Blade Runner-ish and all that stuff. But how is this able to be done? As we have given loads and loads of data to these big platforms like Google, Amazon, and Microsoft, they’ve been able to find patterns in our behaviour, using advances in Moore’s law, so computing power has basically become cheaper and algorithms have gotten better, to improve speech recognition accuracy to beyond human capability, beyond human level, which is just mind-blowing.
And so, the reason why voice computing is so interesting is because this represents the new battle line, the new Browser War, in how people are going to interact with their computers. And so it’s significant that in 2017, Echo-enabled devices already have about 70% market penetration, which is enormous. And one of the consequences of this, I suppose, I mean, there’s many consequences, but two are the ones that I think are pertinent. One, just like multi-touch, voice allows a new generation of users to start using computers; voice is even more accessible, right? You could just sort of speak out loud, say whatever you want.
Most people, even when they’re completely wasted, can say something, right? It also means that people can get lazier and lazier in how they express their needs and desires. And so I’m gonna play a clip from Scott Galloway about how this will impact brands.
– Voice-based ordering eliminates the need for packaging, design, and end-caps, all the things that brands have poured billions and decades into perfecting. The decline of brand began with the advent of Google. And every day, fewer people put the prefix of a brand name in a Google search.
The same is gonna happen with voice commands. Our research reveals that over the past year, non-branded product searches have increased in every CPG category.
Prediction? The decline is going to accelerate.
The death of brand is here, and it has a voice, specifically Alexa.
– Right? So if you are Energizer, you know, the question you should be thinking about is how much longer can you keep going? Because just as web search has shown, people get lazier and lazier over time. It’s even worse when it comes to voice search. So once, you might have said, “Hey Alexa, order me some Energizer AA rechargeable batteries.” That’s a mouthful.
Soon, you’ll be like, “Alexa, just get me some rechargeable batteries.” And the reality is, most of you will just be like, “Hey Alexa, buy batteries.” Right? And the next thing you know, boom! Right? (audience laughs) Like that one, huh? So the reason why this is big is because, like, Google’s kind of fucked, right? Like, in terms of the distribution of Alexa, most users are going to Amazon to start their searches. And so, Alexa represents a foothold, if not a kick in the nuts, to Google, to basically say, “Look, there’s now another game in town, and it’s able to do a lot more things than previously.” And not just that, but Amazon is willing to spend its way into every possible market, right? So this is Scott Belsky’s quote, but essentially, anyone with margin to spare, beware. What is margin? Margin is basically brand.
It’s the stuff that Scott Galloway was talking about: the end-caps, the branding, the extra trinkets and materials that previously were the ways in which you sold things and were able to make a profit.
Amazon’s like, screw profit (laughs), we’re gonna own the market.
And so this is why voice computing needs to become a lot more intimate.
People need to basically attach these devices to themselves so that the moment of intent, essentially the moment you have a desire for anything at all, is captured by these companies. So of course Apple came out with the AirPods, I think this year, last year, whatever it was, recently. Google came out with their own version called the Pixel Buds.
So there’s a race to basically, literally, get inside your head, inside your mind. It’s pretty interesting, right? Okay, so this is a lot of preamble, right? Like, where is this all going? Well, I mean, remember, we’re in the era of the electron. We’re ending the chapter of the personal computer. And what’s about to happen has never really happened to humanity before, so I don’t know exactly what’s gonna happen, but there are some things that I wanna look at and show you. Now, one of the great things about the electron is that it’s very malleable, much more malleable than even metal.
Because it allows us to do things based on just pure information.
Now here’s another clip.
I know it’s like Steve Jobs all day long, but he said a lot of useful things back in the ’80s, and so I’m gonna play another clip of him sort of describing why the electron is so interesting. – Apple is eventually going to have a broader line of products, simply because, let’s look at automobiles.
Compare a Volkswagen Rabbit and a dump truck and a Mercedes-Benz, let’s say.
They all have transmissions, they all have engines, four wheels, seats.
They basically all perform the same basic function of transportation.
What’s the difference? It differs in emphasis.
– So I apologise for the quality of that clip. That was like VHS, probably re-recorded over multiple generations.
Anyways, he was answering a question when someone asked, “Will Apple ever make another product besides the Apple II?” Right? And (laughs) he’s like, maybe. Maybe. And his answer was again to go back to the combustion engine and say, “Look, it’s kinda like these things. It’s a matter of emphasis.”
We’re able to shape our products to fit the needs of our users.
And so that sort of looked a little bit like this, where you have some Amazon devices here, you have a device for reading.
Well, long battery life is very important.
Then you have like the iPad, which is great for writing, drawing, notes, whatever. You have the phone, you take it with you, you make phone calls.
Well, thanks to Moore’s law and the lowering cost of producing these devices, we’re now able to see what that starts to look like in 2017, where that difference in emphasis is becoming more pronounced, where there’s literally a PC chip, or multiple chips, in almost every consumer product that’s being put on the market.
And we’re increasingly able to interact with these devices through multiple modes of interaction, whether that’s voice, whether that’s photos or sound. So for example, that’s the Echo Look in the centre there, and it’ll take a selfie of you and give you feedback on the clothes that you’re wearing. So it will help you buy better things, because it will tell you what you should buy. And so these devices are becoming increasingly personal as well.
Remember when I talked about the television and it was sort of a one to many broadcast platform? Well now, each of these devices is personalised to each of us.
If Alexa doesn’t know your name or Siri doesn’t know your name, or somehow forgets, that’s really frustrating, right? You don’t expect your TV to know your name, but that’s gonna change very soon.
And so, this is the way in which Google is sort of pitching its assistant.
– [Narrator] When we started, we made this for everyone, so that everyone could find anything they need among the millions of bazillions of things in the world. Today, it seems like sometimes it’s easy to feel like you need a little help with the stuff just in your own world.
Your photos, phones, videos, calendars, messages, friends, trips, reservations, and so on and so on.
Wouldn’t it be nice if you had some help with all of that? Wouldn’t it be nice if you had a Google for your world? – Wouldn’t it be nice? So, these devices are surpassing everything that the previous generation of PCs did, in terms of their capabilities, their speed, access, personalisation, adaptability, the form factors. But for this next section, you gotta take a leap of faith with me, I guess, because I think you gotta squint your eyes a little bit and do a little bit of strategic forgetting to sort of understand where I think this could go, for better or worse.
And I really wanna start by looking at, again, if you think about a toddler using the iPad... this research just came out last month. It turns out that children age zero to eight, basically 98% of them, have a mobile device in their homes, and their usage of those devices is up from five minutes per day in, let’s say, 2011, so six years ago, to 48 minutes per day, right? So kids are growing up around these devices, and those are just mobile devices.
We’re not even talking about voice-controlled computers. So how does this affect us? How does this change? What is this gonna bring about? Well, how many of you have seen this movie? (sighs) So much stuff for you guys to see and learn. Okay, great.
So, this is a movie called Her.
It’s very good.
It sort of presents this future world of this artificially intelligent agent that, without giving it away, develops a relationship with Theodore Twombly, the main character, and of course the AI is voiced by Scarlett Johansson. So if that’s your thing, then that sounds great. (audience laughs) Now, the funny thing is that this device is sort of like their version of, I guess, the iPhone, and so this is the device through which Theodore Twombly carries on his relationship with his virtual AI.
And so, what we’re gonna see in the next scene is Theodore talking to his niece and introducing his niece to his girlfriend, Samantha. – Who are you talking to? – Who are you talking to? – You.
– (laughs) No, I’m talking to my girlfriend Samantha. She’s the one that picked up the dress.
Wanna say hi? – Mm-hmm.
– Hi Samantha.
– Hi. You look so pretty in that new pink dress. – Thank you. Where are you? – I am (laughs)…
I don’t have a body.
I live in a computer.
– Why are you inside a computer? – I have no choice. That’s my home.
(laughter) – Right? She’s like, oh that’s so cool.
Like my uncle has his virtual girlfriend.
(laughter) The fact is, kids are gonna grow up in a world where that’s normal, not unusual.
This is a really big deal.
You know why? Because as it is now... I randomly found this on the Internet; you know, as with the Internet, you ask and it provides. Essentially, kids are growing up experiencing their relatives, their grandparents, even their parents, through the screen, through this two-dimensional sheet of glass. And this thing is talking to them, it’s interacting with them, right? In fact, that’s one of the big features of the Echo Show, this new sort of intercom device that Amazon has put out.
And in fact, it features prominently in this clip, where they announced this product.
– (laughs) What do you think? – I think it’s awesome.
It’s like we’re in space.
Can we show grandma? – Sure.
– [Announcer] This is Drop In.
An easy way to be together.
Once your closest friends and family have granted access, you can see when they’re available.
Then, just say…
– Alexa, drop in on grandma.
– [Alexa] Okay.
– [Announcer] If your contact isn’t up for a drop in, they can answer with audio only using their voice. Fortunately, grandma is always up for a chat. – Oh, hi sweetie.
– Check it out, grandma.
– Oh, wow. That is amazing.
– (laughs) I wish I could tell you this is an episode of Black Mirror, but it’s not. (audience laughs) And so, to me, I don’t know.
Maybe I’m just strange, but where my mind goes, it’s like, you know, I remember this cartoon from 1993, where, of course, on the Internet, nobody knows if you’re a dog.
And similarly, no one really knows this grandma’s real, right? How does that change the way that we think about relationships? How does it change the way that we think about human connection? What does it mean when we can actually invite this sort of artifice of humanness into our lives? I mean, and it’s not like this is all make-believe anymore, like it’s happening.
– Sophia is capable of natural facial expressions. She has cameras in her eyes and algorithms which allow her to see faces, so she can make eye contact with you.
And she can also understand speech and remember the interactions, remember the face. So this will allow her to get smarter over time. Our goal is that she will be as conscious, creative, and capable as any human.
(laughter) – As long as she’s not like, “Tank, I need lots of guns.” (laughter) So why is this a big deal? Again, think about this theme, right? I mean, we’ve been lying to kids for years about artificial characters, and now suddenly, what? We’re gonna tell them, no, grandma’s not real, or she is real, or this Samantha character, you know, is someone you can talk to.
Kids are also growing up a lot more plastic themselves. And in fact, they’re probably doing a better job of adopting these products.
I mean, now that I’m old, I’m like, shit, how am I supposed to use this stuff, right? So they have this whole different way of experiencing technology and the media around them that we never had before.
And it’s super easy, like literally, you just push on your face and suddenly, you’re like barfing rainbows.
Okay, so if you think Sophia, that robot, is weird, and you think that grandma is or isn’t real, and the Snapchat lenses are kinda cool, well, I’d like to introduce you to someone who, to me, represents the future of where this stuff is going, right? If Scott Galloway is predicting the death of brands as we know them, and the Energizer bunny is no longer cool because it sort of made sense in the one-to-many broadcast era of media, then I think Miquela is interesting because of what she represents.
Now, Miquela has 438 thousand followers on Instagram, so she’s doing fairly well.
She obviously takes lots of selfies, she hangs out with friends, she has lots of conversations, she even has, like, a point of view, right? She’s posting the sort of social media you’d expect.
She gets thousands of comments and likes on her posts, and she’s been interviewed in a magazine.
Now, as it turns out, Miquela’s actually a musician, too. She has an account on Spotify, and she’s even created a few playlists.
The thing that I find so compelling and so interesting about Miquela is that she’s not real, but most of the people who are leaving comments on Instagram don’t seem to know or don’t care.
There’s a generation of kids that are gonna grow up believing in this, because this reality is preferable to the alternative.
Now, it’s not the first time that we’ve seen sort of an artificial musical act become popular, but the difference is the degree to which people are willing to believe that these things are real. – You think that’s air you’re breathing now? – So if you guys have seen The Matrix, you’ll understand that reference.
To maybe explain it in a different way: with these devices that we have, now the iPhone X of course, we’re moving into an era where the separation between reality and augmented reality and fake news is completely being blurred, based on your own subjective experience.
In fact, if you don’t like your face, you can just get another one.
And, not to be totally dystopic, I mean, there are some great sort of art forms that can come out of this as well, of course. I mean, this is a platform for expression, and we’re seeing that people are actually learning very, very quickly how to adopt these things.
(electronic music) Wait for it.
♪ See me right out ♪ Okay, that’s all you need to see.
(laughter) Like, this is the new animoji karaoke.
It’s a whole sub-genre, as you’d expect.
I mean, like I said, you can grow up sort of animating these things and putting yourself into the matrix if you will in really interesting and compelling ways.
But there’s also another side of that, and this goes back to some of the stuff John was saying about our responsibility.
So this little clip here is from Alan Warburton’s excellent video Goodbye Uncanny Valley.
Check it out.
It’s about 3D rendering mostly, and how we’re getting better and better at that, but this is some of the stuff that I think we should be very, very conscious of and careful of. – The thing about CGI reaching the frontier is that the easier it becomes to counterfeit an image, as political propaganda for instance, the more difficult it is to convince someone that an image is real.
In other words, as computer graphics get better, we believe all images less.
A great example of this is the Tumblr blog Hyperrealcg, set up by Kim Laughton and David O’Reilly in 2015.
They collected mundane photographs together under the guise of computer-generated renders, as an in-joke about obsessive CGI culture.
But the blog quickly attracted click-bait journalists. The joke was that no one would realistically spend weeks faking such mundane images.
But this was lost on the media.
They instantly assumed that even the most commonplace images can, will, and are being faked by armies of faceless CGI artists.
This points to a popular assumption that photo reality can be hijacked at a click of a button. And that’s absurd.
But, just maybe, given the sophistication of recent machine learning techniques, not such a far-fetched idea.
– Stop these kinds of attempts, and to ultimately destroy ISIL.
The extraordinary people in our intelligence military, Homeland Security, and law enforcement community. – Right? So now you have fake news about fake news. I mean, how can we trust things anymore if you don’t even know what is and isn’t real, right? I’m trying to depreciate this.
(laughter) I didn’t record this with you, right? (laughter) So if reality is this malleable, (laughs) this is like a running commentary on my talk. (laughter) I’m sorry.
Anyways, (laughs) I mean, clearly as adults we’re not ready.
I can only imagine how the kids are gonna cope when this is the experience of their dad.
Worse if they’re actually able to produce these things. – [Audience] They can.
– I know. They can and they will, and they probably are and you don’t even know it on their Finsta accounts.
So I guess I’ve just been thinking a lot about the web that I grew up on.
Like I graduated high school in 1999.
The world didn’t end in Y2K, for those of you who remember, but it’s been crazy to see, in some ways, how we sort of failed our way into success. You know, lots of us, in the early days of the web, didn’t really think that everyone would have a web-connected device in their pockets and would be using these devices in these ways 10 years on. We sort of had this vague hope that if we just connected everybody, like, somehow liberal values and inclusion would sort of be part of that world.
Funnily enough, that’s actually not the case. Connection is not enough.
And so we have to be very, very careful about the ways in which we continue to extend ourselves and build these platforms and design these platforms, to think about how we do incorporate human values, and what values are important.
How important is truth? How important is understanding reality, or objective reality? Is there such a thing as objective reality? It requires us to think much more expansively about human experience, and the ways in which we’re now having various subjective and personal experiences; and yet, increasingly, we need more empathy to understand other people’s experiences than ever before. So, to bring us sort of back around to the beginning, the death of the PC as we’ve known it represents a new chapter starting in the era of the electron.
But I don’t know that we’re really prepared to cope with the challenges and changes that are gonna come.
In some ways, we have to provide direction and guidance. We’re gonna say, these are the things that we hold important and dear.
We should not forget as we go forward.
But as we lack shared touchstones of what is good and what is not good, and how we’re being manipulated into believing certain things, we have to, I think, take a step back and have some patience as we sort of unfurl what’s going on. Now, I wanna leave you with one quote that I think McLuhan used well to describe what was going on with television, which is that as we shape our tools, invariably they come back around to shape us: to shape our consciousness, to change the shape of our thoughts and the way that we connect and behave.
Now, the good news is that even as we ourselves may have certain biases and desires and ways that we wanna see the world unfold, there’s another generation that’s coming up that is a lot more open and a lot more plastic, and I think they hold a lot of the promise. Perhaps it can go well, as long as we work with them to understand where we come from and the hopes and dreams that we’ve had along the way. – Hi, wobot! Hi, wobot! Hi, wobot! I wuv you, wobot! – So if you can imagine the innocence that kids bring to this question, then I think there’s hope for us yet.
(laughs) Alright, that’s it. Thank you.
(audience applause) (light music)