Designing smart things: Balancing ethics and choice

(funky electronic music) - Wow, good morning, it's quite a crowd.

15 years, congratulations, John, thanks for having me. I'm gonna talk today, as you said, about designing smart things.

And what I mean by smart things is, people like to talk about artificial intelligence, and I probably don't need to tell you, we don't really have that.

We've got some math, we've got some algorithms, we've got some TensorFlow.

I'm just gonna sum all of that up and talk about smart things rather than trying to dissect all of the different angles there. I really appreciated this, it came up on Twitter the other day: if it's written in Python, it's machine learning; if it's written in PowerPoint, it's AI.

But the reason I wanna talk about this is that it is different to design things that are smart, things that are dynamic that respond to a specific user or a specific context or some sort of data.

Because as a long time product designer, I maybe have methods and I'm used to designing these monolithic journeys where you start and you go through stuff and you end. And with things that are responding to you in real time, that really changes.

These things are erratic, dynamic.

It's hard to understand what is going on inside them. If any of you do artificial intelligence, if you're the math people, I know you like to tell me, even I can't tell you how the algorithm got there. We'll talk more about that.

And I think we know that this can be easily weaponized. These algorithms can, especially if they are opaque, can be used against people, intentionally or not. And we don't necessarily know ahead of time what's gonna work, again, because they're not so predictable.

So there are ways we can change how we work and there are some trends I'm gonna suggest to you all. I'd love to hear from any of you later what you think. And I'm gonna talk about this in three lenses, I'm gonna talk about designing assistants or agents, things that are intended to help you automate some tasks or get things done more easily.

But I also wanna talk about this idea of detectives and defenders: how do we make things that help protect people or help them protect themselves? And then how do you think about, over time, managing these smart things? How do you conceive of them and how do you know if they're working? So we'll kick it off with assistants and agents. I love the movie The Devil Wears Prada, if any of you have seen that, that poor assistant, I often feel like she's the blueprint for some of our assistants, who get crapped on a lot. We all hear about how our kids don't say thank you to Alexa; do they know to say thank you to people? Let's hope.

And I wanna start by saying I am a bathroom automation hater.

I always start with the question of is this automation necessary? How do we know what to automate? Down in the bottom there, if you go to a nail salon, the cheapest version of automation is if my hands come out from underneath the fan, it stops working, which is partially to get me off my phone and to get my nails dried so that I can get out of there. So I just say that there are little tiny things that are actually assistive to us all the time. There's also Alexa or Google Home and all of those things, and I bring those up to say it can be difficult to understand, I mean, Alexa gets new skills every day. It's probably hard for the people working on Alexa to understand at any given time and to communicate at any given time what she's capable of doing, I say she, what it is capable of doing.

So that's another challenge that we have with these assistants, is how do we conceive of them together as a group, how do we talk about them, how do we chart their growth over time, how do we understand when they change materially? And then there's also, in Google and Gmail, there are starting to be these just little nudges, right, so this is actually, instead of monolithically talking about Alexa or Google Home, it's, the assistive stuff is just sprinkled throughout your experience, which is another different way to think about assistants.

So you don't have to assume, if somebody says, we're gonna build an assistant for this thing, you don't have to assume it's a literal monolithic bot. It might be nudges.

Just the other day, I noticed that my bank has created their own assistant, and at first, I was like, really? Why do I need an assistant for the bank? And interestingly, I had to use it as I was coming here 'cause I needed to understand something about money, and really what it is is just a keyword search, right, some of this assistive stuff is just stripping away crufty UI and making it easy to ask a question in natural language. So these assistants, again, can take different forms and may be more or less necessary.
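As an aside, that kind of "assistant" can be sketched in a few lines; here's a minimal illustration of keyword matching over a small FAQ. The intents and answers are invented for illustration, they're not any real bank's.

```python
# A minimal sketch of a "bank assistant" that is really just keyword search
# over a small FAQ. The intents and answers are invented for illustration.

FAQ = {
    ("balance", "how much", "account"): "Your current balance is shown on the Accounts tab.",
    ("transfer", "send", "move money"): "To transfer money, choose Pay & Transfer and pick a recipient.",
    ("fee", "charge"): "A list of fees is available under Account Details.",
}

def answer(question: str) -> str:
    q = question.lower()
    best_reply, best_hits = None, 0
    for keywords, reply in FAQ.items():
        hits = sum(1 for kw in keywords if kw in q)
        if hits > best_hits:
            best_reply, best_hits = reply, hits
    return best_reply or "Sorry, I don't know that one yet. Try rephrasing?"

print(answer("How do I send money to a friend?"))
```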

I know a lot of people worry too that the robots are gonna take our jobs.

And I wanna point out, so Wix, if any of you have used this, is a level of assistive technology that, there are still designers involved in making templates for Wix.

But the benefit, to me, as someone that makes a website as a non-developer is I no longer have to hire three of you all to make me a website, I can make my own website, but there are developers and designers that made this service.

And so I don't think we need to fear that the robots will fully take our jobs; we just need to realise that they're going to make some of the more repeatable tasks easier to accomplish and make me, as a developer now, look a little better. So these assistants are dynamic and they're responsive. And they can be emergent; knowing what they're about is emergent.

They can also be a little creepy.

So Gene Lee, who's the head of design or product at Mailchimp was telling a story the other day where he was leaving work and he pulled out his phone, and Google Maps was like, oh, here's your commute track home.

And he was like, I do not like this at all. And other people we were talking with, a couple of them thought, I think that's really awesome. So I just say that you need to be aware that not everyone is gonna have the same reaction to your assistive technology, and you wanna be really clear and do some good studies about when you're tipping into creepy versus assistive.

And then, again, considering that you don't have to go all the way to creating a bot, you can just automate some tasks, and that's a great place to start to learn what people actually value. So break it down into these little pieces, and then you can put them together into one product or one experience if that's what's needed.

I also mentioned that "is this necessary?" is a good question. I was working on a surgical robot at this startup, and we would have the robot, we'd have cadavers or animals, and we'd run an operation, essentially. I would be watching this and I would observe little problems here and there, and I'd come back to the developers and say, oh, here's a problem I spotted and here's a solution I have to get around it, and they'd say, don't worry about that, we're gonna automate that.

And I would think, should we automate that? Can you automate that, is it gonna be any good? Because as an interface designer, right, I'm there to be the interface between the human and the tech. Well, when the superintelligence gets here and the tech is really good, you don't need me. But on the road to getting that intelligence smarter, you need someone involved to make sure that we are making that experience understandable. So I knew the doctors well enough to say sometimes, I know that doctor doesn't want that automated. I know the doctor actually wants control of this piece, so don't just jump to automating that.

Over here, maybe this is something that we could reliably automate.

So asking yourselves how well can we do it and should we do it is a good first question. Okay, so next, I wanna talk about this idea of detectives. I mentioned earlier, the math people like to say, I can't tell you how the algorithm got that. Cool, cool.

But you're gonna have to start.

Because consumers are gonna start wondering how you got there.

And I don't mean the math, don't show me the math. I mean like Sherlock Holmes, oh, I can tell by the smudge on your collar and the scuff on your shoe, you are a lamplighter from Brixton, you're getting married on Wednesday, right. Just enough little stuff to tell me how you came up with that recommendation or alteration that you proposed for me so that I can trust you a little bit better. I like my English accent there.

I just throw this up there to say, it's not, you can do this on Twitter, you can go through and find out everything, every data point that they're using to target you.

That is not helpful, that's not what Sherlock Holmes is doing.

You've gotta do some interpolation here to help people understand what that suggestion is about. But also, I think we're gonna need to get into helping people defend against or tune some of the technology that we're giving them. So more and more, people are becoming aware that their privacy is being threatened or that they're being unfairly targeted.

There's examples all over the place.

I recently saw, and maybe some of you have seen this, there was an AI that they tried to train to tell the difference between different types of Asian people based on head shots.

And I think probably the assumption was, well, it's gonna use physiognomy, eye shape or cheekbones or something, and the intelligence did actually learn to discern different types of Asian people. But it was actually through cultural clues like hairstyles or collars. It learned, and it's gonna learn, in ways that are different than we think. I don't wanna get totally dystopian here, but I think it is worth thinking about, so for those of you who haven't read Cory Doctorow's new book, Radicalized, it's four short stories that I think are great little provocations, almost design thinking exercises about how it goes wrong. I do a lot of work in the medical realm, and that is a field that is just now catching up to, oh dear, people can and will hack us, we should think about that.

That is not an assumption that that field had very long ago.

And then on the right there is Venmo.

This is from a friend who I know does not live her life out loud, and I'm flipping through the app and realising, hey, girl, your stuff's all out there on the Internet, you might wanna batten that down.

And that was because Venmo decided as a baseline to make financial transaction data public.

Which just seems nuts, I don't know, I would love to have been in that meeting where somebody said, let's make it all public, and everyone went, sure.

Maybe I'm not the generation they're targeting, but it just seems like you can make some good assumptions and good defaults rather than exposing people's information. Obviously, if I wanna make it public, sure, though I'm not entirely sure why I'd do that, but at any rate. Thinking through the ways in which information can be misused or unintentionally made available is something you need to start doing: as we talk about use cases, you gotta start thinking about abuse cases. Maybe you've already started doing that, but it becomes even more important when you're in these smart-thing design sessions, because again, they're unpredictable.

So you've gotta be even more focused on what could go wrong. And I'm getting really interested in the realm of security and privacy technology, and I know there's a lot of it out there now; if some of you work there, I'd love to talk to you. Kelly Shortridge was on a great podcast, Security Sandbox, and she was talking about the need for UX in privacy and security, because so much of it out there doesn't get used, because it's too hard.

Right, it takes a cadre of super senior engineers to wire up and make different security and privacy technology work, and her argument, which I totally agree with, is that a little security and privacy stuff that gets used is better than really robust security stuff that doesn't get used. That seems obvious, but so many of these companies are optimising for battening down the hatches, making it so difficult to use that people aren't using it correctly or using it at all.

And so my challenge to you all, if you work in that field, is to think about that user experience. People like to bring up the, well, it's either secure or it's usable paradox.

I say to that, like I do to the "I can't tell you what the math says" thing: hey, tough cookies, we gotta crack this. We're smart people, we can figure out how to make things more usable without making them totally insecure.

So my provocation to you here is to help people understand how that data is being used, give them their Sherlock Holmes.
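To make that concrete, here's a minimal sketch of surfacing the strongest signals behind a recommendation in plain language rather than showing the math. The signal names, weights, and wording are invented for illustration, not any real product's.

```python
# A minimal sketch of a "Sherlock Holmes" explanation: show the two or three
# strongest signals behind a suggestion in plain language. Signal names,
# weights, and phrasing are invented for illustration.

def explain(recommendation: str, signals: dict[str, float], top_k: int = 2) -> str:
    strongest = sorted(signals.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    reasons = " and ".join(name for name, _ in strongest)
    return f"We suggested {recommendation} because of {reasons}."

print(explain(
    "this hiking guide",
    {"places you've searched recently": 0.7,
     "your saved trails": 0.5,
     "time of year": 0.1},
))
# -> We suggested this hiking guide because of places you've searched recently and your saved trails.
```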

Let them tune and throttle those recommendations, so I am now coming up on one year Facebook-free, yes. But before I quit that, Facebook thought I was an African American woman on some days. Selling me products for natural hair and just, I wanted the ability to say, that's not me. And if you can give people usable ways to give you that feedback about how your algorithm is doing for them, you get better data, they get a better experience.

So again, this isn't about here's the laundry list of checkboxes and I can uncheck African American woman; that is not an experience I'm likely to actually go through. This is also like the pair of shoes that follows you around the Internet after you've bought them: how can I say, enough, I got those? Helping people do that, those are features you might wanna put in your roadmaps.
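Here's a minimal sketch of what those roadmap features might look like as data: explicit "that's not me" and "enough, I got those" signals that feed back into the model. The field names and structure are assumptions for illustration, not any real system's.

```python
# A minimal sketch of capturing "that's not me" and "enough, I got those"
# feedback as explicit signals. Field names and storage are illustrative.

from dataclasses import dataclass, field

@dataclass
class UserModel:
    inferred_attributes: dict = field(default_factory=dict)   # attribute -> confidence
    muted_items: set = field(default_factory=set)

    def not_me(self, attribute: str):
        # Zero out the inferred attribute instead of arguing with the user.
        self.inferred_attributes[attribute] = 0.0

    def got_it_already(self, item_id: str):
        # Stop retargeting something the person has already bought.
        self.muted_items.add(item_id)

profile = UserModel({"interested_in_natural_hair_products": 0.8})
profile.not_me("interested_in_natural_hair_products")
profile.got_it_already("shoes_sku_1234")
```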

And then the how might we help people defend themselves, right, start having those conversations, how might this be hijacked? How might this go wrong? And take that conversation seriously.

Again, not to be dystopian, but to be in service to your users.

And then the last thing I wanna talk about is this idea of roadmaps and wayfinding and how you actually manage these things, right. So going back to, what is Alexa, on any given day, capable of? If you're a product manager, how are you tracking and managing that work? It might be different, again, than when it was a monolithic journey.

When you knew, okay, today the user story is this and we're gonna do it from beginning to end. Now you might be working on little tiny tweaks, and it's hard to show people in total what they're working on. I know I've had this experience, again, when I was working on the robot.

We had to have a whiteboard full of little automations that we had built in because we didn't have any other way for us to know what to expect, right.

If yesterday this wasn't automated and today it was automated and somebody's going through using that, we had to have this whole laundry list of things that they had to be aware of, it was not a good system.

So what I'm suggesting, again, is because the AI does not learn the way we think it does, right. This is a reference to the idea that if you built an artificial intelligence and you told it to make paperclips and then you just let it run, there's a possibility that it just consumes all of the materials in the universe to make paperclips, because it's not smart enough to know it should stop at some point. It doesn't know that human beings already got wiped out by the first round of paperclips and it's still printing them out.

So if they're not gonna learn like us, how are we gonna manage how they learn? John mentioned PG&E, so yes, I worked at PG&E. It's a 110-year-old power utility covering most of the state of California; it's the largest utility in the US and one of the largest in the world.

And they have something like 2.4 million electrical poles.

And somebody did some math and said, okay, well, if we're up to inspecting 10,000 poles a year, hooray for us, and then somebody else did some math and said, that means poles have to last 200 years and we're gonna need to figure out which poles to target, so let's create an algorithm that tells us where we should focus our work.

And they're gonna march off and do that, and that's not very controversial, right, I have no problem using algorithms to judge electrical poles; I have lots of problems using algorithms to judge people. But I said, before we just go off and do that, would you all indulge me in a design workshop on this algorithm so we could co-create this together? Again, I'm not a math person, I'm not gonna write the math. But I wanted to see what this would be like, and so here's an example of a very simple model: an index of criticality against health. How healthy is this pole, either reported or estimated, and if it fails, how critical is that? Is it in a wildfire zone, that's super critical; is it in downtown San Francisco, pretty critical.

Right, so very simple model, and let's avoid that top right quadrant.

Uncontroversial.
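To make that workshop model concrete, here's a toy version of the two-axis idea in code. The fields, thresholds, and scores are invented for illustration; this is not PG&E's actual data or algorithm.

```python
# A toy version of the two-axis workshop model: pole health vs. criticality
# of failure. Fields, thresholds, and scores are invented for illustration.

def criticality(pole: dict) -> float:
    score = 0.0
    if pole.get("in_wildfire_zone"):
        score += 0.7
    if pole.get("in_dense_urban_area"):
        score += 0.4
    return min(score, 1.0)

def health(pole: dict) -> float:
    # Lower is worse; a reported condition trumps the modelled estimate.
    return pole.get("reported_condition", pole.get("estimated_condition", 0.5))

def inspect_first(poles: list[dict], health_cutoff=0.4, criticality_cutoff=0.6) -> list[dict]:
    # The "top right quadrant": poor health AND high criticality.
    return [p for p in poles
            if health(p) < health_cutoff and criticality(p) > criticality_cutoff]

poles = [
    {"id": "A", "in_wildfire_zone": True, "estimated_condition": 0.2},
    {"id": "B", "in_dense_urban_area": True, "reported_condition": 0.9},
]
print([p["id"] for p in inspect_first(poles)])  # -> ['A']
```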

Then we got into the data, and we had someone (mumbles) all this data. We went through this conversation of what data do we have and what do we know about it? And this was actually the most interesting part, because this is where there were long-time business people in the room, there were developers in the room, designers in the room.

And the business people could say things like, oh, that number is artificially high, because people in the field are using that field to indicate something else, right. So it's called crossarm, wood, but that's actually a dumping ground for this other piece of data, so don't trust that, right.

We could have these conversations of, whoa, that is a lot. I think I might've blocked this out, but cars hit poles so often that it's actually called a car pole.

That's a field.

And there's some discussion about gosh, maybe we need a separate work stream on how we avoid so many car poles, 'cause that's a lot. So workshopping this together just helped us all better understand what went into that algorithm. It helped me as the designer of that system, who was then gonna make that information available to the people in the field, it helped me understand how to frame that data and that algorithm so that they understood what was behind it a little bit better.

And I encourage you all to do this together, to not just leap into making the algorithm, because once you've done that, again, it's opaque, it's really hard to unpack that.

So you'll get good shared mindset if you can co-create these algorithms a little bit.

And then on the side there, it's just here's our, again, our model of what tends to go wrong and where we can focus.

Yeah, turns out trees and squirrels are the number one problem.

Although I guess you don't have squirrels here, I just learned, there's no squirrels, there's really huge spiders.

Cute fuzzy spiders, I'm gonna look for one after this. It's also important, as a product person, and I mean developers and designers too, to know why something doesn't work.

So this is an example, Google Maps launched their AR walking directions.

Has anybody, is that out here, is anybody in that beta test, in that A/B test? The solution they arrived at is pretty pedestrian, it's not that crazy, it's pretty simple.

But the designer on that, I got a chance to speak with her, and she made 120 prototypes and tested 120 prototypes. This is one that was so beloved.

This little fox, you follow this fox down the street, so cute, literally everyone loved it.

Except it's Google, so then people wanted to talk to the fox.

They wanted to ask the fox, is this restaurant any good? What's the quickest way, right, that's what you expect from Google.

And the reality is, that fox was not gonna be that smart at launch, so that idea was not gonna work. It wasn't that it wasn't gonna work 'cause it was a bad idea or nobody liked it, it just wasn't ready for the time. And I was saying to her, put the fox on the back burner, 'cause you could bring it back, everybody loves it. Interestingly, also, one of the first things they had tried was, you're walking along and there's the blue line, like you have in the car. And people were just following the blue line and walking into things.

So she said, okay, well, that's a design problem, solve for that.

So she came up with an idea of a stream of particles you follow down the road, and it got wider when it was less sure about where to go and narrower when it was more sure.
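The idea behind that is simple enough to sketch: map the confidence of the heading estimate to the rendered width of the guidance. The numbers here are made up; this is not Google's actual implementation.

```python
# A minimal sketch of the idea behind the particle stream: spread the guidance
# wider when the heading estimate is less certain. All numbers are made up.

def stream_width(confidence: float, min_width=0.5, max_width=4.0) -> float:
    """Map confidence in [0, 1] to a rendered width in metres."""
    confidence = max(0.0, min(1.0, confidence))
    return max_width - confidence * (max_width - min_width)

print(stream_width(0.95))  # very sure -> narrow, ~0.68 m
print(stream_width(0.30))  # unsure   -> wide,  ~2.95 m
```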

And she thought, I am such a genius, it's such an amazing idea, I just, I've done it, and this is prototype 50 or something, check. And went and tested it.

And people were like, yeah, this is cool.

What's with the trash? She's like, what? There's trash, garbage here, I don't know.

And she's like, oh, man, my idea didn't work. People were like, no no no, don't give up on your idea that you love, she's like, they literally called it garbage.

I have to listen to that, right.

So you've gotta be able to take your work out into the world and hear back what people actually think about it, and not just whether they like it or not, but how are they actually using it, what are those consequences? So I think it's better, when you're planning and roadmapping and managing, to not frame it as, today we're gonna build the AR directions; it's gotta be questions like, how might we best direct people to the place? And I know that that feels very 101, but I also know that every single day, well-intentioned product managers show up with prescriptions and not questions.

So really, you've gotta push yourself when you're doing these smart things to focus on what that outcome is or what your question is over what the thing is, because again, back to the paperclips, it may not be anything like what you thought it was.

You may get there through a totally different route. You may discover, because people call your thing trash or love your fox, that things are different. And then how is the team gonna know what's working, what is your learning loop really like? Because it's not gonna be the way, again, where you have this monolithic journey, throw together a clickable prototype and get some quick feedback.

It's harder to do when you're actually testing something that's got some smarts in it.

So I know at Google, they literally have a whole team that is about that rapid prototyping, and they're taking that stuff right out to the street and putting it in the hands of people to have a real life relationship or test with that. So think about how you're gonna know what's working, because it's not gonna be obvious.

Your math people are gonna tell you the math is the math. And you're gonna need to be able to take apart what's actually working for people with how the tech works.

And then yeah, is it smart in the right way? Is it bathroom automation, where I'm arguing with it, just give me the water?

Is it the doctor who really wants control over this part of the robot but not that part? What is smart? So those are the three lenses for designing smart things, so if you're gonna make an agent, what are you automating and how do you explain what that does? So I don't know, before I showed that example, if people would've called Wix automation, and I don't know that I would've either until I started getting into doing this talk. Because to me, it's a tool that takes away my need to hire web developers.

But really, it's just automating some parts of web development, because web developers and designers made Wix.

So how are you going to explain what your thing is capable of and what it's automating? And then, for those detectives and defenders, what are the smarts doing, how can I interrogate that algorithm? Lightweight, again, don't show me the math, don't show me the million checkboxes, but give me some clues about how you're making those decisions for me.

And then how do I avoid those negative outcomes? I was actually at a conference recently with a bunch of biomed people.

And they were saying, please, designers and developers, go to the, well, I'm not in the US, but the DOD, the Department of Defence, start getting involved in truly defense-related things, because it's going to get worse.

So one guy was telling me, do you guys know what CRISPR is, right, CRISPR is technology that lets us edit genes pretty effectively. And you can pair it with something called a gene drive, which is just the idea that you attach it to a gene that your population is very likely to take up, and the next thing you know, you're editing whole biomes, whole genomes of things.

So I feel like some day, I'm gonna have an app on my phone that's like, oh, okay, my genome was updated at 2 p.m. Yeah, that one I asked for. Oh, no, that one I did not want.

We're gonna have to start putting defences in the hands of the end users, because the police are not gonna necessarily do it well for us. So how do you help people defend themselves? And then roadmaps and planning.

How do you know what you're gonna automate, how do you know what to develop, and how do you know if it's working? And again, you've gotta come up with new ways to get that feedback from users or to do those learning loops.

And it might take a little bit more than, again, those clickable prototypes or things you're used to doing for the quick and dirty feedback.

Not to say you can't do lo-fi prototyping, but at some point, you're actually testing the math, 'cause you wanna know, is the fox too smart or too dumb? Is the trash trash? And so here's just some resources.

Superintelligence by Nick Bostrom; the idea of superintelligence is when the technology becomes as smart as humans.

And some people say, somebody yesterday was saying Elon Musk is saying it's happening by 2030. Most people think it's more like 2050 or 2080. I'm starting to become sceptical that that's really even gonna ever happen, the superintelligence.

Anyway, my point here is, when it's superintelligent, you don't need me, you don't need designers, right. But on the road to superintelligence, you're gonna need someone who can really make sure that the user experience of your AI is working for them and not just working for the developers or the person that issues the orders. And Weapons of Math Destruction is a really great book. Cathy O'Neil talks about the dangers, and it's not a totally dystopian thing, there's a lot of practical advice in there. And one of the things she talks about is knowing when there are good indicators for, (mumbles) I'm thinking here.

So basically, she brought up the example of, in the US, we have standardised testing for schools, right, and this is put into place because we decided we needed to have some data to know whether our investment in schools was paying off or not.

It's really more a test of the school than it is the student.

But it gets misused because it gets attached to very high stakes outcomes, right.

So some teachers have lost their jobs, for example, when their test scores aren't good enough.

And what's sad about that is I've been involved with enough of that data to know, those results aren't very stable, right.

So a teacher is excellent one year, terrible the year after that, in the middle, back up to being good.

And when you start to see people moving around in your algorithm like that, that's a good indication that you don't have a good algorithm, that that's not a good predictor.

'Cause that teacher wasn't great, terrible, okay, great, okay, great, right.

They're probably in one of those categories. So you wanna look for that stability of outcomes. Her counterexample to that that I really enjoyed was the idea of credit risk, and I don't know if you have that here, but in the US, we have this idea of credit risk.

And it turns out, and I did not plan this, one of the best indicators for whether you're a credit risk or not is whether you pay your power bill on time.

PG&E did not pay me to say that.

But it's just to say that someone who is predictably paying that bill, that shows that they have a stable house and all of that, that's a good indicator of outcomes. So you've gotta be looking for things that give you good predictive outcomes and not just things that you think are giving you good outcomes.

And one way to test that, again, is to see how stable are my predictions.
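Here's one minimal way to sketch that stability check: compare how the same entities rank between two scoring runs. The data and threshold are invented for illustration.

```python
# A minimal sketch of checking prediction stability: if the same teachers'
# scores jump around wildly between years, the model is probably not a good
# predictor. Data and threshold are invented for illustration.

from scipy.stats import spearmanr

scores_year_1 = {"teacher_a": 92, "teacher_b": 40, "teacher_c": 75, "teacher_d": 55}
scores_year_2 = {"teacher_a": 31, "teacher_b": 88, "teacher_c": 47, "teacher_d": 90}

teachers = sorted(scores_year_1)
rho, _ = spearmanr([scores_year_1[t] for t in teachers],
                   [scores_year_2[t] for t in teachers])

print(f"rank correlation between years: {rho:.2f}")
if rho < 0.5:
    print("Scores are unstable year to year; be suspicious of this algorithm.")
```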

And then Sonia Koestering on Designing with Bad Data, this is a talk she gave at I think Interaction a couple years ago.

One of the things she talks about is something like 12,000 people a year are declared dead in the United States that are not really dead, their data was just screwed up somehow.

Turns out it's pretty hard to come back from that kind of death too, just like real death. She's got a lot of good examples, especially in the healthcare field, about things to watch out for, and I highly suggest that. And again, for those of you who work in security and privacy, I really enjoy the Security Sandbox podcast. So with that, I've also written a book, published by O'Reilly, that came out in April. It's really about collaboration and how we all work together, but there are definitely some parallels to designing smart things, because again, you can't go off and do it on your own the way you maybe used to be able to, right. Probably over half of this room could just go off and make an app today if they wanted to.

But as you're designing these smart things, and you need to know what should I automate, how do I know if it's working, all of that takes a cast of thousands.

And when you think, again, about how you communicate, well, here's what our AI does today and here's what it's gonna do tomorrow and here's what it did yesterday, all of that takes some storytelling around collaboration for people who are not really close to it.

So this is just me saying to you, be sure to think about your team and your collaboration as you're designing these smart things, 'cause you're gonna need more and more perspectives to make sure it's safe, secure, private, liked, all of that.

So with that, I will say thank you.

(audience applauding) (funky electronic music)