Designing smart things: Balancing ethics and choice

The design community has become highly skilled at crafting UIs, whether on a screen, through voice, or via gestures. Meanwhile, products like the Nest thermostat, the software running wind turbines, and autonomous cars are increasingly automating choices to simplify their use. While data scientists and AI experts are becoming more adept at delivering these algorithms, there hasn’t been enough discussion about their impact on design practice.

As the technology we use becomes more intelligent, empowered, and automated, how will we designers need to evolve our practice? And how do we, as stewards of user needs and rights, ensure that the algorithms people use actually reflect the moral and ethical choices we might make as humans? Is that even the goal? Gretchen Anderson explores the challenges we face when designing the user experiences of the complex behavioral agents that increasingly run our lives.

We hardly need reminding that we don’t really have Artificial Intelligence yet – we have lots of clever things, but they’re still just maths.

If it is written in Python, it’s probably machine learning. If it is written in PowerPoint, it’s probably AI. – Mat Velloso

“Smart Things” are different from the things we built before. We don’t always know what will work, or even how things work when they do give us the results we expect.

Three lenses:
1. Assistants and agents – making useful things
2. Detectives and defenders – ensuring the actions of things can be explained, and that they don’t do harm
3. Roadmaps and planning – deciding what you are going to build

Assistants and agents

Gretchen always asks, “do we need this automation?” A useful example is the nail polish dryer that turns off if you remove your hands, encouraging you to keep them in there until the polish is dry… or Gmail nudging you about emails you haven’t responded to. These little nudges still provide assistance; you don’t have to create a whole monolithic assistant with a persona.
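A nudge like this can be a simple rule long before any model is involved. A minimal sketch in Python (the field names and the three-day threshold are invented for illustration, not Gmail’s actual logic):

```python
from datetime import datetime, timedelta
from typing import Optional

def needs_nudge(last_inbound: datetime, last_reply: Optional[datetime],
                now: datetime, wait_days: int = 3) -> bool:
    # Flag a thread the user received but hasn't answered in wait_days.
    # No persona, no chatbot -- just a targeted bit of assistance.
    unanswered = last_reply is None or last_reply < last_inbound
    return unanswered and (now - last_inbound) > timedelta(days=wait_days)

now = datetime(2019, 11, 14)
print(needs_nudge(datetime(2019, 11, 10), None, now))  # True: worth a nudge
print(needs_nudge(datetime(2019, 11, 13), None, now))  # False: too soon
```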

Some of this assistive stuff is just clearing away the cruft of a UI, cutting through to what you need.

Wix uses a lot of smart features to make it easier for people to create a website. It’s automation, but ultimately developers and designers made it… so it’s not the end of humans doing smart work.

Assistants are dynamic and responsive. Their uses are emergent, but they can still be a bit creepy. Not everyone will find it helpful if your maps app suddenly offers a route home!

You can start just by automating tasks and learning what people value and need.

Detectives and defenders

“Is this necessary?” is always a useful question to ask. While observing a surgical robot project, Gretchen saw people continually gloss over UI issues by saying “oh, we’ll automate that”. She’d ask whether that was a good idea – doctors actually wanted to control some things themselves, not hand them over to a questionable AI.

Also, be careful not to fixate on a specific solution. Recently, people tried to train an AI to differentiate headshots of people of different Asian ethnicities. They initially expected it to use physical differences in facial features, but it actually relied on cultural cues like hairstyles. AIs don’t work the way we expect.

We need to be cautious and think about how things could go awry.

book: Radicalized – Cory Doctorow

An example: an expenses app that incidentally shared more details than it should have. It’s easy to create systems that follow a happy path, but we have to consider the potential for abuse or accidental data sharing.

We also need to make security features usable enough that people actually use them – don’t make them so hard that people turn them off.

Show people how their data is used; let them tune and throttle what your algorithms are doing; and, importantly, ask how we might help people defend themselves from mistakes and abuse.
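One way to make “tune and throttle” concrete is to expose the algorithm’s behaviour as plain settings rather than burying it. A hedged sketch, with all names invented:

```python
from dataclasses import dataclass

@dataclass
class SuggestionSettings:
    # Every field here is something the person can see and change.
    enabled: bool = True
    max_per_day: int = 5        # throttle: how often we may interrupt
    use_location: bool = False  # sensitive signals are opt-in, not default
    explain_why: bool = True    # always offer "why am I seeing this?"

def may_suggest(s: SuggestionSettings, shown_today: int) -> bool:
    return s.enabled and shown_today < s.max_per_day

prefs = SuggestionSettings(max_per_day=2)
print(may_suggest(prefs, 1))  # True
print(may_suggest(prefs, 2))  # False: the throttle kicks in
```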

Roadmaps and planning

How do you manage the emergence of this technology, as a dev or product manager? What is this Smart Thing capable of today and what might it be doing tomorrow?

Remember that smart things don’t work or learn like us. There’s a classic thought experiment of an AI told to make paper clips, but never told to stop – would it just consume all the world’s resources making paper clips? How, and why, would it know it should stop?
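The thought experiment fits in a few lines: the machine only honours the limits we actually encode. A toy sketch (all numbers invented):

```python
def make_clips_unbounded(resources: int) -> int:
    # No stopping condition beyond running dry: the goal is simply "more clips".
    clips = 0
    while resources > 0:
        resources -= 1
        clips += 1
    return clips

def make_clips_bounded(resources: int, target: int, reserve: int) -> int:
    # Explicit limits stand in for the context a human would take as read.
    clips = 0
    while clips < target and resources > reserve:
        resources -= 1
        clips += 1
    return clips

print(make_clips_unbounded(1000))         # 1000: everything consumed
print(make_clips_bounded(1000, 50, 500))  # 50: stops when the goal is met
```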

Use models and metrics to make decisions, much as we make risk decisions using an index of criticality/health.
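A hedged sketch of what such an index might look like (the factors and weights are invented; the value is in making the trade-off explicit and comparable across decisions):

```python
# A toy criticality index: score a proposed automation against weighted
# risk factors, each rated 0.0 (benign) to 1.0 (severe).
WEIGHTS = {"harm_if_wrong": 0.5, "hard_to_reverse": 0.3, "low_user_control": 0.2}

def criticality(scores):
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

surgery_step = {"harm_if_wrong": 0.9, "hard_to_reverse": 0.9, "low_user_control": 0.8}
route_suggestion = {"harm_if_wrong": 0.2, "hard_to_reverse": 0.1, "low_user_control": 0.3}
print(criticality(surgery_step))      # 0.88 -> keep the human in control
print(criticality(route_suggestion))  # 0.19 -> safer to automate
```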

Talk to the people who really understand the data – can you trust it? They’ll know things like “oh, you can’t trust that field, people use it as a catch-all for all kinds of other things”. Don’t just throw data at your algorithm without workshopping it first.
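Before the data goes anywhere near a model, a quick profile can surface the fields people warn you about. A sketch with pandas (the column name and values are hypothetical):

```python
import pandas as pd

# Profile a field that domain experts say is used as a catch-all.
df = pd.DataFrame({"reason_code": ["RETURN", "OTHER", "OTHER", "", "OTHER",
                                   "DAMAGED", "OTHER", None]})

col = df["reason_code"]
blank_or_null = (col.isna() | (col == "")).mean()
print(f"missing or blank: {blank_or_null:.0%}")  # 25%
print(col.value_counts(dropna=False))            # is one value a dumping ground?
# If "OTHER" dominates, the field carries little signal -- go back and ask
# the people who fill it in what it actually means.
```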

Know why something doesn’t work

There was a beta of Google Maps’ walking directions – a relatively simple feature, but it went through over 120 mockups, including a much-loved animated fox that acted as a guide. The fox was ultimately dropped because people expected to be able to talk to it – it’s Google, so they wanted it to tell them about restaurants…

Another idea was a blue line to follow – a classic way to mark running routes, for example. But people followed it so attentively on their phones that they ran into things. Yet another idea showed a particle field, but people thought it was – literally – trash; they didn’t understand why the feature had rubbish everywhere.

Ask questions, don’t make assertions. It’s really easy to fall into this trap, particularly when a PM turns up with strong opinions from a stakeholder. But you have to stay inquisitive and keep asking questions.

How will the team know what’s working? What is the learning loop really like?
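One answer is to instrument the feature itself: log what the smart thing suggested and what the person actually did, so “is it working?” has a measurable answer. A tiny sketch (the event shape and numbers are invented):

```python
# A learning loop in miniature: record suggestions and outcomes.
events = [
    {"suggestion": "route_home", "accepted": True},
    {"suggestion": "route_home", "accepted": False},
    {"suggestion": "route_home", "accepted": True},
]

accept_rate = sum(e["accepted"] for e in events) / len(events)
print(f"accept rate: {accept_rate:.0%}")  # 67% -- decide: keep, tune, or cut
```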

Don’t just build what’s loved; make things “smart” in the right way.

Some resources…

  • book: Weapons of Math Destruction – Cathy O’Neil
  • book: Superintelligence – Nick Bostrom
  • netflix: The Secret Rules of Modern Living
  • talk: Sonia Koestering on Designing with Bad Data
  • book: Moral Machines – Wallach & Allen
  • blog: simplysecure.org
  • book: Algorithms to Live By – Brian Christian & Tom Griffiths
  • podcast: Security Sandbox

Gretchen’s own book: Mastering Collaboration (O’Reilly), on how teams can develop smart things better together.

@gretared