Being Human in the Age of AI

There’s no doubt we’ve entered the Fourth Industrial Revolution, a new era of intelligence-based technological change transforming the way we work, live and relate to each other, as well as to the companies we buy from and engage with.

We can already see this in the wave of artificial intelligence-based technologies coming to the fore, impacting all aspects of engagement. Whether it’s servicing consumers through virtual assistants, for example, or pre-empting their choices and preferences through predictive analytics, AI is quickly pervading interactions and their outcomes. AI’s impact is also increasingly apparent in robotics and automated technologies that are creating a new type of organisation, shaking up the way we work and enabling new forms of collaboration.

But even as we further embrace AI and the idea that machines will make increasingly autonomous decisions, what’s equally clear is how important the human element remains in the enterprise. Far from AI machines running the world, it’s organisations that harness both machines and human ingenuity that will be best positioned for success.

Katja Forbes will talk about how we design for AI while successfully balancing the rise of the machine with human-centred values and intuition.

We are in an interesting time for AI. McKinsey predicts some 13 trillion dollars of AI-driven global economic activity in the next few years. As designers we have an important role to play in this.

So many AI talks are basically ‘the future is that we will be murdered in our beds by robots’. Katja thinks there are much more positive futures.

The AI-generated candy hearts experiment demonstrates how easily things can get weird when the training data is skewed and biased. A more unsettling project used a neural network to generate pick-up lines, although some were quite sweet and hilarious. There is often a lot of unintentional humour in AI work, and we should embrace that more than the idea that Skynet will bring about the end of the world.
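
Not the original experiment, but a minimal sketch of the underlying idea: even a toy word-level Markov chain (a stand-in for the neural networks behind such projects), trained on a deliberately skewed corpus, will cheerfully echo that skew in everything it generates. The corpus below is made up for illustration.

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the words that follow it in the corpus."""
    model = defaultdict(list)
    for line in corpus:
        words = line.lower().split()
        for current, nxt in zip(words, words[1:] + ["\n"]):
            model[current].append(nxt)
    return model

def generate(model, start, max_words=6):
    """Random-walk the chain from a start word until end-of-line or max_words."""
    word, out = start, [start]
    for _ in range(max_words):
        nxt = random.choice(model.get(word, ["\n"]))
        if nxt == "\n":
            break
        out.append(nxt)
        word = nxt
    return " ".join(out)

# A deliberately skewed "candy hearts" corpus: bears are over-represented,
# so the generated hearts end up being about bears far more often than not.
corpus = [
    "love you bear",
    "sweet bear hug",
    "bear mine forever",
    "cute bear love",
    "be mine",
]

model = train(corpus)
for _ in range(5):
    print(generate(model, "bear"))
```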

To frame what we are actually working with, it’s important to understand the basic types of AI:

  • Narrow AI, which is what we have now. It’s really just machine learning; these AIs can generally do one single task acceptably well.
  • General AI, which does not exist yet, is the stuff of Star Wars droids: sentient, general-purpose beings.
  • Superhuman AI, which also does not exist yet, is the idea of an intelligence wildly beyond human capability.

Designing for machine learning isn’t for the hand-wavy future. This is very much for the here and now. – Josh Clark

It’s important that we design things for humans. Design needs to get involved and keep users, humans and society front and centre.

An important factor for any AI project is to set expectations of what it can and can’t do. AI is bad at “easy things” and good at “hard things”: it beats humans at hard, pattern-based tasks like playing Go and chess, while humans are still better at “easy” things like recognising faces.

How do you work out your Minimum Viable Intelligence?

  • Does it respond when you interact with it?
  • Is it competent? The first ten interactions have to be flawless for people to trust it (“10 to win”).
  • Will it survive when people try to break it? Because people will try to break it, and they’ll probably be weird and creepy as well.

If the AI can’t pass all of these, why would people give it their credit card details?

AI gets a lot of over-promising – originally Cortana’s interface boldly said “Ask me anything”, but Cortana can’t really answer just anything, so the UI was eventually changed to something plainer like “search”.

Babylon Health does a much better job. It’s an interactive symptom checker, but it repeatedly tells you that it’s not a doctor. Actual doctors still get pretty upset as it can give people a false sense of security when they should see a real doctor. But the site does a good job of setting expectations.

Trusting invisibility – interfaces are disappearing, as people speak to devices or write natural language into a text box. How do people trust decisions made invisibly? How do you design for people to trust the code?

Anything we design will face questions of trustworthiness. – Designit

AI is absolutely riddled with bias. People often mistakenly believe that AI is unbiased. But as Microsoft’s Tay showed, we teach AI all its ethics and biases – Tay went full sexist, racist Nazi in less than 24 hours, with Twitter responses training the AI. People did this to the AI.

Even the simple example of candy hearts had a curious bias towards bears – lots of hearts about bears. That seems harmless, but bias becomes desperately important when you consider that a self-driving car needs to know what a wheelchair is; if it assumes a wheelchair will behave like any other moving vehicle, it will probably run someone over.

Microsoft tried again with another AI, Zo, but created a bot that was much more locked down; and that created its own problems. The new AI would censor things without understanding context, which is arguably as bad as or even worse than the original. Its biases are more devious and still quite disturbing. Microsoft have now published a guide to responsible bots, trying to make some sense of these experiments.

AI is now recognised as a company risk. Microsoft and Google measure it as a separate risk.

There are so many examples of bad AI, like Amazon’s sexist recruiting AI… and the only thing people can do is pull the plug. The bad data has trained the AI so deeply that it can’t be fixed. Many historical data sets accurately capture the biases of their time, and the AI dutifully recreates and perpetuates that bias.
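
A minimal sketch of how that happens – synthetic data and a plain logistic regression, not Amazon’s actual system: when the historical labels already contain a penalty against one group, the model learns that penalty as if it were signal, and retraining on the same data cannot remove it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical hiring" data - purely illustrative, not any real system.
rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(size=n)            # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)    # 1 = historically disadvantaged group

# Historical decisions: driven by skill, but with a heavy penalty applied to
# one group by the humans who made the original calls.
hired = (skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:", round(model.coef_[0][0], 2))
print("coefficient on group:", round(model.coef_[0][1], 2))
# The group coefficient comes out strongly negative: the model has faithfully
# learned the historical bias as signal, which is why "pull the plug" is often
# the only remaining option.
```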

Microsoft have put out some really good AI principles that address these issues. There’s even a card game (“Judgement Call”) to test analytical thinking around AI and open up conversations with your team.

So where are various governments with AI?

  • America has an executive order which essentially boils down to wanting to ‘own’ AI. It’s all about national security and ‘upholding American values’, and the word “ethics” does not appear even once.
  • Canada have a federal directive that makes ethical AI a national issue. They are focusing on how to monitor AI and on having human intervention as required.
  • Australia… well, we have some funding (a little under $30m over four years) to create national standards and an ethical framework for AI, although the amount and the way it’s divided raise a few questions.

Choosing your first AI pilot project? Ask these five questions (from Harvard Business Review):

  1. Will it give you a quick win? (results inside 6-12 months)
  2. Is the project either too trivial or too big? (is it the right size to be meaningful)
  3. Is your project specific to your industry? (company or industry specific projects can be evaluated more easily)
  4. Are you accelerating your pilot project with credible partners? (most organisations don’t have AI specialists)
  5. Is your project creating value? (what is the value and can you articulate it)

How do we avoid bad AI design?

  • There’s a canvas! (Katja points out she did not create it, but it is useful and a familiar format)
  • Expand who assesses AI experiences. The right people are often missing from the teams that assess AI experiences. – Dunstan Allison-Hope
  • Explore possible AI experiences – both good and bad. Ask what are the best and worst things that could happen?
  • Design for ongoing human training of AI. Ensure humans can intervene with AI experiences. AI training is an ongoing task.
  • Be more transparent with the public. We should be giving people information about how our AI works, so they can make an informed choice about using it. The more secretive we are, the more Tays and Amazon hiring AIs we’re going to get.

What matters tomorrow is designed today.

The runway for AI is very short. A $13t market means there is a lot of work going on. This is a field that will grow exponentially. We have to think about it today and make sure we are having an impact on what’s being designed.

@luckykat