Artificial Intelligence: Making a Human Connection

We’ve been talking about robots and artificial intelligence forever, or so it sometimes seems. Images of smart machinery inhabited our thinking and our literary and cultural imaginations long before technology made such objects possible. It is tempting to keep separate the art and science of the robot and the artificial intelligence that underpins it. However, there are reasons to thread them back together. After all, the AI of our imagination is the AI we have built.

Genevieve Bell explores the meaning of “intelligence” within the context of machines, and its cultural impact on humans and their relationships. Genevieve interrogates AI not just as a technical agenda but as a cultural category, in order to understand the ways in which the story of AI is connected to the history of human culture.

Genevieve is going to try to talk about what an anthropologist would say about AI.

“I’ve had two flat whites and a jam donut, which means I don’t speak any slower than this… keep up!”

Genevieve has spent the last few years closely watching where technology is taking humans. She’s currently at ANU, working on how engineering needs to be completely rethought to cope with the new world.

All the conversation about AI hasn’t been setting us up to understand it and make sense of it. Genevieve, as an anthropologist, wants to bring the people and the tech together… but how do you do field research with an AI? How do you go hang out with an AI?

How does an anthropologist unpack all this?

Went back to the basics – James P. Spradley’s book The Ethnographic Interview. Ask descriptive, structural and contrast questions, with the goal of working out how your research subject makes sense of the world.

  • Descriptive – How do they describe who they are and where they’re from? How do they present themselves and describe themselves? How do they make sense of themselves?
  • Structural – How do people do things? How do they find things, shop for things? How do they make sense of things?
  • Contrast – How do people say they are different from someone else? How do Kiwis and Aussies describe the difference? How do people from Melbourne make it clear they are Not.From.Sydney? How do people differentiate themselves?

So let’s pretend we’re breaking^H^H talking to the John Bot.

What’s your name?

  • How does an AI describe its history and context?
  • AI… even the term is interesting. How long since we used the word ‘artificial’ as a good thing? These days we’d say something more like Bespoke Intelligence, perhaps organic, locally-sourced.

Who raised you?

  • AI has a lot of parents! They came from perspectives as different as psychology and mathematics, with wildly different ideas of how to model a human. Are humans electrical impulses going in and out of a brain? Are they a superego?
  • Early cybernetics was kicking off in the 1950s, looking at how humans and machines would interface. Interesting as it was, that conversation strayed into politically dangerous territory at the time: it sounded a bit too close to socialism. So, to get away from the idea of social and political control, the term ‘artificial intelligence’ emerged as something more palatable.
  • So AI has a lot of geopolitical influences in its upbringing. It’s not just who was involved, it’s who was written out of the record (like Turing, who was being persecuted for being gay).
  • Where the money came from is also significant.

illustration of a mechanical duck that could waddle, eat and poop

Where did you come from?

  • We’ve been thinking about bringing machines to life for a very long time. Not just machines, but think of Frankenstein’s monster – brought to life by electricity.
  • So making mechanical things look real has a very long history. AI has ancestors and heritage.
  • Most of our stories of artificial life go badly – think of Terminator, Blade Runner… they all end in tears (in rain). Asimov at least suggested some laws of robotics to give us a chance of things going well.

What do you do every day?

  • We all have our daily rituals: things we do every morning, then things we do on unusual days like going to a conference. We will all have used AI already today, even if only because we caught a train or bus run by systems backed by algorithms (the building blocks of AI).
  • AIs are trained with incomplete data about things that have already happened. They are trained by the past, then influence the future by presenting new data to humans.
  • What are your dreams? (It’s hard to ask an AI what it isn’t, since it is based on an incomplete dataset of events in the past.)
  • Can an AI truly create? Can we have an AI artist?
  • Note the terms marked vs unmarked categories, e.g. why the “Test Team” is the men’s team, and the “Women’s Test Team” has to be specified.
  • AIs have issues with unmarked categories; they have problems with extrapolation. Cars that can avoid elk on the road can avoid other four-legged animals, but they run straight into kangaroos because kangaroos just don’t move the same way (see the sketch after this list).
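
A minimal sketch of that extrapolation problem, in Python. Everything here (the field names, the hop-height threshold, the extra animal) is invented for illustration, not from the talk: a hazard detector “trained” only on past, elk-like motion simply never learns to flag an animal that moves differently.

```python
# Hypothetical sketch: a hazard detector trained only on past, elk-like motion.
# All names, numbers and thresholds are illustrative, not from the talk.

def train_hazard_model(past_observations):
    """Learn the range of hop heights seen in the training data."""
    heights = [obs["hop_height_m"] for obs in past_observations]
    return {"min": min(heights), "max": max(heights)}

def is_hazard(model, observation):
    """Flag anything whose motion falls inside the learned range."""
    return model["min"] <= observation["hop_height_m"] <= model["max"]

# Training data: only animals that keep four feet near the ground.
past = [
    {"animal": "elk", "hop_height_m": 0.1},
    {"animal": "deer", "hop_height_m": 0.2},
]
model = train_hazard_model(past)

# A kangaroo moves nothing like the training data, so the model misses it.
kangaroo = {"animal": "kangaroo", "hop_height_m": 1.5}
print(is_hazard(model, kangaroo))  # False: the past didn't contain this future
```

The point isn’t the toy maths; it’s that the model’s whole world is the incomplete, already-happened data it was trained on, which is exactly the “trained by the past” problem above.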

Instagram photo: “@feraldata rocking #wds17” – shared by Ben Buchanan (@200ok)

AI stories matter. The way we talk and think about AI changes how we deal with these systems… consider the linguistic slip of “self-driving car” – we give the car a self. An identity. That’s different from a car with “driver assist” or “autopilot”.

How do we remember to ask more questions about the AIs around us and what we’re doing with them? The AI behind our social feeds presents a version of the world we probably wouldn’t choose, yet we allow it to shape our lives. Very few AIs or algorithms are created with the user’s best interests at heart; they are created to maximise profit.
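
A toy illustration of that last point, again in Python and entirely invented (the post titles and scores are made up): ranking the same handful of posts by the platform’s expected revenue gives a very different feed from ranking them by what the user said they were interested in.

```python
# Hypothetical feed data: titles and scores are invented for illustration.
posts = [
    {"title": "Long-form article you asked to follow", "ad_revenue": 0.01, "stated_interest": 0.9},
    {"title": "Outrage bait",                          "ad_revenue": 0.20, "stated_interest": 0.1},
    {"title": "Friend's holiday photos",               "ad_revenue": 0.05, "stated_interest": 0.7},
]

# What a profit-maximising feed ranks by: expected revenue for the platform.
by_profit = sorted(posts, key=lambda p: p["ad_revenue"], reverse=True)

# What a feed built around the user's best interests might rank by instead.
by_interest = sorted(posts, key=lambda p: p["stated_interest"], reverse=True)

print([p["title"] for p in by_profit])    # outrage bait first
print([p["title"] for p in by_interest])  # the things you actually chose come first
```

Same posts, different objective function: the feed we get is a choice someone else made, which is the “version of the world we probably wouldn’t choose”.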

slide from the World Economic Forum showing steam power to electricity to computer/automation to cyber physical systems

There’s a lot of talk about the ‘4th wave of industrialisation’. The problem is that, while the framing may be accurate, it doesn’t include the human history of the same period.

When engineering appeared as a discipline, it was a radical intervention. It was a way to give structure to a chaotic world. The school of engineering that appeared in France was a response to a lack of structure – the king was dead, religion was losing dominance. Science was a way to give order to the world.

So we can give AI a back story. What about its future?

The next big thing is most likely managing data – the mana on which all AI is built. The challenge isn’t really about literally managing the data, though; it’s the technical systems that are emerging on top of it. Many things currently considered individual pieces – e.g. IoT, machine learning, deep learning – will eventually become a coherent system.

Genevieve said “I think we need to build the next applied science.” which is how she ended up at the ANU. “Be careful what you say you’ll do, because someone might say ‘great! get on with it’.”

Three questions, based on Autonomy, Agency and Assurance. These are loaded words! We live in a world where not everyone has as much autonomy as everyone else. We have difficulty thinking of something that serves us (robot comes from ‘robota’, meaning serf labour) as having autonomy. We have trouble determining acceptable limits and ranges of autonomy (assuming there even are any).

Genevieve is setting out to create this new applied science. She’s given herself five years. She’s busy and she’s hiring!

For more, check out the 2017 Boyer Lectures and her interview in the Next Billion Seconds podcast.