Designing for Voice: Alexa, Google Assistant, and beyond

There are few places where design is less evident than when you use a voice user interface, like the Amazon Echo or the Google Home. But as anyone who has used Alexa or the Google Assistant knows, it’s painfully obvious when a voice-based experience is not designed well. You go nowhere, fast. It’s the equivalent of a 404 page, but somehow more personally frustrating.

As a former voice designer for Alexa, and a current voice designer for the Google Assistant, I would like to talk about the ins and outs of designing for an eyes-free experience. What is voice design? What does it look like? Why is it important? What happens when you do it well? And what happens when it’s not designed at all?

Start here…

Algorithms.design is a huge collection of links created by Yury Vetrov that includes many of the links in this document (in a much nicer format), spanning current tools by function, examples of generative design in other disciplines, intros to AI/ML, and ethics.

https://algorithms.design/

 

Also, while it’s not specifically about AI: if you are interested in discussions around design ethics, the community in the How Might We Do Good Slack is tackling things like a design ethics framework, collective action, and a toolkit for overcoming the barriers to doing good:

https://join.slack.com/t/howmightwedogood/shared_invite/MjIyMjQ5ODM4NDY1LTE1MDIwMDcyNTItMjAwYjY2Njk0OA

Libratus poker AI and poker AI history

http://www.nytimes.com/2011/03/14/science/14poker.html

http://www.pokerlistings.com/from-loki-to-libratus-a-look-at-20-years-of-poker-ai-development

https://www.wired.com/2017/01/rival-ais-battle-rule-poker-global-politics/

http://www.access-ai.com/blogs/last-human-beat-ai-poker/

https://www.pokernews.com/news/2017/10/artificial-intelligence-poker-history-implications-29117.htm

https://forumserver.twoplustwo.com/29/news-views-gossip/brains-vs-ai-poker-rematch-coming-rivers-casino-1647075/ (CW: this is a poker forum so proceed with caution)

https://www.cs.cmu.edu/~noamb/papers/17-IJCAI-Libratus.pdf

https://www.cs.cmu.edu/~sandholm/Endgame_AAAI15_workshop_cr_1.pdf

https://www.reddit.com/r/MachineLearning/comments/7jn12v/ama_we_are_noam_brown_and_professor_tuomas/

http://science.sciencemag.org/content/early/2017/12/15/science.aao1733?rss=1

https://www.youtube.com/watch?v=2dX0lwaQRX0

http://www.cs.cmu.edu/~noamb/papers/17-AAAI-Refinement.pdf

Poker endgame theory/systems

http://www.icmpoker.com/en/blog/nash-calculator-and-nash-equilibrium-strategy-in-poker/

https://www.pokerstrategy.com/strategy/sit-and-go/sage-sitngo-endgame-system/

AlphaGo

http://fortune.com/2016/03/12/googles-go-computer-vs-human/

https://www.wired.com/2016/05/google-alpha-go-ai/

https://www.reddit.com/r/MachineLearning/comments/76xjb5/ama_we_are_david_silver_and_julian_schrittwieser/

 

I also highly recommend watching the AlphaGo documentary on Netflix!

 

AI design tools and projects

https://wit.ai/

https://huu.la/ai/typesetter

https://huu.la/ai/csstoucan

https://magenta.tensorflow.org/assets/sketch_rnn_demo/index.html

https://magenta.tensorflow.org/sketch_rnn

https://www.adobe.io/apis/cloudplatform/sensei.html

https://www.cnbc.com/2017/10/23/adobe-is-bringing-its-ai-platform-sensei-into-the-banking-industry.html

https://www.theverge.com/2017/10/24/16533374/ai-fake-images-videos-edit-adobe-sensei

https://www.fastcodesign.com/3068884/adobe-is-building-an-ai-to-automate-web-design-should-you-worry

https://airbnb.design/sketching-interfaces/

https://ml4a.github.io/

https://www.youtube.com/watch?v=VLQcW6SpJ88

https://github.com/tonybeltramelli/pix2code

https://medium.com/@thoszymkowiak/pix2code-automating-front-end-development-b9e9087c38e6

https://firedrop.ai/

https://www.theatlantic.com/technology/archive/2017/06/google-drawing/529473/

https://research.googleblog.com/2017/04/teaching-machines-to-draw.html

https://medium.com/netflix-techblog/extracting-image-metadata-at-scale-c89c60a2b9d2

https://blog.floydhub.com/turning-design-mockups-into-code-with-deep-learning/

 

Ethics and AI

https://www.huffingtonpost.com/entry/the-ethics-of-artificial-intelligence_us_599596ade4b00dd984e37cad

https://www.wired.com/story/why-ai-is-still-waiting-for-its-ethics-transplant/

https://docs.google.com/spreadsheets/d/1jWIrA8jHz5fYAW4h9CkUD8gKS5V98PDJDymRf8d9vKI/edit#gid=0

https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook

https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html

https://www.royapakzad.co/newsletter/

https://pubpub.ito.com/pub/resisting-reduction

https://www.recode.net/2017/11/30/16577816/artificial-intelligence-ai-human-ethics-code-behavior-data

https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

https://logicmag.io/01-interview-with-an-anonymous-data-scientist/

https://www.youtube.com/watch?v=F_QZ2F-qrGM

https://www.engadget.com/2016/08/16/the-next-wave-of-ai-is-rooted-in-human-culture-and-history/

https://www.creativereview.co.uk/delusion-data-driven-design/?mm_5a924d083f282=5a924d083f326

http://www.eyemagazine.com/blog/post/ghosts-of-designbots-yet-to-come

https://www.epicpeople.org/racist-by-design/

 

Automation and the future of work

https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health

https://blogs.adobe.com/digitalmarketing/analytics/next-retail-apocalypse-look-toward-banks/

https://www.nytimes.com/2015/04/19/opinion/sunday/the-machines-are-coming.html

https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

https://www.npr.org/sections/money/2015/05/21/408234543/will-your-job-be-done-by-a-machine

https://www.fastcodesign.com/90127514/good-news-designers-the-robots-are-not-taking-your-jobs

https://www.fastcodesign.com/3057266/designers-robots-are-coming-for-your-jobs

https://www.wired.com/2017/02/robots-wrote-this-story/

https://www.washingtonpost.com/pr/wp/2016/10/19/the-washington-post-uses-artificial-intelligence-to-cover-nearly-500-races-on-election-day/

https://techcrunch.com/2017/03/26/technology-is-killing-jobs-and-only-technology-can-save-them/

https://hbr.org/2016/11/how-artificial-intelligence-will-redefine-management

https://www.businessinsider.com.au/momentum-machines-funding-robot-burger-restaurant-2017-6

https://www.wired.com/story/googles-learning-software-learns-to-write-learning-software/

https://www.forbes.com/sites/steveolenski/2017/10/17/using-ai-to-improve-customer-experience/#15c58c256925

https://www.nytimes.com/2016/12/21/upshot/the-long-term-jobs-killer-is-not-china-its-automation.html

https://www.ft.com/video/f2196894-ba28-49fb-a0de-933f9d806b35

 

Diversity and inclusion (or lack thereof) in AI

https://medium.freecodecamp.org/why-we-desperately-need-women-to-design-ai-72cb061051df

http://journals.sagepub.com/doi/abs/10.1177/135050689500200305

https://www.inc.com/nancy-a-shenker/is-artificial-intelligence-a-feminist-issue.html

https://www.technologyreview.com/s/610192/were-in-a-diversity-crisis-black-in-ais-founder-on-whats-poisoning-the-algorithms-in-our/

 

AI progress and current state overviews

http://wolterskluwer.com/company/newsroom/news/2017/09/an-introduction-to-artificial-intelligence-ai-ux–the-human-expert.html

https://ai100.stanford.edu/2016-report

https://gigaom.com/2017/10/16/voices-in-ai-episode-13-a-conversation-with-bryan-catanzaro/

https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12

https://www.wired.com/2014/10/future-of-artificial-intelligence/

https://aiindex.org/2017-report.pdf

https://aiindex.org/

https://drive.google.com/drive/folders/1CD-hDgf684WPCT1fzD0QmnVKC03wZaF1

https://www.technologyreview.com/s/609611/progress-in-ai-isnt-as-impressive-as-you-might-think/

 

Relationship between AI and humans

https://magicalnihilism.com/2016/03/31/centaurs-not-butlers/

https://www.huffingtonpost.com/mike-cassidy/centaur-chess-shows-power_b_6383606.html

https://jods.mitpress.mit.edu/pub/issue3-case

http://www.abc.net.au/radio/programs/conversations/conversations-genevieve-bell/9173822

https://www.theguardian.com/technology/2016/nov/27/genevieve-bell-ai-robotics-anthropologist-robots

 

AI applications in mental health

https://medium.com/@davidventuri/how-ai-is-revolutionizing-mental-health-care-a7cec436a1ce

https://futurism.com/uscs-new-ai-ellie-has-more-success-than-actual-therapists/

 

Overview of design/UX + AI

https://www.forbes.com/sites/forbesagencycouncil/2017/04/12/the-role-of-ai-in-ux-design-computers-will-be-designers-apprentices/#24bc87715452

https://www.usertesting.com/blog/2015/07/07/ai-ux/

https://www.rtinsights.com/artificial-intelligence-and-ux/

https://usabilitygeek.com/artificial-intelligence-fill-gaps-ux-design/

https://www.wired.com/story/when-websites-design-themselves/  

https://bigmedium.com/speaking/design-in-the-era-of-the-algorithm.html

https://blogs.adobe.com/creativecloud/the-rise-of-artificial-intelligence-how-ai-will-affect-ux-design/

https://bigmedium.com/ideas/links/google-teaches-an-ai-to-draw.html

https://uxplanet.org/how-ai-is-being-leveraged-to-design-better-ux-8710efce79a1

http://www.drewlepp.com/blog/can-machine-learning-improve-user-experience/

https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c

https://medium.com/@creativeai/creativeai-9d4b2346faf3

https://algorithms.design/

https://uxdesign.cc/how-ai-will-impact-your-routine-as-a-designer-2773a4b1728c

https://www.sitepoint.com/artificial-intelligence-in-ux-design/

AI/tech fails

https://mixergy.com/interviews/cloudfactory-with-mark-sears/

https://www.thedailybeast.com/ces-was-full-of-useless-robots-and-machines-that-dont-work

https://www.technologyreview.com/s/609611/progress-in-ai-isnt-as-impressive-as-you-might-think/

https://www.theverge.com/2018/1/17/16900292/ai-reading-comprehension-machines-humans

 

Things that didn’t have a group!

https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/

https://youtu.be/5OTnCt3MnUQ?t=5h57m50s

https://www.economist.com/news/business/21664190-modern-version-scientific-management-threatens-dehumanise-workplace-digital

http://alistair.cockburn.us/Taylorism+strikes+software+development

https://www.theguardian.com/commentisfree/2017/nov/24/technology-capitalists-unionised-workforce-tech-sector

http://www.decolonisingdesign.com/

https://www.epicpeople.org/empathy-faux-ethics/

https://xkcd.com/1002/

 

Darla Sharp – Crafting Conversation, design in the age of AI

While all of us have experience designing screens, many of us don’t have experience designing for voice.

Darla currently works at Google (Assistant team) as a Conversation Designer, although the job may also be called Voice User Interface (VUI) Designer, or Voice Interaction Designer. In the end it’s just interaction design with a focus on voice.

Google is moving away from mobile-first to AI-first. Google Assistant’s product line is expanding rapidly, including some devices that do actually add a screen (although not as the primary focus).

Design + AI – there is an increase in voice-forward design. The question, of course, is why: when we all have smartphones, why do we need this additional modality?

  1. speed and simplicity
  2. ubiquity

When voice works, it really is quicker – simple things take a surprisingly large number of taps on a screen. For example, you can ask for the latest Gorillaz album on Spotify much faster than you can open the app, search for it, find the album, and tap to start playing it.

Phones are considered ubiquitous, but virtual assistants are spreading to other places and becoming more popular there. You shouldn’t be using your phone in the car… right?! So ubiquity is moving to the assistant, not the device.

Design considerations

  1. conversation design, which owes a lot to linguistics
  2. speakers (not the devices)
  3. the tools in the toolkit
  4. expanding ecosystem

Conversation design owes a lot to linguistics and to the way humans process language.

Words (sound into words) → Syntax (words into phrases) → Semantics (derive meaning) → Pragmatics (interpret meaning in cultural context).

This is really easy in a first language, basically instinctual. However, it is incredibly fragile: if anything breaks, the entire interaction falls down. If someone makes a mistake in a second language, it confuses the people talking or listening to them. Or if someone’s accent makes the sounds hard to understand, the most basic level of comprehension has broken.

How does this break out into conversation design?

Front end:

  • Words: What’s the weather today?
  • Syntax: In Alameda today, it’s 72 degrees and sunny.

Back end (most of the design time is spent here, on logic and UX flows)

  • Semantics
  • Pragmatics

This interaction requires knowledge of the user’s location and preferred units of measurement (degrees F or C?).
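
For illustration, a minimal sketch of that back-end logic in Python. Everything here (the context fields, fetch_forecast) is hypothetical, not a real Assistant API:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    city: str   # e.g. resolved from the device's location
    unit: str   # "F" or "C", from the user's profile

def fetch_forecast(city: str) -> tuple[int, str]:
    # Stubbed forecast in Fahrenheit; a real handler would call a weather service.
    return 72, "sunny"

def handle_weather_intent(ctx: UserContext) -> str:
    """Turn a parsed 'weather today' intent into a spoken reply."""
    temp_f, condition = fetch_forecast(ctx.city)
    temp = temp_f if ctx.unit == "F" else round((temp_f - 32) * 5 / 9)
    return f"In {ctx.city} today, it's {temp} degrees and {condition}."

print(handle_weather_intent(UserContext(city="Alameda", unit="F")))
# -> In Alameda today, it's 72 degrees and sunny.
```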

Cooperative principle – rules that we innately know and use in order to be good conversationalists.

  • Quality – true
  • Quantity – as informative as required (neither too little nor too much)
  • Relevance – appropriate for context
  • Manner – unambiguous

When assistants get something wrong, they will have violated one or more of these principles.

Examples of Google Assistant getting these wrong…

  • Quality – “open uber” → “I can’t open apps” … but the user wanted to open an Action they know the Assistant can perform
  • Quantity – (a question about politics/law) → the response had far too much information and wasn’t at the right level of detail
  • Relevance – “what was that last song” → (a long plot synopsis of a movie called The Last Song)
  • Manner – “ok google can you tell me directions” → “I can’t find that place” (actually she’s just lying: she can tell you directions)

Cognitive load – this is discussed all the time in voice design. When we listen to people talking, we form a syntax tree that lets us understand the words. We can both listen and process at the same time; this is within our cognitive capacity.

“I shot an elephant in my pyjamas” can parse into two different syntax trees. One has you wearing the pyjamas; the other has the elephant wearing them. We know who is wearing them, but computers have a much harder time.
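
You can see both parses directly with NLTK’s classic toy grammar for exactly this sentence (a sketch; requires the nltk package):

```python
import nltk

# Toy grammar adapted from the classic NLTK book example: the PP
# "in my pyjamas" can attach to the NP (the elephant) or the VP (the shooting).
grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pyjamas'
V -> 'shot'
P -> 'in'
""")

sentence = "I shot an elephant in my pyjamas".split()
for tree in nltk.ChartParser(grammar).parse(sentence):
    tree.pretty_print()  # prints both syntax trees
```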

Example 1:

User: Hey Google, any flights to San Francisco on Thursday?
A: Yes, there are four flights. They’re at 1:15, 3:55, 5:05 and 6:35pm. Do you want to hear more about one of these?
or
A: Yes, there are four flights. Big Blue Airlines 47 leaves New York at blah blah blah…

The first response gives just enough information to act on; the second violates the Quantity maxim by front-loading every detail.
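
A sketch of how a response generator might respect the Quantity maxim: summarise first, and give full detail only on request. The data and function names are made up for illustration:

```python
def summarize_flights(flights: list[dict]) -> str:
    """First reply: just enough information to let the user choose."""
    times = ", ".join(f["departs"] for f in flights)
    return (f"Yes, there are {len(flights)} flights. They're at {times}. "
            "Do you want to hear more about one of these?")

def detail_flight(flight: dict) -> str:
    """Follow-up reply: full detail, but for one flight only."""
    return (f"{flight['airline']} {flight['number']} leaves "
            f"{flight['origin']} at {flight['departs']}.")

flights = [
    {"airline": "Big Blue Airlines", "number": 47,
     "origin": "New York", "departs": "1:15pm"},
    {"airline": "Big Blue Airlines", "number": 61,
     "origin": "New York", "departs": "3:55pm"},
]
print(summarize_flights(flights))
print(detail_flight(flights[0]))
```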

Speakers… people may be speaking in a very wide range of scenarios. They may be hands-busy or eyes-busy, they may be multitasking, and they may be in a private or a public space. Users are all instant experts – we’ve been talking all our lives! So they have high expectations and a low tolerance for error.

The other speaker is your assistant, which represents your brand when it talks to the user. It manifests brand attributes; it has a backstory and a role. If you don’t define all this, your users will!

Text-to-speech can really change the nature of the communication. Simply removing the exclamation mark from “Let’s go!” completely changes the tone. TTS makes the word “actually” sound incredibly rude and condescending, because all the tone and body language is stripped away. So where it might have said “actually”, you need to find another word, designing around this quirk of the medium.
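
Both Alexa and the Google Assistant let designers shape TTS delivery with SSML markup, which can soften some of this. A sketch using standard SSML tags (exact rendering varies by platform):

```python
# SSML (Speech Synthesis Markup Language) marks up pauses, emphasis and
# prosody for the TTS engine. These are standard SSML elements.
flat    = "<speak>Let's go.</speak>"
excited = ("<speak><prosody rate='fast' pitch='+2st'>"
           "<emphasis level='strong'>Let's go!</emphasis>"
           "</prosody></speak>")
```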

We have many tools in the toolkit now – people can speak, type, tap and show things to a device. Most organisations still have siloed teams working on these modalities.

The nature of the speech-only signal is unusual. It’s linear, always moving forward (there’s no nesting or layering the way we work on a screen), and it’s ephemeral, constantly fading. The words were here and now they’re gone – imagine a screen interface that only shows for five seconds before fading away.

There is complexity in recognition and understanding – what users say (automatic speech recognition, ASR) and what they mean (natural language processing, NLP).

“What’s the weather in Springfield?”
→ which one? there are many across America and even around the world

“Play Yesterday”
→ do you mean the movie or the song…?
→ which version of the song? the original or one of the covers?
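
One common pattern here is to ask a disambiguating follow-up instead of guessing. A minimal sketch, with a made-up catalog and function name:

```python
def resolve_media(query: str, catalog: dict[str, list[str]]) -> str:
    """Respond to 'Play <title>', asking a follow-up when the title is ambiguous."""
    matches = catalog.get(query.lower(), [])
    if not matches:
        return f"Sorry, I couldn't find {query}."
    if len(matches) == 1:
        return f"Playing {matches[0]}."
    options = ", ".join(matches[:-1]) + f", or {matches[-1]}"
    return f"I found a few: did you mean {options}?"

catalog = {
    "yesterday": [
        "the song by The Beatles",
        "the cover by Ray Charles",
        "the movie",
    ],
}
print(resolve_media("Yesterday", catalog))
# -> I found a few: did you mean the song by The Beatles,
#    the cover by Ray Charles, or the movie?
```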

Text-to-speech also has to carry the rhythm and melody of speech (prosody). It’s not just what you say; it’s how you say it.

As more devices become available, it gets more complex to work out how an experience should behave across all of them.

There is a spectrum from voice-only, to voice-forward, to intermodal, to visual-only.

There is a range of user conditions – static or in motion, public or private space, rich or poor touch interaction. Mobile phones move through all of these; the context can swing to the extremes of motion and privacy.

This is also why porting doesn’t work: take a screen app straight to voice, and it just doesn’t work.

The number one thing is to design for empathy. That’s a real challenge for a platform as big as Google, but it’s really important… it’s very hard, but we try!

Day Two