Principles of Conversation-led UI

“Conversation Design” is the new frontier in UX right now, but what exactly does it mean? What have we learnt so far? And what are our UX challenges as we design and adapt user interfaces to be led by conversational interactions?

Amy Cleary will outline some key principles in Conversation Design, drawn from her own experience researching and designing start-up to enterprise level chatbots, as well as from reviewing conventional research in the field.

(lively music) – Hello, I’m here, I made it. (chuckling) You might have heard the baby crying out the door, so that means that they’re not in the room. But if she does come in the room and makes some noises, I welcome that.

I want her to see this, ’cause she’s never made it to the end of the conversation, of this presentation, without getting upset. So I’m a Lead Experience Designer at Symplicit. I’m from Melbourne, and I specialise in conversation design.

And that’s what I’m gonna talk to you about today. So why do we need to design conversation? We’ve been conversing with each other for 1.8 million years. So you’d think that designing dialogue for a virtual assistant would be quite easy. 1.8 million years ago, there was a leap forward in stone technology. So there’s a theory that language evolved about the same time as this in order to articulate the complexity of creating these new advanced tools.

Millions of years later, what happened? The advance of computers.

They were designed to be used with a mouse and a keyboard, which means that we need to transfer our thoughts into movements, between physical object and the screen.

We follow a system, a series of steps, design patterns, to accomplish tasks.

And while this is at odds with how we normally converse, it’s become second nature to us.

Which is exactly why it’s so hard to unlearn this behaviour. Obviously we haven’t lost the ability to converse with each other.

In fact, talking to each other has become easier than ever before, with messaging apps and FaceTime, any time, anywhere in the world.

So our challenge is this, how do we talk to computers? How do we accomplish tasks that we’re so used to doing with a keyboard and mouse? Through a conversational user interface.

A CUI, a Conversational AI, virtual assistant, virtual agent, chatbot, bot, a voice assistant. Chatbots should be easier to interact with, but instead they’re failing at the simple task of understanding and answering users’ questions accurately. We need to design conversations to understand what our users want, to guide them to their intended outcomes, and to make that experience satisfying and better than the traditional way of interacting with a graphical UI.

We need to design virtual assistants that are engaging, helpful and efficient.

So today I’m gonna share with you what I’ve learned over the past few years, consulting on Enterprise and start-up chatbot projects. In 2018, Telstra launched its virtual assistant called Codi. Is anyone from Telstra here? (laughing) It sparked a nation-wide outrage.

So upon its much hyped release, it just got smashed.

Thousands of people using it.

It wasn’t ready to take on the huge responsibility– (baby crying) Stella. (chuckling) Of customer service for Australia’s biggest telco. And it’s a classic case study of chatbot failure. So why did Saoirse, I think that’s how you say the name, why did she get so angry? So in a nutshell, she, along with thousands of users, other customers, they were enjoying the 24/7 live chat feature, with a real human agent, or several, and live chat was replaced by Codi.

Apparently without any warning.

So basically forcing people to use a chatbot is not a good place to start.

Users got stuck in a loop trying to get their simple question answered, and it frequently timed out mid-chat.

So Telstra’s brand took a hit from this.

But I’m sure they learned some valuable lessons and hopefully lots of other businesses have learned from this as well.

So why are virtual systems like this so frustrating to use? Firstly, I think AI is a stupid term.

We’ve buzzworded ourselves into confusion about what this means.

The way that it’s talked about, it’s the next big thing. It’s setting up really high expectations.

And while we do have very intelligent technology; so we’ve got machine learning, predictive analytics, image recognition, all of those things; virtual assistants are relatively basic.

I also think that tech constraints and content management are a big part of the issue, especially with enterprises, but they’re only part of the picture. There’s so much innovation happening all the time, and it’s just a matter of us being aware of what’s available and having flexibility. So the root of the problem with failing chatbots is that they’re being designed in silos, away from the subject matter expertise and the knowledge of the business.

So the bots can’t answer questions accurately because they’re not speaking in a way that a customer agent would.

They’re speaking like an FAQ page, because that’s where the dialogue is coming from. So when I consulted with NAB on their first virtual assistant project last year, we observed some frustrations and unintended behaviours through usability testing.

So the aim of this virtual assistant was to be a faster way for customers to find information, rather than navigating through search results or going to the Help page, or jamming up the phones in the Contact Centre. However what we noticed was that customers were expecting so much more from the virtual assistant. It said it was an assistant, but it wasn’t assisting them.

It was speaking to them like a human, but it wasn’t responding in a normal way.

So I can imagine the interaction feels like walking into a bank, and a banker comes up to you and says, “Hello, I am a banker.

“I can answer questions about banking.” And you say, “All right, I’ve lost my wallet, “and I think someone’s used my credit card.” And then they say, “I’m sorry, I don’t understand. “Can you please try rephrasing your question?” It’s not right.

So our team actually went into a branch, and we talked to the bankers about their experiences with real people, real problems.

What we learned was that they deal with a lot of emotional issues like divorce, death, defaulted home loans.

Their customers are diverse, and a one-size fits all solution is just not going to work. The bank manager told us about an old lady who used to go in every week to withdraw money from a teller.

He wanted to migrate her to the new smart ATMs. And she was concerned about the tellers losing their jobs to the machines.

She was also intimidated by the ATM.

So he showed her how to use the ATM, how quick and easy it was, and that she could use it at any time she wanted, even when the bank was closed.

So he was so encouraging and supportive with this. And it’s what we would call, in digital terms, onboarding. When we went to the Contact Centre, we listened to a recorded conversation with a woman who had been trying to get off her ex-husband’s mortgage for about seven years. Unfortunately for her, he was unemployed, he was on disability, there was no chance of him going back to work or making his mortgage repayments, and the bank had just said, no, we can’t take you off. So this was a really, really difficult conversation to listen to; there was a lot of emotion. And the agent handled it with such empathy and such professionalism.

And the thing is with these Contact Centre people and bankers, all these types of customers support people, they go through intensive training to deal with these types of situations.

So we needed to design for conversations that can provide core value, but they also deal with challenging edge cases. So conversation design is a relatively new field of user experience.

It’s the intersection of technology, language and psychology.

A conversation designer needs to have a general understanding of tech and the tech used to build a CUI.

So that’s the content management systems, prototyping tools, any outputs such as messaging apps, customised solutions or out-of-the-box solutions, and voice assistants.

And only then will they know the limitations and the constraints of building a CUI.

Conversational copywriting is the domain of screenwriters, playwrights and novelists. In fact, screenwriters and comics are often hired to write dialogue for virtual assistants. Google famously hired writers from Pixar and The Onion, to help build in Google Assistant’s personality. Conversational copywriting is very different from regular copywriting. Think of a Choose Your Own Adventure story. So there are multiple pathways that a user can take, and there’s always unexpected twists and turns. In fact, this image comes from the first chatbot dialogue that I designed a few years ago, and I used a tool specifically designed for writing interactive fiction.

It’s called Twine; you’ll find it at twinery.org.

And these days, people use this tool to write chatbot dialogue as well.

So writing dialogue requires the ability to map out user flows, and also anticipate user interactions.

So it’s basically interaction design, and it’s information architecture.

But like on steroids.
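To make that mapping concrete, here’s a minimal sketch, in Python, of a Choose Your Own Adventure-style dialogue graph. Every node name and line of wording is invented for illustration, not taken from any real bot or tool:

```python
# Minimal sketch of a branching dialogue map: each node has a prompt
# and a dict of user choices leading to the next node.
# All names and wording here are hypothetical.
DIALOGUE = {
    "start": {
        "prompt": "Hi! I can help with cards or loans. Which do you need?",
        "choices": {"cards": "cards", "loans": "loans"},
    },
    "cards": {
        "prompt": "Is your card lost, or are you reporting fraud?",
        "choices": {"lost": "handover", "fraud": "handover"},
    },
    "loans": {
        "prompt": "I can show current rates. Want to see them?",
        "choices": {"yes": "end", "no": "end"},
    },
    "handover": {"prompt": "I'll connect you to a human agent now.", "choices": {}},
    "end": {"prompt": "Thanks for chatting!", "choices": {}},
}

def walk(path):
    """Follow a list of user choices from the start node; return the prompts shown."""
    node = "start"
    prompts = [DIALOGUE[node]["prompt"]]
    for choice in path:
        node = DIALOGUE[node]["choices"][choice]
        prompts.append(DIALOGUE[node]["prompt"])
    return prompts
```

Laying flows out as data like this is also what makes them testable: you can walk every path and check that each one ends at an outcome, not a dead end.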

It’s also content strategy: organising content, defining the appropriate persona, and the right tone of voice for the audience and for the brand.

And it’s fundamental that the users are included in this. Because without understanding their needs, their pain points, their frustrations, their behaviours, and then observing them using the CUI through testing, there’s little hope for success.

So the role that brings us together, this magic trifecta, is that of a conversation designer.

This discipline can be practised by one dedicated person in the team, or it can be shared; a shared role amongst the team.

I’ve noticed certain skills and traits that a conversation designer must possess in order to overcome the many challenges involved with these emerging tech products in particular. So conversation designers are HCD people, i.e. they subscribe to human-centred design methodologies. They are AI advocates, also known as chatbot nerds. They’re interaction designers, they’re empathetic, they’re highly organised, and they are very collaborative people.

So now that we’ve covered what conversation design is, I’m going to introduce what I think are the three most important principles of designing a great conversational experience. So first off, the right personality is everything. So this is called pareidolia, this phenomenon. So it’s when we see faces in objects or we assign a personality to something inanimate or abstract.

It tends to include a lot of capsicums if you do a Google image search, which I highly recommend, which is why I spelt that word out for you there. (laughing) Houses too (who loves to go for a walk and look at faces in houses?), or, when I was little, I used to see faces in cars. I used to think cars had personalities, and then that movie came out.

They stole my idea.

And this, (laughing) this is what my friend calls a drunk octopus who wants to fight you.

(laughing) I don’t know how many of you have seen this in like, actually the bathrooms of like bars as well. I always used to see this one in bars, I don’t know why, nowhere else.

So people anthropomorphise; it’s what we do. People who engage with chatbots automatically assign a personality to them.

So rather than have 10,000 people assign 10,000 different personas, take charge of what you want that persona to be. Not designing a persona increases the chances that your bot’s gonna be perceived as inconsistent. And as a result, customer trust will drop.

A solid persona that is appropriate for your brand helps you take control of the conversation, builds trust and increases engagement. So the weather chatbot Poncho was an early innovator. However, Poncho stands out as painfully annoying for its overbearing personality.

(laughing) Nothing is more grating than a bot that is too strong on humour, the jokes, slang and emojis, when it’s just not appropriate.

Subtlety is usually key unless this is really really part of your brand and users expect it. But in that case, and in all cases, keep it consistent.

And make sure that the bot can do its job first. Scheduling assistant Amy has fooled people into thinking that she’s a real person, to the point where people have asked her out on dates. (laughing) And she and her brother, Andrew, have the right persona for an office assistant bot. The tone of voice is professional, and it’s extremely helpful.

So take advantage of the user’s inclination to engage with something human-like, but it’s a good idea to let them know that they’re speaking to a bot.

Because people don’t like to be tricked.

She probably left lots of people hanging when she didn’t respond to their date invitation. So it’s also important that you understand the target audience in order to design the most appropriate persona for your bot.

JesseBot is a career coach.

So her personality is encouraging, positive, and goal-oriented.

So yes, I have been on maternity leave lately, so I’ve been watching a lot of RuPaul’s Drag Race. Might have watched all of them.

(laughing) So this bot really speaks to me, okay.

Who here has done a similar thing? One, other person. (chuckling) So its personality is perfect for the brand. If you’re not part of the RuPaul drag world, it probably doesn’t make any sense to you, but everything about this dialogue works for its audience.

It’s on brand, and what it wants you to do makes sense. For a virtual assistant representing a financial institution, or even a government institution, I’d recommend a persona that is neutral and genderless. But it needs to have a natural conversation style; it needs to basically mirror the bank’s customer service agents.

In this domain, it’s especially important to invest time into crafting and testing the persona of the bot, as trust builds the relationship with the bank, which is critical, especially in the wake of the Banking Inquiry. ANZ New Zealand might have taken this concept a little bit too literally.

Soul Machines is a start-up that they partnered with, and they created this CGI banker called Jamie. So she’s referred to as a digital human.

And she basically just works like a normal chatbot, like the responses are all scripted, she’s just animated when she responds.

That’s what she looks like close up.

Yeah, so I feel that this has crossed into what is known as uncanny valley.

Anyone aware of what that is? So for those who are not so sure, it’s that point where something almost human-like tips over into being really creepy; like that Sophia bot that’s got citizenship in Saudi Arabia.

I don’t think she has a body, but she has a head, and she doesn’t have hair, and she just sort of freaks people out a little bit. That’s uncanny valley.

Ideally, a banking bot like this, it should be perceived as warm and friendly, and competent and helpful.

I’m curious to know how customers feel about talking to Jamie.

Because when I have interviewed people and asked them how they feel about interacting with a chatbot, many of them say they prefer it because they don’t wanna talk to a person face-to-face or on the phone.

Which really says something about their emotional state. So when talking to customers, chatbots need to be helpful, natural and persuasive.

The bot should understand the objective, the motivation and the emotional state of the user.

And when empathy is built into the conversation, it shows that the business really understands the problem that a customer is having, and they’ll try to solve it.

An empathic and supportive conversation keeps customers motivated, it prepares them for obstacles, and it manages their expectations.

Empathy also helps you prepare for negative emotions. (laughing) So customers can be angry, frustrated, sad, or fearful; which, if not managed properly, most likely means that they’ll have a bad experience and refuse to engage with the bot.

Ultimately, they’ll also be unsatisfied with the business. So what does empathy look like in a scripted conversation? This is the Lemonade app, an insurance app that uses a chatbot called Maya.

So if the bot’s responses are framed well, it gets users motivated.

Essentially, this business is saying “Your things got stolen, I hear you.

“Let’s solve this problem.” This business is saying, “You’re going on a holiday? “Great, I’m excited for you.

“This is what we need to do.” And this is Telstra’s Codi, by the way.

So, they’ve improved.
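That acknowledge-then-act framing can be sketched in a few lines, assuming a hypothetical upstream sentiment detector; all of the wording and the sentiment labels here are invented for illustration:

```python
# Sketch: prepend an empathetic acknowledgement to the practical next
# step, keyed on a (hypothetical) detected sentiment label.
# The labels and wording are illustrative assumptions.
ACKNOWLEDGEMENTS = {
    "distressed": "I'm sorry that happened; let's sort it out together.",
    "excited": "That sounds great!",
    "neutral": "Sure thing.",
}

def empathetic_reply(sentiment, next_step):
    """Pair an acknowledgement with the concrete next step for the user."""
    ack = ACKNOWLEDGEMENTS.get(sentiment, ACKNOWLEDGEMENTS["neutral"])
    return f"{ack} {next_step}"
```

The design point is the ordering: first show you heard the person, then move to solving the problem, exactly the “Your things got stolen, I hear you. Let’s solve this” pattern.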

In difficult conversations, Woebot, a therapy companion bot, doesn’t cast judgement, but it offers proactive tips to help the user without pressuring them.

In lighter conversations, it’s positive and encouraging. It’s also really important to have a variety of error responses.

So that when your bot’s unable to understand the user, it doesn’t sound robotic or generic.
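One simple way to sketch that is a set of escalating fallback responses, so the user never hears the same generic line twice before being offered a way out. The wording and the three-step escalation are illustrative assumptions, not any particular platform’s behaviour:

```python
import itertools

# Sketch: varied, escalating fallback responses. Each consecutive miss
# moves down the list, ending on an offer to hand over to a person.
# All wording here is invented for illustration.
FALLBACKS = [
    "Sorry, I didn't catch that. Could you try fewer words?",
    "Hmm, still not getting it. Try something like 'reset my password'.",
    "I'm having trouble here. Would you like me to connect you to a person?",
]

def make_fallback_picker():
    """Return a function that yields the next fallback on each call."""
    counter = itertools.count()
    def next_fallback():
        # Escalate through the list, then stay on the handover offer.
        i = min(next(counter), len(FALLBACKS) - 1)
        return FALLBACKS[i]
    return next_fallback
```

A successful user turn would normally reset the counter; that bookkeeping is left out of the sketch.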

The bot should do everything it can to guide the user to the right outcome.

So by asking the right questions, and also reframing the customer’s problem or information back to them, if possible.

And if the bot can’t solve the issue, then ideally it should hand over to a customer support agent as quickly and seamlessly as possible.

So remember the woman who wanted to get off the mortgage? Her ex-husband’s mortgage? The bot would not have been able to solve that problem, like, no way.

But this would be a perfect example where it could gracefully hand over to a human. So that is what we call augmented intelligence, just one aspect of it.

So that’s when virtual assistants can actually work alongside customer service team to help make their jobs a bit more efficient. So when any kind of action occurs, such as a bot handing over, be clear about what’s happening and what the next steps are. This makes the user feel a lot more comfortable. Finally, to make conversations feel more natural and friendly, apply the same communication framework that frontline staff use.

This could be responding to small talk or greetings, like “Thanks” or “Cheers, mate.” My favourite thing is when the virtual assistant closes the conversation with a follow-up message. It gives the customer a sense of closure and that adds a positive interaction.

A bot’s goal is to serve the user’s needs, and help them get things done.

So one way we can enter a successful interaction is by managing expectations.

The bot should be very clear about its ability and range of knowledge, state its exact purpose and limitations, and what the benefit to the user will be by using it. So ideally, we want this to happen in a welcome onboarding message.

This is a very important first message: it establishes what users can expect to get out of the experience, and it has to make a good impression from the get-go.

It’s hard to get right, but in a good welcome message, the bot should introduce itself, it should state its functionality, and how users can interact with it or what kind of questions to ask it.

Then it should invite the user to take an action, usually through a hook like, “I can do this task faster.” If the chat goes off track, then have the bot redirect users back to the right path, or provide an alternative channel for assistance. NAB’s virtual assistant welcome message has gone through a number of iterations, and this is the current one.
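The anatomy of a welcome message described above (introduce the bot, disclose it’s a bot, state what it can do, end on a hook) can be sketched like this; the bot name, capabilities and hook are invented for illustration:

```python
# Sketch: composing a welcome message from its parts. Being upfront
# that this is a bot is deliberate: people don't like to be tricked.
# All wording here is hypothetical.
def welcome(name, capabilities, hook):
    """Build a first message: introduction, capabilities, and a hook."""
    caps = ", ".join(capabilities)
    return (f"Hi, I'm {name}, a virtual assistant (not a human!). "
            f"I can help you with: {caps}. {hook}")

msg = welcome(
    "Demo",
    ["billing questions", "plan changes"],
    "Ask me a question; it's faster than searching the Help pages.",
)
```

Keeping the parts separate like this also makes the message easy to iterate on in testing, one component at a time.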

It’s constantly being tested with customers. Voter Bot has good intentions, helping US citizens vote, register to vote, and check their status.

Unfortunately, it’s unable to answer questions relevant to its subject matter.

It relies too much on deflection tactics, so it sounds incompetent.

If its knowledge is so limited, then it needs to redirect the user back to its primary function rather than pretending that it’s gonna find the solution for them. It also invites the user to have another go at asking a question.

So it’s literally setting itself up for failure. Many virtual assistants will require the user to select options or digest large pieces of content. So we can help minimise this cognitive load, and also increase engagement, by taking advantage of visual UI elements such as quick link buttons, breaking the messages up into smaller chunks, and utilising rich content such as videos and image carousel sliders.

Voice assistants will often send links to your device, for a recipe or a Wikipedia article, for example. So the Lemonade insurance chatbot uses visual design patterns really well with its CUI. It uses a progress bar to show that stuff’s happening behind the scenes, and this reassures the user.

A picture says a thousand words, especially in a chat interface where you have really limited real estate.

So this 5-star system shows that user stats are used to calculate their premiums. Intelligent CUIs are moving more towards the ability to leverage data to predict what customers may want to do.

So if this is managed well, it can be a really satisfying experience for your customers. Cleo, an open banking chatbot, is proactive about helping users save money. There’s also the ability to filter a search through the chat, much like a store assistant might help a customer get the right product by asking the right questions. One thing you should always ask yourself, though, is: is this going to be faster than the current way that users self-assist? I.e. will filling in a form be faster than using a chatbot? CUIs are the way of the future.
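That store-assistant style of questioning is usually implemented as slot filling: the bot asks only for the details it’s still missing before it can filter results. A minimal sketch, with invented slot names and questions:

```python
# Sketch of slot filling: ask for the first unfilled detail, in order,
# and stop asking once every slot needed for the search is known.
# Slot names and questions are hypothetical.
SLOT_QUESTIONS = {
    "category": "What are you shopping for?",
    "budget": "What's your budget?",
    "colour": "Any colour preference?",
}

def next_question(filled_slots):
    """Return the question for the first unfilled slot, or None when done."""
    for slot, question in SLOT_QUESTIONS.items():
        if slot not in filled_slots:
            return question
    return None
```

When `next_question` returns `None`, the bot has enough to run the filtered search, which is exactly the moment to compare it against the form: if the form is fewer steps, the form wins.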

They’re fast to use, omnichannel experiences, and they’re available 24/7.

Automating low value interactions to CUI can reduce costs to customer service, and businesses will have more time and resources to focus on meaningful conversations with their customers. Customers gain back the most precious of resources, their time.

So to build a great CUI, a culture of collaboration is essential.

So take advantage of the subject matter expertise of the business, and the frontline staff of its customer service. We’re in the midst of a technological breakthrough. As designers, we are responsible for the dialogue that’s gonna help motivate users, build a connection with the business, and guide them to their desired outcomes.

So three things to remember.

One, personality is everything.

Two, empathy and support motivate the user. And three, a helpful bot increases efficiency. So by following these principles, you set yourself up for success, and you will have created a conversational UI that is engaging, helpful and efficient.

Thanks.

(audience clapping) (lively music)
