Designing Conversations

As designers, we all know that creating a product or experience that people know how (and actually want!) to engage with is everything. With GUIs, we can design visual cues that assist with usability and solve for understandability. But how do we design for conversational UIs, when the content is the experience and words are the interface? In this session, Lauren will talk about how to design contextually relevant conversations for bots that evoke emotion and lead to relationships rooted in trust, empathy, and understanding.

How do we build trust with chat-based UI?

Lauren works on the ‘conversation design’ team at Capital One, focused on using words to deliver the right message, to the right person, at the right time – to create a natural conversation. A challenge in a big financial institution!

Content IS design.

Lauren’s team uses three pillars:

  1. Is this natural language? Does the language make sense? Is it accessible?
  2. Which use case are you designing for? Where did the person come from, where are they going, what problem are they trying to solve?
  3. Is this content relevant to its context? Are you aware of the implications of the action at hand – eg. if you’re paying a bill, you need to know if there is an impact like a late fee.

The team kicks off new projects using Word docs (“yes… Word docs!”). It helps track changes and enables collaboration (mostly by talking to each other). Then the content of those Word docs is user tested – early, light testing. Only then do they make a real UI.

The beauty of the three pillars is that they are platform-agnostic: desktop, mobile or conversational interfaces (anything that mimics chatting with a human).

Chat bots need great UX combined with data and UI. None of these pieces alone creates a great experience. Get the balance wrong and things will feel off.

So how do you get started designing a conversational UI?

When they built the chat bot “Eno”, they found it was important to go to the user… meaning SMS! Yes, it’s not going away.

Have empathy, think about what people want. Don’t boil the ocean – start with the pieces that drive meaningful value. What concerns do people have? Constantly take on user feedback and learn as you go.

Screenshot of Eno text messages

At Capital One they found the greatest value was to do a few things well, not a lot of things poorly. Fail gracefully when your bot can’t do something – because it’s going to happen a lot, particularly when it’s new! Understand how your customer speaks. Yes, it needs to deal with emoji. Yes, it should respond if you say something nice.
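The "few things well, fail gracefully" approach above can be sketched as a tiny intent dispatcher. This is purely illustrative (not Capital One's actual implementation); the intents, responses, dollar amounts and the placeholder support number are all made up:

```python
# Illustrative sketch of "do a few things well, fail gracefully".
# All responses and figures below are placeholders, not real Eno output.

def respond(message: str) -> str:
    text = message.strip().lower()

    # Handle the handful of intents the bot supports well.
    if "balance" in text:
        return "Your current balance is $1,234.56."
    if "due" in text or "payment" in text:
        return "Your next payment of $50.00 is due on June 1."

    # Understand how customers actually speak: emoji and
    # pleasantries are real input, and niceness deserves a reply.
    if text in {"thanks", "thank you"} or "🙏" in message:
        return "You're welcome! 😊"
    if "👍" in message:
        return "Glad I could help!"

    # Fail gracefully: admit the limit and point somewhere useful.
    return ("Sorry, I can't help with that yet, but a human can: "
            "call support at 1-800-XXX-XXXX.")  # placeholder number
```

The key design choice is that the fallback branch never bluffs: when the bot doesn't understand, it says so and hands off, which (per the talk) happens a lot when a bot is new.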

With AI you can make conversations that feel predictive instead of reactive. “You might need to know this right now” instead of “tell me x”.

Design with a character people can connect with.

The sustained appeal of SMS/texting is the connection with the person on the other end. It’s fun and deliberate. You know who’s on the other end. This can make SMS powerful for things like fitness coaches or bots to help people quit smoking. Users of the Lark app ‘trust and love’ their coaches!

So why do people respond to characters? eg. Wall-e? When we feel we can relate to something, we feel understood as well. They ‘get us’. In life we turn to people who know us the best, who are less likely to judge.

Creating a character for your bot gives you an opportunity to make it more relatable. It’s not a new thing either – it’s anthropomorphism (ascribing human traits to non-human entities).

People get attached to anthropomorphised things. Many soldiers become upset (angry or sad) when their bomb disposal robots are damaged or ‘killed’ – not to the point of impacting their ability to do their duty, but they are attached to their robots.

How might we achieve a true connection between our AI and the customer? We can use the natural human tendency to anthropomorphise bots. This helps build trust… but also makes it critical that your bot is predictable and reassuring. People are turned off very quickly when it gets it wrong.

Decide which character traits to explicitly show in interactions, and leave the rest to the user’s imagination. They can fill in the blanks in ways they find comfortable.

It all starts with a back story. Characters have back stories to set the scene, to explain their personality, motivations and responses. What are your bot’s emotional boundaries? What will it stand for and against?

With Eno, they intentionally chose a gender-neutral identity (it’s a bot!) and it responds to questions like ‘where do you live’ with honest answers. It does not pretend to be a human, it’s happy to say it’s a bot that lives in the cloud. Its humanity doesn’t come from pretending to be male or female, it comes from its interactions.

Sometimes Eno can be a bit too eager, too keen to be liked. They need to keep improving it.

Eno doesn’t do everything, when it can’t do something it simply tells people who can or where to go to get started.

Eno has boundaries – when to speak or listen, when to ask or tell, when to open or close, when to act or wait. People tend to try to mess with a bot, to see what works. People can be impolite, aggressive, even creepy… it’s important to set a better tone. Kids used to barking “Alexa, tell me the weather” need to be reminded not to talk to their parents the same way!

Ethical design matters.

Why are we designing for connection? To instill trust? Do we deserve that trust? Do people feel safe when interacting with your bot?

Use guiding principles:

  1. humanity – demonstrate empathy with appropriate emotion. Use proper grammar even if you accept emoji back, because competency is very important for trust in a bank.
  2. clarity – keep people focused on the important information, put the answer first. Be concise first, playful second (when appropriate).
  3. transparency – be clear about the benefits and limits of interacting with the AI, be up front about where the data goes and how it’s used.

It helps to have a team that is diverse, so they bring different perspectives to the debates required for designing an inclusive AI. Build the strongest, most diverse brains trust you can!

Do watch out for indications that people like your bot – they don’t need to say ‘thanks’ to a bot, yet they do. People also often start an interaction by asking how the bot is feeling. It should respond naturally.



Q: How does Eno respond to people who are rude or offensive?
A: They’re constantly exploring ways to deal with it. It currently responds very strongly against sexual harassment – it will shut down to a purely informational transaction, emphasising that it’s not ok.
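The "shut down to pure informational transaction" behaviour could be modelled as per-session state that strips the bot's personality after an abusive message. This is a toy sketch, not Eno's actual logic; the word list and responses are placeholders:

```python
# Hypothetical model of a bot that sets boundaries: after an abusive
# message it drops to a purely informational mode for the session.

class Bot:
    ABUSIVE = {"stupid", "idiot"}  # placeholder list, not Eno's real one

    def __init__(self):
        self.informational_only = False

    def respond(self, message: str) -> str:
        lowered = message.lower()
        if any(word in lowered for word in self.ABUSIVE):
            # Set the tone: make clear the behaviour isn't ok,
            # then restrict the session to plain facts.
            self.informational_only = True
            return "That's not ok. I'm here to help with your account."
        if self.informational_only:
            # No warmth, no emoji, just the information.
            return "Your balance is $1,234.56."
        return "Happy to help! Your balance is $1,234.56. 😊"
```

The point of keeping the state per session is that the bot keeps serving factual requests (it doesn't punish the customer) while withdrawing the friendly persona that the abuse was directed at.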

Q: Is there a limit to how big or long an interaction can get with a bot? eg. completing a full form?
A: There are a lot of limits with SMS, both social and technical. So there is definitely a limit, they try to keep things short; and give the customer short answers like yes/no.

Q: How do you gain empathy, given most people start with a negative attitude to a bank in the first place?
A: This came out of the research that led to creating a chat bot. Creating the character was in part a way to change that overall negativity towards banks.