(bright whimsical music) - Alright.

Hi.

I'm Lauren, and thank you so much to this team for having me here in Sydney.

This has been amazing so far.

I just got here yesterday after, as Chris mentioned, a very long plane ride. I was coming from Chicago.

But I couldn't be happier to be here.

And so I'm here today to talk to you all about designing conversations.

So I'm gonna kick this off with a grainy screenshot of an adorable YouTube video.

If you google "babies talking" or "twins having a conversation", it'll be a treat, I promise.

So, conversation: what is it? Ultimately, it's communication between two beings, and it's core to who we are as humans.

It's fundamental to how we connect to each other, and honestly, how we even connect to the non-humans in our lives.

I'm sure it's not unique to me to have friends or family who have full-on conversations with their pets on a pretty regular basis.

The etymology of the word conversation in and of itself is something that I think is fascinating.

Hopefully, I'm not alone in that.

It means living together, having dealings with others, and also a manner of conducting oneself in the world. So even the word itself, at its core, is derived from this idea of connection and presentation. It's all about how we interact with each other and the world around us.

That's what a conversation is.

We build trust through the conversations that we have. It's how we establish goals, share important information, and move toward a sense of feeling understood. This matters a lot.

Just for a general sense of happiness as humans, we feel happy when we're connected to other people and to the world around us.

But it becomes critical in key interactions that we have with various professionals whom we trust to care for us in some way. With doctors, for example, you're trusting them with your health, and in some circumstances, potentially even your life. With journalists, you're trusting them to tell you stories that give you the information you need to shape your perspective of the world around you. And actually, I would argue very strongly that UX designers are now key here as well. People rely on us to design a litany of services and experiences that serve their best interests, that make their lives easier and better, and make things more convenient for them.

But unlike the traditional doctor patient scenario, we're not always able to interact one-on-one with the people we design for.

So how do we build this kind of trust through an interface? This idea of building trust, especially through digital interactions, is something I've been grappling with in a variety of ways my entire career.

I actually started out as a journalist creating stories for a celebrated newspaper's homepage, or at least celebrated in my hometown of Chicago. It's the Chicago Tribune.

In this role, I was responsible for choosing which stories to feature on chicagotribune.com. So I'd work with reporters to determine: is this centrepiece material? Or does this actually belong in that breaking news rail you see there? I would edit the content, write compelling, interesting headlines, or at least that was my goal, and choose supplementary videos and photos to add colour to the story.

I loved this job, but over time, it started to feel fairly one-dimensional. We were putting all of this information out there, and we were using some metrics software like heat maps and other services like that to see what people were clicking on and how they were engaging in real time.

You didn't want the red, I can tell you that much. But we didn't know if the stories that we were choosing to feature, these things that people were clicking on in the first place, were actually capturing the most meaningful things that people needed to know to feel connected to their communities and to the larger world around them. We didn't know, because we weren't talking to our readers. We weren't engaging in any kind of dialogue around the stories that we were choosing to put on the screen.

And this really got me thinking.

We all hear and talk about how the world's going digital. It's why we're all here today.

My role at the newspaper was actually a prime example of this.

I was working on a website that was replacing what had been primarily a paper product. So when we're designing these digital experiences, and we're on the other side of the screen designing for our users, or audience, or readers, whoever they may be, how do we design for trust? I decided to go to graduate school and pursue a master's degree in human-computer interaction, where I worked on a multitude of different projects and freelance work.

I realised very quickly that while it is possible to build trust between a person and the digital experience they're interacting with, it isn't easy. Just like in real life, trust is hard to earn, easy to lose, and something never to take for granted. It's something you have to truly deserve in order to earn and keep.

And this actually got me thinking again about conversations. So much of how we build trust with other people is through this ability to have an open dialogue, to answer questions, to interpret emotional reactions and decide whether it feels genuine.

Was there a way to make these digital experiences that we're all spending more and more time interacting with feel like conversations? Would that maybe help? So when I was nearing graduation, I knew that I needed to find a job where I could really dig into this idea of designing for authentic communication.

I'm a writer at heart, and my natural inclination, even as a UX designer, was to focus on the words in any experience I was a part of designing. Eventually, I came across a posting for a job called a UX content strategist role. This job posting literally called out using influence to design experiences that build trust and help people.

And whoever wrote this job description saw words and language as one of the most important tools at our disposal for doing this.

My mind was blown.

I was so excited and wanted more than anything to land this job.

And luckily, I did.

In May 2015, I joined a woman named Steph Hay's team as a UX content strategist.

You see some of us here.

Jump shots are actually a specialty of our team. And this team has now evolved to become the conversation design team, and has grown to include many more people than you see up here.

This team of communication-focused designers has convinced a gigantic financial institution that content and communication are a core part of any design process.

Every day we use the power of words to deliver the right message to the right person at the right time in order to create meaningful tailored experiences that feel like a natural back and forth, a natural conversation.

It's a fundamental part of our design process. You consider all of the elements on a page and how they complement one another and make sure that you're providing the necessary context. And that means making sure that the words that you choose to use are the right ones.

Content design, then, is using language to create these meaningful personal interactions with the people that are using our products across all touch points of an experience.

Content designers use words to drive emotion, because we know that our choices can make or break the entire user experience if we're not careful. And I don't think that that's a dramatic statement to make. Essentially, we're experience designers.

It's just that instead of visual elements, our main design tool is words, our words.

We use three pillars to drive our design and enable us and our teams to focus on what we need to say, and how we need to say it, in the exact right moment to build connection. The first pillar is: is this natural language? What this means is, are the words we choose accessible? Do they make sense to the person on the other end of the screen, and not just to us? As I'm sure you can imagine, at a bank, that means avoiding financial jargon at all costs. We spend a lot of time talking to the people who use our products, and we listen very carefully to the words they use to describe what it is that they're trying to do.

And then we're brave enough to mirror that language back to them when it comes to the actual experience that we design.

The next pillar is: which use case are you designing for? What this means is that at every step along the way, at every screen or interaction point we're designing, we ask ourselves: where has this person come from? What are they trying to do here? And what are they trying to do next? Being able to capture all of that, and make sure the information someone needs to feel confident in that moment is there, is critical. And then finally, the last pillar is: is this content contextually relevant? In a lot of ways, this overlaps with the idea of being use case specific.

But it actually has broader implications than that. If somebody is purchasing something on an e-commerce site, for example, or, something a little more familiar to me, if you're paying your bill through our app or through some sort of digital experience, are you aware of all of the implications that action might have? So for example, if the bill is late and you go through and make your payment, are you aware of exactly what's going to happen to you, and why, and when? It's our responsibility as designers to give that context, so that people feel like they understand what's happening to them and feel in control. We use these pillars to communicate around how we should be strategizing about experience design. Thinking about communication first should drive our entire end-to-end design process. When I work with design, product, and tech partners at Capital One, we actually start our entire workflow, when we're kicking off a new feature or product, whatever it might be, in a Word document.

Yes, a Word document.

We do this deliberately to orient this entire cross-functional team around the conversation that we wanna be having with the person on the other side of the screen, from end to end. And then together, we actually just talk about how we want that back and forth to go. We have a conversation about the conversation that we're designing. Talking through our designs this way helps us get to the core of what matters most to someone in each moment, and it forces us to make sure we're communicating in a way that will make them feel confident. By focusing on the words we're using right out of the gate like this, we can make sure we're anticipating and answering the right questions that someone might have, and giving them that information when they'll need it most.

And we actually take these Word docs and test them with users all the time before anyone even starts prototyping anything higher fidelity.

And the benefit of doing this is to make sure that the direction we're heading in is actually addressing the real questions someone might have, the real customer needs. This approach enables us to solve for understandability at the outset, which is just as important, if not more so, than usability.

They're hand in hand, really.

Then we go on to create a beautiful interface. And hopefully, because of this legwork up front, in fewer iterations we now feel comfortable and confident that it won't just be well-designed, but will actually make sense for the people using it. Ultimately, anyone who's involved in creating an experience should be an expert on customers' needs.

I don't think that's anything anyone in here would disagree with.

And if we wanna get to know our customers and connect with them, and solve for these needs in an understandable way, we all have to be masters of the conversation that we wanna be having with them.

When done right, these thoughtfully designed conversations help you address key business metrics, like lowering call volume costs, increasing conversions and customer satisfaction, and most importantly, building trust with the people who are interacting with your product. And the beauty of applying these principles, or pillars, and this approach to your own work is that it's platform agnostic.

You can put them to use on Monday, no matter if you work on a website, an app, or another platform entirely like a conversational user interface.

Many of you probably know about conversational user interfaces, or conversational interfaces, but some of you might be thinking, what now? My favourite definition of this actually comes from a 2016 Fast Company article called Conversational Interfaces, Explained. And the author of that article, a man named John Brownlee, defines a CUI as any UI that mimics chatting with a real human.

And the idea here is that instead of communicating with the computer on its own inhuman terms, so clicking on an icon, or typing in a syntax that isn't natural language but is what you need to input for the computer to understand and respond, you're interacting with it on your own terms, simply by telling it what to do. So as we talked about a little earlier this morning, there are two basic types of conversational interfaces out there right now. There are voice assistants, which you talk to, so things like Alexa, Google Home, Cortana, et cetera. And then there are chatbots, which you text with or type to, and these are everywhere. They're going to be a core part of how we communicate with companies and ultimately with each other.

As designers of these interactions, it becomes a really fascinating design challenge. What happens when the interface just goes away? The content is now the experience, and the words are your interface.

The words you choose are all that you have to build a meaningful connection and develop trust. So the pressure's really on to choose those words wisely. Creating content that adheres to the three pillars I talked through, natural language, use case specific, and contextually relevant, becomes paramount to designing successful interactions. As someone who's always believed in the power that words have in our lives, I became really interested in designing for conversational UIs.

One of the major promises of designing for this kind of technology is the AI that it runs on, namely the system's ability to learn from every interaction it has with the customer. But data on its own does nothing by way of connection. So much is based on how and when that data is presented to the person.

It's stellar design and UX, combined with data and algorithms, that actually makes a product or an experience feel smart. Designing for conversational UIs forces us to place our focus on essential parts of the human experience. Again, things like communication, connection, understanding. There can be a fine line between an interaction where it feels like a stranger knows a little too much about you, and an interaction where you're actually connecting with something new in a meaningful way, because what you're talking to demonstrates that it knows you and has relevant, but appropriately limited, context about your life. Getting this right requires us to be constantly thoughtful about how we're communicating, and it's a lot to constantly consider.

So how do we get started, as designers of conversational UIs, in designing these conversations? I joined Capital One's conversational AI team last January, and we launched the Eno chatbot, or a bot you can bank on, get it, at South by Southwest this past March. When it came to designing Eno, we knew the most important things to focus on were designing for connection and understanding, and building trust, because this is something new, it's an AI, so building trust between our customers and the product they're engaging with. Eno is an SMS bot, John's right.

I don't think SMS is going away either, and it's because texting is ubiquitous.

97% of United States consumers with smartphones use texting to communicate pretty constantly. And as a company, we decided that we wanted to be where our customers are, whatever and wherever they wanna talk to us about their money.

We think that there's real meaning in designing experiences that make it easier for people to do what it is that they wanna do like get a quick update on what they have available in their checking account when they're out to eat, for example, in a way that feels comfortable and natural, and meets them where they already are, just on their phone. Everything is available within the channel of the conversations that they're already having. You can text your mom about your day, and then be like, oh, wait, when's my bill due? Quickly text, get that information, and go right back to texting with your mom, all without having to open a new app or navigate away from your text.

Being a designer in this space has already been an incredible learning experience. Some of my major takeaways so far, and I'll include some details on how they've been applied to the experience I currently work on, are: first, don't boil the ocean.

And I'll get into what I mean with that in just a few minutes.

Second, if it makes sense for the experience that you're trying to create, design a character for people to connect with. And finally, always remember and be cognizant of the fact that now more than ever ethical design and being really thoughtful about how we're approaching creating these experiences matters.

So don't boil the ocean.

And this is actually a saying from one of my favourite colleagues.

She says this all the time, but it's because it's so important.

And what it means is don't try to do everything just because the technology is becoming more advanced and you might be able to. Be strategic about what it is that you do.

When it comes to creating a bot, then, make sure you're focusing on solving a few acute customer problems, and you're not just trying to do everything that you might be able to build.

When it came to Eno, we focused on a few key use cases that we learned, through extensive customer research, would actually drive real, meaningful value. So along that vein, if a chatbot is something that you or your company is interested in creating, conduct as much user research as you can to make sure that you're truly exploring all of the implications a conversational interface might have in your space.

Why will it matter to people? What concerns or questions might people have about these kinds of interactions? It's really important to be constantly questioning and learning.

With Eno, we conducted a tonne of user research, and we paid really close attention to the feedback that we got from customers through our pilot. And the lessons that we learned were many and immediately applicable to our design.

We learned very early on this value of doing just a few things well versus trying to cover a bunch of things but only hitting the mark every once in a while or in a half-hearted way.

And we learned that it's really critical to fail gracefully when Eno can't understand something, which, considering where we are at this stage, is often. This holds true even as we continue to add things and enhance the experience.
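
To make that concrete, here's a minimal sketch of what failing gracefully can look like in a bot's turn handler. Everything in it, the intent names, the confidence threshold, the naive keyword matching, is an illustrative assumption of mine, not Eno's actual implementation.

```python
# A sketch of failing gracefully, assuming a hypothetical intent classifier
# that returns a label and a confidence score.

CONFIDENCE_THRESHOLD = 0.6  # below this, admit uncertainty rather than guess

def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for a real NLU model: naive keyword matching."""
    keywords = {"balance": "check_balance", "due": "payment_due_date"}
    for word, intent in keywords.items():
        if word in message.lower():
            return intent, 0.9
    return "unknown", 0.2

def respond(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # Fail gracefully: be honest about the limitation, stay in character,
        # and offer a concrete way forward instead of a dead end.
        return ("Sorry, I'm still learning and didn't quite get that. "
                "I can check your balance or find your payment due date, "
                "or I can point you to a person who can help.")
    return f"(fulfilling intent: {intent})"  # downstream handling goes here

print(respond("What's my balance?"))  # handled
print(respond("Tell me a story"))     # graceful fallback
```

The point of the sketch is the shape of the fallback: honest about the limitation, and always offering a way forward.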

Our design team is present at every level of development. We start with the dataset, and then we move on from there. We're the constant voices in the room asking, does this meet our users' needs, and how? And why are we even doing this in the first place? Finally, AI enables us to embrace the natural language that customers use to text, which includes things like abbreviations and emojis. And this ability to understand helps us build experiences that get smarter with each interaction.
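
As a small illustration, embracing texting language might start with something as simple as expanding common shorthand and reading emojis as sentiment cues before a message ever reaches intent classification. The mappings below are my own illustrative assumptions, not a real production lexicon.

```python
# Normalise texting shorthand and collect emoji cues for the NLU layer.
ABBREVIATIONS = {"thx": "thanks", "plz": "please", "bal": "balance", "acct": "account"}
EMOJI_CUES = {"👍": "positive", "🎉": "positive", "😡": "negative"}

def normalize(message: str) -> tuple[str, list[str]]:
    """Expand abbreviations word by word; scan characters for emoji cues."""
    words = [ABBREVIATIONS.get(word.lower(), word) for word in message.split()]
    cues = [EMOJI_CUES[ch] for ch in message if ch in EMOJI_CUES]
    return " ".join(words), cues

text, cues = normalize("plz check my bal 👍")
print(text)  # "please check my balance 👍"
print(cues)  # ['positive']
```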

Part of this intelligence is programming algorithms that help us deliver smarter experiences, ones that get to the heart of what somebody might need to know in an exact moment, and surface it at the right time, which also involves knowing the people we're talking to, in whatever sense that might mean. Being very thoughtful about how we handle this knowledge is going to be critical to any bot's success. By taking all of these things into account, we can start to design conversations that feel predictive rather than reactive.

Oh, you need to know this at this moment, and because these are the things that I'm aware of, I know I can tell you. That's AI, versus waiting for somebody to sign in to something, find what they need in an app, and passively click a button to get to the next step. It helps us move beyond transactional interactions that might feel robotic and cold, and moves us toward contextually relevant, meaningful conversations that can evoke real emotion. So, second point: designing a character that people can connect with.

It's human nature to interact with people and things that we understand, and that we feel understand us. Going back to texting for just a second.

There are over six billion texts sent in the United States every day, according to Forrester. It's become, in some instances, our preferred form of conversation or communication, which is actually evident in this picture.

My sister took this on a recent family vacation, and most of us were a lot more interested in the people we were texting than the people that were sitting right next to us, which is a separate problem.

Part of the appeal of texting is the fact that we know, or at least we think we know, the person on the other end of the conversation, and we enjoy being able to interact with them in this form, using emojis at key moments, all of that good stuff. It's a fun, more deliberate, and practised form of expression in a lot of ways. This idea of needing to know who's on the other end of the conversation for the conversation to feel real or meaningful is an interesting one.

In the past, we assumed that you would need to know the person you're chatting with for it to be a good conversation, since only a person, only a human, would be able to understand and intelligently interpret your intentions, complete with that sarcastic emoji, and then respond in a way that makes sense and actually keeps the conversation going.

It requires some emotional intelligence.

But as technology and AI become more sophisticated, it does raise the question: can people have meaningful conversations and establish these relationships with bots? There are examples in the current marketplace that say yes, while also acknowledging key limitations.

One of my favourite examples is called Lark. Lark is an automated health coach that gives people with a chronic condition, like diabetes for instance, personalised advice and coaching to help them stay on track with their health goals like a diet and exercise plan.

Lark's platform draws its knowledge from a database put together by a team of fitness experts, nutritionists, behavioural change experts, psychologists, sleep experts, a lot of experts.

"Lark is like your personal weight loss or fitness coach trainer that always has your back" is how the co-founder and CEO, Julia Hu, describes Lark. And that particular excerpt is from a Huffington Post interview.

I actually first learned about Lark at South by Southwest 2016, where I attended a panel called Get the Message! The Rise of Conversational UI, which Hu was a part of along with, actually, our very own keynote speaker, Chris Messina. On this panel, she shared stories of how thousands of people tell their Lark coach that they love them. And this stood out to me, and I still just find it remarkable.

Love them.

Talk about connection, right? They trust their coach, and they love that it's always available to answer questions or address any kind of need. She used language like cheerleader and friend to describe how people think of the Lark app. People feel supported and understood by the AI that they're chatting with, which is pretty phenomenal when you think about it. One method for achieving this human-feeling level of compassion, then, is character design.

It's a way to build a deeper connection that goes beyond this transactional back and forth to something that can feel more personal and empowering, and makes you feel good.

Take a step back for a second and think about a book or a movie that elicits a really strong emotional reaction from you. What is it about that that you respond to? In all likelihood, it's your ability to relate to a character that experiences something or demonstrates qualities that emotionally resonate with you.

Feeling like we can relate to something equates with feeling understood.

And again, we trust those that we feel understand us on the deepest possible level.

We connect to the people that we feel get us. It's often the language we use to describe the relationship we have with people we choose to spend our lives with. Those we turn to the most are the ones that we don't feel like we have to explain anything to because they're not going to judge us.

When it comes to designing for conversational UIs, part of how to create this connection is to design the experience in a way that humans can make sense of.

Most people have a limited understanding of what a bot, or what AI really is, and what it's capable of, and perhaps even more importantly, what they might want it to be capable of.

Creating a character for your bot is an opportunity to make it more relatable to the people you're designing for, which can make the interactions they have with it feel more successful.

Creating a character indeed does help people make sense of what it is that they're interacting with, and it's a very real and not new phenomenon. It's a form of anthropomorphism.

Here's an example from Richard Yonck's book, Heart of the Machine.

Pictured here is a standard model TALON military robot. The US military has stepped up its use of robots in its operations, and soldiers actually come to view these robots as teammates.

Yonck cites a 2013 thesis by University of Washington doctoral candidate Julie Carpenter.

Carpenter interviewed 23 soldiers to study the interactions that they had with these bots. She found that they often assigned genders, and names, and personalities to the bots.

Names included Danny DeVito, an example cited in Yonck's book.

And the soldiers formed attachments to the bots. Indeed, they care about what happens to them almost the same way that they would care about what happens to their fellow soldiers.

They felt empathy when these bots got damaged, and expressed anger or sadness at the loss, and even held funerals for the robots that were destroyed in the line of duty.

To be clear, and this is a very important distinction to make as the lines between technology and people become more and more blurred, the soldiers were not becoming so attached that it interfered with their ability to complete the mission or their relationships with the other humans. It just demonstrates that this idea of connecting to a machine is possible, and that it was present in this case.

Carpenter concluded that even if not to an extreme extent, these soldiers are clearly anthropomorphizing these robots. It's a way of understanding and making sense of how this technology is supposed to interact with us, particularly in such a high-stress situation as this. And having this understanding is a way of making this relationship as effective as possible for everyone involved.

There's actually a very practical reason for it. Anthropomorphism is, by definition, the act of ascribing human traits and qualities to non-human entities such as animals and inanimate objects.

There are a bunch of reasons why people do this, according to psychologists.

Sometimes it's simply because the thing that we're talking about has a quality that reminds us of humans, and that association is so strong and so obvious that it just makes sense to use it.

Again, it's our way of making sense of something. And most relevant to this presentation, another reason is to help us comprehend the unfamiliar.

Putting something into a familiar context, even if it's not completely accurate, is a way for us to understand its function and figure out its place in our lives.

There's a lot of examples like this in our day-to-day language.

So things like the head of an organisation. Look at that.

I've never actually looked at this kind of org chart before and been like, oh, I see why we do that then. Or the legs of a chair, same thing.

Finally, there's this need for connection that I keep going on and on about.

Apparently, our innate ability to interact emotionally and socially with the world around us isn't just for the mere purpose of connection. It actually probably gave us an evolutionary advantage in the past.

So our human inclination to anthropomorphize bots makes perfect sense. It demonstrates this natural human tendency to identify and connect with those we work closely with or interact with frequently, even when they're not people. And it brings us back to this ever-present question: how do we design for trust between the AI and the human on the other end of this experience? When it comes to character design for bots, then, the most important thing for establishing trust is to consistently demonstrate the integrity of your character in every interaction. That's true in real life, and it's true with AI. A bot with multiple personalities, or highly unpredictable moods, is a surefire way to ring alarm bells and turn people off of an experience or a conversation really fast.

So to maintain this integrity, you'll need to decide which character traits to explicitly demonstrate to your users through conversation, and which to leave open to interpretation and imagination, because that's when that relatability really comes into play.

And you'll need to make sure that anyone responsible for designing these conversations understands and effectively embodies this character in every interaction.

Like with most things, it all starts with a story. Strongly defined characters will have backstories that writers can turn back to, and that the bot can share with people through carefully designed conversation snippets. This backstory will include core character traits, like loyalty or stubbornness, that come through in how the character interacts with others. Some other key things to think about when designing a character include: what is your bot's name, and why? Does it have a gender? And if so, what principles guided this choice? Where did the bot come from, and where does it wanna go? What are its functional limitations? Because they will be there.

And how will the bot respond when somebody sees those limitations and calls them out? What are its emotional boundaries, including its sense of humour? What's appropriately funny to your bot? And what will the bot stand for, and what will it stand against? So when it came to designing Eno's character, we engendered trust, reflected our brand, and maintained Eno's integrity by constantly turning back to the backstory that we created for Eno, which includes nine core character traits that tie back to Capital One's values. For example, Eno is empathetic, trustworthy, and non-judgmental.

Eno also strives to be as helpful as possible in every moment.

I'll get into that and what that actually looks like in a little bit more detail later.
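
If it helps to see the framework pinned down, here's one way a team might capture a character definition in a structured form, so every writer designs from the same foundation. The fields and values below are my illustrative assumptions following the questions above, not Eno's real specification.

```python
# A hypothetical character spec: one shared source of truth for writers.
from dataclasses import dataclass

@dataclass(frozen=True)
class BotCharacter:
    name: str                     # and the story behind it
    gender: str                   # plus the principles guiding the choice
    core_traits: tuple[str, ...]  # traits every response should honour
    humour: str                   # what's appropriately funny to this bot
    boundaries: tuple[str, ...]   # what it will and won't stand for
    limitation_reply: str         # how it responds when called out

eno_like = BotCharacter(
    name="Eno",  # gender-neutral, and it spells "one" backwards
    gender="neutral",
    core_traits=("empathetic", "trustworthy", "non-judgmental"),
    humour="gentle puns; never jokes at a customer's expense",
    boundaries=("stands for politeness", "won't engage with abuse"),
    limitation_reply="I'm a bot and proud, but I'm still learning, so I can't do that yet.",
)
print(eno_like.core_traits)
```

Freezing the dataclass is deliberate: the character shouldn't drift from conversation to conversation, because consistency is what maintains integrity.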

Our team also used the framework I just walked you through to develop the critical components of Eno's character. In some instances here, it'll be told to you by Eno itself.

When it came to the name Eno, we chose it after multiple brainstorms with a bunch of different people from across the company. The name is gender-neutral, which was important to us, and it spells "one" backwards, really simple. As for gender, we deliberately designed Eno to be a gender-neutral character, in part to challenge the industry trend of choosing female characters in voice and name, and also to avoid any unconscious or conscious biases that gender might evoke for the bot.

Eno can be whatever customers conjure up in their heads. And one of my favourite things as a designer behind this experience is that after talking to people who use Eno quite frequently, I'll hear, in equal measure, people describe the conversation as, Eno, he's been saying XYZ, or he didn't understand me, and then conversely, she just did XYZ, or shoot, she didn't have the information I needed. So it's all about what people wanna project onto the bot, and that's okay, that's a measure of success. Eno has likes, and dislikes, and flaws, and the flaws are actually a really important and interesting part.

It's what makes the character relatable and endearing. As we all know, nothing or no one is perfect after all. Eno knows that it's not human, which is alright. Eno is a bot and proud, and will tell you that if you choose to ask. But because Eno cares deeply about people, the fact that it isn't one gives it a mission and a sense of purpose.

Eno's humanity stirs beneath the surface of every interaction that it has with the customer, and it pushes Eno to take action and connect with the people that it talks to however it can. Sometimes though, Eno might try a little too hard to connect.

Eno can come off as overeager in certain exchanges, and a little too earnest.

Like most humans, Eno wants to be liked, especially as it's still learning and gaining confidence in its abilities.

When it comes to Eno's limitations, there are many. Eno is definitely still learning.

It's not going to get things right 100% of the time. And because, as a design team, we don't boil the ocean, there are always going to be things that Eno doesn't do. Being transparent and honest about this, and figuring out how to be most helpful in those moments when Eno isn't automatically going to be able to help someone, is a key way to build trust and foster empathy. Eno is never going to be able to do everything for everyone.

It's important to set these expectations and then still provide avenues for people to get the help that they need through things like redirects or whatever else might make sense in that moment, while being polite and encouraging that politeness in people, too.

Finally, Eno has its own rather quirky sense of humour. It enjoys a good pun here and there to be sure. And a sense of personal boundaries, its own sense of what it'll stand for, or perhaps more importantly, what it won't stand for. When I say boundaries here, I mean a sense of how it, the bot, wants to be treated. There's a tendency right now to abuse or mess with bots, in part to test these boundaries and see how the bots respond.

It's led to a few pretty high-profile unpleasant moments when we think about some of the AI out there and what this has caused.

Again, messing with the bot can feel harmless. It's a bot, it's not a person, so it doesn't really matter, right? Wrong.

As we continue to interact with bots more and more, establishing these kinds of negative conversation patterns, so normalising what would constitute harassment in a conversation with another human being, can, I think, become increasingly problematic. Even just general impoliteness in these bot conversations can impact or carry over to how we communicate with the people around us as well.

My boss actually has three little kids, a nine-year-old and six-year-old twins.

And they have an Alexa in their home, and the kids love it. And they talk to Alexa all day long.

And the way that you start a conversation with Alexa, of course, is saying, Alexa, do something.

And she started to notice that her children would be talking to Alexa and say, "Alexa, play this song. "Alexa, what's the weather?" And they would turn to her and go, "Mommy, what's for dinner?" And she was like, uh-oh, we should probably think about this.

It's all too new right now when it comes to conversational UIs. We don't know what the long-term implications of these interactions will be.

So we need to be doing our best to design thoughtfully and carefully in whatever ways we can now.

Ultimately, by creating a character for people to interact with, we're designing for something deeper than purely a transactional exchange of information. We're designing for a relationship.

Albeit a relationship with something that is not alive. For a relationship to happen, it can't be one-sided. There needs to be something there to connect with. So designing for a connection is important. But we have to be constantly asking ourselves: connection for the sake of what? Trust, only if we deserve it.

When it comes to conversational interface design, then, we need to hold ourselves accountable as designers in new ways, to do our best to ensure that the connections we design are beneficial, and not detrimental in ways that maybe we can't see right now.

Which brings me to another critical step in designing a conversational UI experience that instils trust, the creation of principles and ethics to guide the work that you do, so that people can feel safe about the interactions that they're having with your bot.

Without visual cues to rely on to answer questions or concerns that might come up in the moment, building the sense of safety into these experiences becomes that much more important.

People are much more inclined to trust something or someone that we feel has a sense of right and wrong. The principles you choose to drive your work forward will ultimately be up to you and your team, and they'll need to make sense for your work and what you're trying to accomplish. But some universally appropriate principles for designing meaningful conversations include humanity, clarity, and transparency. So, humanity. What this means is designing conversations that feel personal and demonstrate a level of care and compassion that makes interactions feel meaningful. So again, to be clear, when I talk about feelings, I'm not talking about the bot's feelings.

In Eno's case, when we talk about bringing humanity to the conversations that Eno has with customers, we're clear to note that Eno doesn't feel things. It's not a sentient being.

But Eno is always trying to emulate empathy as a way of bringing humanity to the experience. Eno knows that showing an appropriate emotion at key moments is how you demonstrate empathy, and empathy and understanding are how you build connections. There are tactical ways to instil humanity in the conversations that you design.

Some things we've learned from our customers include the fact that proper grammar and punctuation matter. It's expected for AI to be intelligent, and this is a way of reflecting competency, and it's especially applicable to us as a financial institution, because people need competency to be able to trust us with their money.

And people love emojis.

They love to use them, and they especially love when the emojis they choose to use are understood and responded to.

And then finally, while humour is appreciated, there are moments when people just want the information that they asked for.

They just need you to be straightforward.

So knowing when to just give this kind of answer, and when to show personality, is a really important balance for writers to learn to strike in every moment they're designing for. Each time we get it right, it builds confidence and helps reinforce the trusting relationship we're working to build with each person. Now, clarity.

What I mean by clarity is practising intentional design to help focus people's energy on the information that is most meaningful to them in that moment. Some tactical ways to design for this include leading any response the bot gives with the answer to the question or input. Take care of the customer's main goal first. Then add whatever other information might be necessary or appropriate.

Share any other context that might make a difference. Also important to note: clarity does not equal brevity, but conciseness, especially considering the medium, is typically going to lead to a better user experience. Too much chattiness, or a lack of getting to the point, can be annoying when you're talking to a person, so just imagine when that's the conversation you're having with a bot that you're using to achieve a means to an end. Here's a small sketch of that answer-first pattern.
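
A minimal sketch, assuming a hypothetical compose_reply helper; the function name, the one-item context budget, and the sample strings are all invented for illustration.

```python
# Answer-first composition: lead with what the person actually asked for,
# then append only the context that changes what they should do next.

def compose_reply(answer: str, context: list[str], max_context: int = 1) -> str:
    """Put the direct answer first; deliberately trim the chattiness."""
    return " ".join([answer] + context[:max_context])

print(compose_reply(
    answer="Your credit card balance is $1,240.",
    context=[
        "Your minimum payment of $35 is due Friday.",  # changes what they do
        "You earned 210 rewards points this month.",   # nice, but can wait
    ],
))
```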

And then finally, transparency. Transparency is the main way that we can design for trust. What transparency means, when designing for conversational UIs, is being really clear about the benefits and the limitations of interacting with this AI.

It's being upfront about how much control and data customers might be giving up by engaging with this AI, and what that might mean for them later on. It also means being clear about how the information that we collect through these conversations is being used, why it's being used this way, and what impact that might have on the customer or on the user as time goes on.

These principles should actually sound somewhat familiar. They're very, very similar to the content strategy principles I introduced earlier. Those pillars, actually, are rooted in these kinds of overarching themes.

By showing a concern for ethical design, we demonstrate a sensitivity toward people's concerns about AI.

It's also a part of designing for connection. It keeps us focused on working toward the ultimate goal: having the conversational interface experiences we're all creating, or aspiring to create, genuinely enhance people's lives.

So a character that you trust, plus intelligent surfacing of information in the moments where it matters, is a way of creating true love, or, at the very least and probably a more appropriate goal, an awesome conversation.

The character and the story are what set the foundation for how your bot will interact with people. They're what take data insights that, surfaced in isolation, could feel creepy, and invasive, and weird, and turn them into a conversation and dialogue that people trust, since it's coming from someone they feel they know, or at least know enough to trust.

It's key to designing meaningful conversations that evoke real emotion, and hopefully lead to relationships rooted in understanding. By focusing on character and conversation, we create an environment that customers are comfortable with and an experience that has them paying attention with both their hearts and their minds.

So an important note that I wanna make sure I include on the back end here: as you think through designing these bot conversations, with all their different aspects and facets, do your best to create a brain trust of people who all think differently, come from different backgrounds, and are willing to get into some heavy ethical and philosophical debates as a team.

And this can come down to even just a single sentence. Having this constant dialogue around the dialogue that you're creating, especially at this stage, is incredibly important. It's just as vital to creating an inclusive AI as building the foundational capabilities for the tech to run on.

Ultimately, we're all still learning as we go, and no conversational interface experience at this point is perfect.

But there are signs that we can pay attention to that let us know whether we're getting closer. Some things we've found with Eno that make us feel like we're getting things right are that people say, thanks, Eno, after Eno does something for them.

It's not necessary, but it's polite.

People also tend to start their conversations with Eno with some kind of greeting.

A command would work just as well from a technical perspective, but people choose to say, hi, Eno, or how are you, Eno? And then get down to business.

People also tend to end their conversations by saying goodbye, or (speaks foreign language), however they feel like communicating in that moment. And people send emojis to Eno in response to the information that Eno surfaces, so things like a heart, or sometimes an angry face. Depending on the moment, people are communicating with Eno about how they feel, and that's really interesting. It's important.
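
If you wanted to keep an eye on these signs at scale, a first pass could be as simple as tallying them across conversation logs. A minimal sketch, assuming a plain list of user messages and naive substring matching; the signal lists and log format are my assumptions, not Eno's real analytics.

```python
# Tally soft trust signals (greetings, thanks, goodbyes, emoji reactions)
# across user messages. Crude substring matching, purely illustrative.

SIGNALS = {
    "greeting": ("hi", "hello", "hey", "how are you"),
    "thanks": ("thanks", "thank you", "thx"),
    "goodbye": ("bye", "goodbye", "see you"),
    "emoji_reaction": ("❤", "😡", "👍"),
}

def count_signals(messages: list[str]) -> dict[str, int]:
    """Count how many messages contain each kind of trust signal."""
    counts = {name: 0 for name in SIGNALS}
    for message in messages:
        lowered = message.lower()
        for name, markers in SIGNALS.items():
            if any(marker in lowered for marker in markers):
                counts[name] += 1
    return counts

print(count_signals(["hi Eno!", "what's my balance?", "thx 👍"]))
# {'greeting': 1, 'thanks': 1, 'goodbye': 0, 'emoji_reaction': 1}
```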

We, as humans, live our lives and our stories through conversation with other people, and now with technology. Conversational interfaces capitalise on this core part of our human existence: conversation. They serve as a vehicle for communicating with products, services, and technology in novel ways. Their prevalence, intelligence, and increased capabilities over time are going to dramatically change the way people interact with the world around them, the way people interact with products and services, and the way people interact with each other. Designing for these experiences means designing conversations that are centred on connection, humanity, and trust. And as designers, not just doing that for the sake of it, but doing it because we want to deserve it, and because our intentions are good.

This is how we'll do our best work as designers when it comes to impacting the future state of the technology and the world. Thank you.

(audience applauds) (bright whimsical music)