Design for Cognitive Bias: Using Mental Shortcuts for Good Instead of Evil

Users’ minds take shortcuts to get through the day. Usually they’re harmless. Even helpful. But what happens when they’re not? In this talk, based on my book from A Book Apart, I’ll use real-world examples to identify some particularly nasty biases that frequently lead users to make bad decisions. I’ll then talk about some content strategy and design choices we can use in our apps, designs, and platforms to redirect or eliminate the impact of those biases. Finally, I’ll explore our own biases as designers and some methods to prevent our own blind spots from hurting users.

Design for Cognitive Bias: Using mental shortcuts for good instead of evil.

David Dylan Thomas: Author, speaker, filmmaker.

Keywords: Cognitive biases, illusion of control, framing effect, cognitive fluency, tools, design

TL;DR: David walks us through a number of examples of cognitive biases and how they drive outcomes subconsciously, even when we are aware we have them. Being aware of our own biases takes work, and because they are biases, they are tricky to explicate. David walks us through a set of tools and practices that can help us identify our biases before they cause problems, along with examples of companies and organizations using these tools to harness our biases, ask more effective questions, and enable more inclusive design.

David is a Content Strategy Advocate at Think Company, an experience design firm based in Philadelphia. He wrote a book called Design for Cognitive Bias. The road to the book was a podcast he hosts called The Cognitive Bias Podcast, and the road to the podcast began with a talk by Iris Bohnet called Gender Equality by Design.

Bohnet talks about how a lot of implicit gender or racial bias boils down to pattern recognition. Ex: Imagine you’re hiring a web developer. The image you may think of is ‘skinny white guy’, not because you actually believe men are better programmers, but because patterns built up across your lifespan from media, work history, and culture bias your brain. This may lead you to dismiss names at the top of a resume that do not fit that profile.

When David recognized that something as terrible as racial or gender bias could boil down to something as simple and human as pattern recognition, he set out to learn all he could about cognitive bias.

He began with the RationalWiki page on cognitive biases, which lists over one hundred of them, and took one per day to explore, which led to the podcast and the book.

It’s worth establishing from the jump what cognitive bias is. Essentially, it’s a series of shortcuts our minds take that often help, but sometimes hurt. We have to make thousands of split-second decisions a day, and if we stopped to think about each one, we’d never get anything done. It’s a helpful trait our brain has which allows us to live much of our lives on autopilot. But autopilot can be wrong, and we call these errors cognitive biases. Let’s explore some:

Illusion of control: We’ll start with a fun example. Say you are playing a die-rolling game. If you are looking for a high number, you roll hard; for a low number, you roll soft. We know this makes no difference, but we like the illusion of control.

Confirmation bias: A less harmless one. Think back to the Iraq war and weapons of mass destruction. These were the premise for going in, none were found, but even then many people still believed (and do to this day) that they were there.

These biases are extremely difficult to combat. You may not know you have them (bias blind spot). 95% of cognition happens below the threshold of conscious thought so even though you’re making decisions, you may not realize why.

Even if you know about the bias, you will likely still do it. Ex: Anchoring bias: If David asks you for the last two digits of your phone number and then asks you to bid on an auction item, those with lower numbers will bid lower, and vice versa. He can tell us this even before running the experiment and we would still do it.

The good news! There are content and design choices we can make that can help keep harmful cognitive biases in check – or even use them for good.

Returning to the skinny white guy: Multiple experiments show that if hiring managers are shown identical resumes where the only difference is a male vs. a female name, the male candidate gets selected. What about the name is helping you, the hiring manager, decide?

This is analogous to a signal vs. noise problem. The signal, the information that actually helps you make the decision, is things like qualifications and experience. The noise that gets in the way may be gender or race, things that you are reading into the name.

Ex: The City of Philadelphia ran an anonymized round of hiring for a web developer and found two things. First, even in the tech world, the best method for anonymizing turned out to be having an intern with no stake in the hiring process print the resumes and redact the names with a marker. Second, even if recruiters liked a resume, the next step was to look at the developer’s portfolio, at which point the profile information would be visible, rendering the experiment useless.

To resolve this, they created a Chrome plugin to anonymize that information as it loaded. (FYI, they also put the code on GitHub; you can find it there now if you ever want to try this technique.)
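The plugin itself is browser-specific JavaScript, but the core redaction step can be sketched in a few lines of Python. The function name and sample text below are hypothetical illustrations of the technique, not the City’s actual code:

```python
import re

def redact_profile(text: str, identifying_terms: list[str]) -> str:
    """Replace identifying terms (names, etc.) with a neutral placeholder
    so reviewers see only the signal: qualifications and experience."""
    for term in identifying_terms:
        # \b word boundaries avoid redacting substrings inside unrelated words
        text = re.sub(rf"\b{re.escape(term)}\b", "[REDACTED]", text,
                      flags=re.IGNORECASE)
    return text

profile = "Jane Doe. 8 years of Python. Jane maintains three open source projects."
print(redact_profile(profile, ["Jane", "Doe"]))
# [REDACTED] [REDACTED]. 8 years of Python. [REDACTED] maintains three open source projects.
```

In a real extension this would run over the loaded page’s DOM rather than a plain string, but the principle is the same: strip the noise before a human sees it.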

Cognitive fluency: This is the idea that if something looks like it will be easy to read, we’ll assume whatever it’s talking about will be easy to do, and vice versa.

David’s into pancakes at the moment! Ex: pictures of two pancake recipes, one with poorly formatted, dense written text, and the same text simplified and accompanied by visual images showing the necessary steps. The latter is more likely to make you feel like you can make the pancakes.

Leveling up – if you put this information into a two minute how-to video, we’re for suresies making pancakes!

Another ex: Do you want to drive to town or take transit? Look at a picture of a printed, difficult-to-read, detailed train schedule alongside a screen grab of just the slice of needed information in the app view. The app view is much simpler to understand and therefore more likely to induce the user to take transit.

Similarly, if people are shown two almost identical written statements, ex: ‘Rosa Parks was born in 1912’ and ‘Rosa Parks was born in 1914’, but one is written in a slightly larger and coloured font, when asked which statement is true (sidebar: both are actually wrong – it was 1913), people typically choose the easier-to-read answer, assuming it is true. Worse, if something rhymes (‘An apple a day keeps the doctor away’), we also think it is more true.

This has consequences. We love certainty, and things that are easy to process feel more certain. For example, if asked for a memory from when you were five, you may not remember or feel certain, but if asked what you had for breakfast, it’s easy to recall. Things that rhyme or use plain language, clear fonts, or imagery are easier to process and feel more certain.

This becomes important when discussing things people need to believe. America has a crisis where African Americans generally do not believe health information that comes from the government. A 2002 survey showed only 37% of African American respondents endorsed the statement that ‘The government usually tells the truth about major health issues like HIV/AIDS.’ By 2016, that number had dropped to 18%. We could do a whole other podcast on the many legitimate reasons African Americans have concerns about government-issued health information, but the fact remains that this information can save lives. If it needs to rhyme or use plain language, so be it.

When David originally included this information in his book, his editor challenged him on it: ‘This is an interesting idea, but can you point to examples where it has played out?’ Short answer: yes.

One study showed that pregnant smokers and ex-smokers who received an intervention with materials written at a third-grade reading level were more likely to achieve abstinence than those who received standard materials. A similar study showed that a plain-language, pictogram-based intervention as part of medication counseling resulted in decreased dosing errors and improved adherence among multiethnic, low-socioeconomic-status caregivers administering medicines.

Click It or Ticket legislation: wear a seatbelt or get fined. Older drivers adhered; younger ones did not. When the campaign was rebranded as ‘Click It or Ticket’, seat belt use among youth improved dramatically. To put this in human terms, for every percentage point increase in seat belt use, 270 lives are saved. Extrapolating the math, this means roughly 4000 lives saved in part through rhyming!
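The extrapolation works out roughly like this (the 270 and 4000 figures are as quoted in the talk; the arithmetic connecting them is mine):

```python
lives_saved_per_point = 270   # lives saved per percentage-point rise in seat belt use
total_lives_saved = 4000      # figure quoted in the talk

# Implied rise in seat belt use, in percentage points
implied_rise = total_lives_saved / lives_saved_per_point
print(round(implied_rise, 1))  # 14.8 -> about a 15-point increase in usage
```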

The Framing Effect: In David’s view this is the most dangerous bias in the world. Ex: You’re at the store. One package says ‘Beef, 95 percent lean’ and another says ‘Beef, 5 percent fat.’ People choose the lean option. If you change the stakes: ‘Should we go to war in April or should we go to war in May?’ it’s much more problematic. We are not discussing whether to go to war in the first place, just when. Wars have been started over less.

If you are multilingual, you have a secret weapon against the framing effect. If you can think about the decision in your non-primary language, you are less likely to fall for the trap.

The framing effect can also be used for good. If you show an audience an image of an older person driving a car and ask ‘Should this person drive this car?’, you will get a policy discussion, which only reveals which side various audience members are on. But show the same image with the question ‘How might this person drive this car?’ and you will get a design discussion; you will learn several workable methods and solutions. Changing the words changes the frame, which changes the entire conversation. Zooming farther out, what if we ask ‘How might we do a better job of moving people around?’ This brings in broader conversations around public transport.

What about our own cognitive biases? These are the ones that can get our users in trouble. David used to have this idea of the scientific method: I have an idea. I’m going to test that idea, write down what I did, a bunch of other people can try it, and if they get the same result it’s true; let’s make it a law and move on. There’s more to it than that.

Instead… I have an idea… Do all of the above steps but when you get to the point that you think you have proven something, then spend your time trying to prove yourself wrong. If I’m wrong, what else might be true? This is more rigorous, but it specifically aims to fight confirmation bias.

As designers, we tend to fall in love with our first idea. This makes it easy to leave good design on the table. Ex: I’ll give you a sequence of numbers; you add one more and I’ll tell you if it fits the pattern. Given 2, 4, 6, we likely choose 8. It fits the pattern. What is the pattern? We think it’s even numbers. Computer says no. What if 2, 4, 6, 7 also fits, the pattern being that each number is higher than the last? This is perhaps more elegant and interesting, but we never get that far because we were happy with even numbers as a solution.
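This is the classic 2-4-6 confirmation-bias task, and the trap can be sketched in a few lines of code. The rule names below are illustrative assumptions, not from the talk:

```python
def hidden_rule(seq):
    # The actual rule: each number is greater than the last
    return all(a < b for a, b in zip(seq, seq[1:]))

def our_hypothesis(seq):
    # What we assume the rule is: a run of even numbers
    return all(n % 2 == 0 for n in seq)

# Confirming tests only: every guess satisfies BOTH rules, so we learn nothing
confirming_guesses = [[2, 4, 6, 8], [10, 12, 14], [20, 22, 24]]
print(all(hidden_rule(s) and our_hypothesis(s) for s in confirming_guesses))  # True

# A disconfirming test, i.e. trying to prove ourselves wrong, exposes the truth
print(hidden_rule([2, 4, 6, 7]))     # True: it fits the pattern...
print(our_hypothesis([2, 4, 6, 7]))  # False: ...but breaks our hypothesis
```

Only the guess designed to fail our own hypothesis tells us anything new, which is exactly the “try to prove yourself wrong” step above.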

There are tools out there to help us prevent this outcome. Let’s look at some. Red team/blue team: the idea is that a blue team does the initial research, but before they proceed, a red team comes in for a day and goes to war with them. Their job is to look for every hidden agenda, every more elegant solution, every potential cause of harm that the blue team may have missed. This approach is economical.

Speculative design: See the show Black Mirror (and see Jack’s talk from week 1 of this conference!), a Twilight Zone for tech. Think of a near-future technology and tell a story about what might happen in human hands.

Speculative design is a real job. Superflux went to the UAE to help figure out the future of energy. The core question: should we continue investing in fossil fuels or switch to renewables? Superflux modeled what air quality would look like over time if fossil fuel use continued, then bottled that air and had the UAE representatives breathe it. Ten years out, the air was unbearable. Outcome: the UAE announced plans to invest over $150 billion in renewables by 2050.

Assumption audit: This is another tool which allows you to get biases out on the table before you begin. Gather your team and ask:

  1. What identities does your team represent? Self-identified as far as you are comfortable, but consider intersectionality (gender, race, age, disability, income, location, neurodiversity, education, etc.)
  2. How might those identities influence the design/outcome of this experience/project?
  3. What identities are not represented?
  4. How might that absence of perspective influence the design/outcome of this experience/project?
  5. What might we do to include, honor, and give power to those perspectives in your design process?

Again, words matter. Include means include! Honor means pay them! Give power – particularly for those who will be impacted by your design – how can you find ways to give them power?

Deformation Professionnelle: This bias is around seeing the whole world through the lens of your job.

Ex: The paparazzi who ran Princess Diana off the road were, by the definition of their job (get the photos), doing a great job. As human beings, they were doing a lousy one.

Another charged example: Chief Ramsey, former Police Commissioner in Philadelphia, would ask officers, What do you think your job is? Many replied: To enforce the law. He then asked: What if I told you your job is to protect civil rights? That encompasses law enforcement, but it is a much bigger job, because it forces you to treat people with dignity. NB: The slide used for this part of the talk (featuring an image of Black Lives Matter protestors with placards) has been in David’s presentation since 2017 but reads differently in 2020. It creates a lot more emotion now, but he keeps it in because now more than ever it is clear that the way we define our jobs is a matter of life and death.

That day, Chief Ramsey was telling his officers: your jobs are harder than you think. David is here to tell us: our jobs are harder than we think.
Our jobs are not simply to ‘design cool stuff’. We must find a way to define our jobs in a way that allows us to be more human to each other.

Some folks are already working on this. Mike Monteiro has created a code of design ethics: a Hippocratic Oath for designers (first, do no harm).

Design Justice Network has come up with a set of 10 principles. Here are the first two:

  • We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems.
  • We center the voices of those who are directly impacted by the outcomes of the design process.

If you take just those two, you’ve got your work cut out for you. This is the difference between user-centered design and shareholder-centered design. We often think we are practicing the former when in fact we are practicing the latter.

The Tarot Cards of Tech: Artefact Group created these cards as a tool to prompt provocative questions about your design. Ex: How might cultural habits change how your design is used, and how might your product change cultural habits? Think about how Twitter might have evolved differently had they stopped to ask this.

neveragain.tech: This is playing out in the world of software engineering too. The Never Again pledge was signed by a large number of engineers when they saw their software being used to hurt immigrants. They pointed to a long history of data being used to hurt vulnerable populations. If necessary, they vow to do things like destroy immorally used data sets.

Project Maven: Google’s battlefield AI. The same collective action is starting to play out in places like Google. The people working on Maven said they were not in this business to make weapons and threatened to leave. This led to Google walking away from a $250 million military contract. [N.b. they then turned around and did the exact same thing with Dragonfly, a censored search engine for China. These battles are still being fought.]

“We must rapidly begin the shift from a ‘thing-oriented’ society to a ‘person-oriented’ society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered.”

This is not a quote from a software guru at a TED talk. This is Dr. Martin Luther King Jr.

MLK recognized this fifty years ago. So today, David asks us: How might we define our jobs in a way that allows us to be more human to each other? Thank you.