Design for Cognitive Bias: Using Mental Shortcuts for Good Instead of Evil

(upbeat music) - Welcome to Design for Cognitive Bias: Using Mental Shortcuts for Good Instead of Evil. We can certainly talk about how to use them for evil, but I feel like we're pretty good at that already. My name is David Dylan Thomas.
I'm a Content Strategy Advocate at Think Company, an experience design firm out of Philadelphia. And I wrote a book called "Design for Cognitive Bias," and the road to that book really started with a podcast I do called The Cognitive Bias Podcast. The road to that podcast began with this woman. Her name is Iris Bohnet, and she did a talk a while back called Gender Equality by Design which blew my mind. And one of the amazing things she talks about is this notion that a lot of implicit gender bias or racial bias really boils down to pattern recognition. So imagine you're hiring a web developer, and the image that pops into your head when I say the words web developer might be skinny white guy.
And it's not because you actually believe that men are better programmers than women, far from it, but the pattern that's been set up over your life, in movies and television, offices you may have worked in, right, makes that equation. And so if you see a name at the top of a resume that doesn't quite fit skinny white guy, you start to give it the side-eye.
So when I realised that something as terrible as racial or gender bias could, in a lot of cases, boil down to something as simple and as human as pattern recognition, I decided I needed to learn everything I possibly could about cognitive bias, and so I did.
This is the RationalWiki page for cognitive biases.
There's well over 100 here, and I realised, "Okay, I'm not gonna figure this all out in a day." So I took one a day, and this turned me into the guy who wouldn't shut up about cognitive bias.
And eventually, my friends were like, "Please Dave, just get a podcast." Now, it's worth establishing from the jump: what is cognitive bias?
And at the end of the day, it's just a series of shortcuts your mind is taking to get through the day. You have to make something like a trillion decisions every day. Even right now I'm making decisions about where to look, what to do with my hands, how fast to talk.
And if I thought carefully about every single one of those decisions, I'd never get anything done.
So it's actually a really good thing that a lot of our lives are lived on autopilot but the autopilot sometimes gets it wrong and we call those errors, cognitive biases. Here's a fun one.
It's called illusion of control.
And it happens when you're playing a game where you're rolling a die.
If you need a high number, you'll roll that die really hard. If you need a lower number, you roll it really gently. And everyone here knows that it makes absolutely no difference how hard you throw the die, but we like to think we have control in situations where we have no control, and so we embody that desire by how hard we throw the die. This one is not so harmless.
It's called confirmation bias, and it happens when you have an idea that's just stuck in your head and you really only look for evidence to confirm that idea, and if you ever see evidence that doesn't confirm that idea, you say fake news, and you move on.
Now, we saw a pretty powerful example of this at the beginning of the Iraq War.
The whole idea was we need to go into Iraq, because Saddam Hussein has weapons of mass destruction, and we need to get him before he gets us.
After we got in there, turns out, not so much with the weapons of mass destruction, and even basically a year into the war the President of the United States says, "Yeah, we didn't really find anything." Even then you still have lots of people believing that there were, in fact, weapons of mass destruction in Iraq, so much so that even 14 years later, over 50% of Republicans and around 30% of Democrats believed that there were, in fact, weapons of mass destruction in Iraq.
This is a very powerful bias, and we are gonna come back to it.
So these are really hard to fight. Part of the problem is you may not even realise that you have the bias, right? There's literally a bias called the bias blind spot, where you think you don't have any biases but you are sure that everybody else does. And part of the reason they're hard to spot is that about 95% of cognition happens below the threshold of conscious thought, right? So you're making these decisions so quickly, you don't even realise you're doing it.
In fact, the next time somebody asks you why you did something, the most honest answer you can give them is how the hell should I know? Even if you do know about the bias, you'll probably do it anyway.
So there's a bias called anchoring.
The way it works is I could ask everybody watching this to write down the last two numbers of their phone number and then I could say, "We're gonna bid on a bottle of wine." Those of you who wrote down lower numbers are gonna bid lower.
Those of you who wrote down higher numbers are gonna bid higher.
Now, here's the thing I could tell you about anchoring before we did the experiment, and you would still do it.
In fact, I could say, "Hey, I'm gonna pay you cash money not to do it." You'd still do it.
Now, the good news is that there are in fact content and design choices we can make that can help keep some of these biases at bay or even use them for good.
So let's return to that skinny white guy.
As it turns out, in experiment after experiment, if you have two identical resumes and the only difference is the name at the top, the male ones will move up the chain and the female ones will stay on the pile. But here's the thing: why do you need that information? What about the name is helping you, the hiring manager, decide who to hire? Think of it like a signal-to-noise problem, right? The signal, the thing that's actually helping you make the decision, is things like qualifications and experience. The noise, the thing that might actually be getting in the way, is things like gender or race, or whatever you're reading into the name in terms of gender and race.
So the City of Philadelphia actually did this. They did an anonymized round of hiring for a web developer position, and they discovered a couple of things.
One, even in the high-tech world of web development, the best way to actually anonymize a resume is to have an intern who has no stake in the hiring process physically print it out, get a marker, and redact it like it's a classified document. The other thing they found was that even when they found a resume they liked, the next natural step would be to go to GitHub, the code repository, to basically see that developer's portfolio. And, of course, the second they got to GitHub and the profile loaded, all the personal information would be there and ruin the experiment.
So, clever people that they were, they created a Chrome plugin that would anonymize that information as it loaded. And then, just to complete the circle, they took that code and they put it back on GitHub. It's in there now if you ever want to try this yourself. Now, another really important phenomenon to understand is cognitive fluency, and this is the idea that if something looks like it's gonna be easy to read, we'll assume whatever it's talking about is probably easy to do. But by the same token, if something looks like it's gonna be hard to read, we'll assume whatever it's talking about is probably hard to do.
Now, I don't know about you, but I've been making a lot of pancakes lately. This is a recipe for pancakes, and the text is kinda small and clumped together, and I might glance at that and think, you know what, pancakes are pretty hard. I don't know if I'm gonna make pancakes.
I could take that same content and give it nice big full-width imagery and little short bursts of text, and I might glance at that and think, I bet pancakes aren't that hard to make. Yeah, maybe I'll make pancakes.
A two minute video, forget about it.
We're making pancakes.
Now, think about deciding whether or not to drive into town or take public transportation. If I take a look at this printed schedule on the left, I immediately conclude that it is impossible to use public transportation.
I'm gonna drive.
I look at that app on the right.
I think, you know what, I bet public transportation isn't that hard. Maybe I'll give it a shot.
I'm gonna just ask everybody here to vote in their hearts, since this is recorded and I can't see you vote. So just raise your hands wherever you are.
How many of you think that Rosa Parks was born in 1912? Okay, how many of you think that Rosa Parks was born in 1914? Okay, you're both wrong.
She was born in 1913, but the point is, people usually pick 1914, and the reason is, if something is easier to read, we actually think it's more true. But it gets worse: if something rhymes, we actually think it's more true, and this has consequences. Now, what's happening is that we love certainty, and things that are easy to process feel more certain. I'll give an example.
If I asked you, "What did you get for your fifth birthday? Was it a toy truck?" you might say, "I don't know, I can't remember that." You're not very certain about that answer. It's hard to remember, it's hard to process, it doesn't feel very certain.
If I asked you, "Hey, what did you have for breakfast this morning?" an image might come right into your head, right? It's easy to remember, easy to process, and you feel very certain about that answer.
Things that rhyme are easier to remember and easier to process, so they feel more certain. Things that use plain language and big fonts and clear imagery are easier to process and feel more certain. Now, this becomes important when we're talking about things people need to believe. So there's a crisis in America where African-Americans, generally speaking, do not believe health information that comes from the government. If you look at a 2002 survey, only 37% of African-Americans agreed with the statement "The government usually tells the truth about major health issues like HIV/AIDS." By the time we get to 2016, that number has dropped to 18%. Now, we could do a whole other talk about why there are legit reasons
African-Americans have concerns about health information coming from the government. The fact remains, this is information that could save lives.
So if it needs to rhyme, if it needs to use plain language, if it needs to use big fonts and imagery, so be it. Now, when I originally put this information in the book, my editor very wisely challenged me and said, "Well, this is an interesting idea, but can you actually point to examples where this has played out?" And I'm glad she did, because it forced me to do the work and find some really interesting stuff like this. So this is a situation where you had women who were smoking while pregnant, and when they were given materials at a third-grade reading level, right, easier to process, they were more likely to abstain from smoking during pregnancy, and even six weeks postpartum. Similar situation: you have people who are helping other people.
Caregivers helping other people take their medicine, and when they were given a plain-language, pictogram-based intervention, you saw decreased medication dosing errors and improved adherence to actually taking the medicine. Now, you might think, okay, that's great for plain language and pictograms, but rhyming, really? So let me tell you about Click It or Ticket. So not too long ago in the States, a lot of different states rolled out legislation that said, okay, look, if you don't buckle your seatbelt, you can get a ticket.
And the legislation on its own did a lot of good, especially among older drivers, but younger drivers, not as much.
So they rolled out Click It or Ticket.
And the results for that: national belt use among young men and women ages 16 to 24 moved from 65% to 72%, and from 73% to 80%, respectively.
And just to put that in more human terms, for every percentage point you go up in people buckling their seatbelts, about 270 lives are saved.
So if you do the rough math, that's about 4000 lives saved, in part through rhyming. It's silly, but it works.
Now, the most dangerous bias in the world, for my money, is the framing effect, and it starts out pretty innocently. Imagine you go to a store and you see a sign that says "Beef, 95% lean," and next to that you see a sign that says "Beef, 5% fat." Which one do you think people are gonna line up for? It's the same thing, but I've managed to frame one to seem a little more appealing.
Now, this is all well and good when we're talking about beef, but what if I were to say, should we go to war in April, or should we go to war in May? See what I did there? We're no longer discussing whether or not it's a good idea to go to war in the first place, and wars have been started over less.
Now, if you are multilingual, you have a secret weapon against the framing effect.
If you can think about the decision in your non-primary language, you are less likely to fall for the trap.
So I speak a little bit of French.
So if I were to try to think about the beef decision in French, it would go something like this.
Let's see, beef, that's... "le boeuf." 95%, that's... (indistinct). And by the time I'm done with all that processing, right? I can see right through the scam.
The framing effect can actually be used for good as well. So there's an experiment where you show an audience a photo like this, and you ask them, should this person drive this car? And what you'll get is a policy discussion, right? Some people will say, "Old people are bad at everything, don't let them drive," and other people will say, "That's ageist, how dare you, let people do what they want." All you're gonna learn by the end of that conversation is who's on which side. Now, you can show that same photo to another audience and ask, how might this person drive this car? And what you'll get is a design discussion. Some people will say, "Oh, what if we change the shape of the steering wheel?" or, "What if we move the dashboard?" Right? And what you'll learn by the end of that conversation is several different ways that person might be able to drive that car. I only changed a couple of words, but by changing the frame of the conversation, I changed the entire conversation.
In fact, what if I were to zoom out a little more and ask, how might we do a better job of moving people around? 'Cause that's the reason the guy was in the car in the first place.
He was here but he wanted to be there.
And if I frame it this way, things like public transportation are now on the table. I wanna end by talking about our own cognitive biases, 'cause these are the ones that can really get our users in trouble.
So I told you, we'd come back to this.
I used to have a complete misconception of what the scientific method was. I thought it was: I have an idea about how the world works, I'm gonna test that idea, and if I get a good result, I'm gonna write down what I did, and a whole bunch of other people are gonna try what I did, and if they get the same result, great, write it down, it's a law, let's move on.
After talking to some actual scientists, I found that it's a little more complicated than that. So I have an idea.
Test it out, see if I'm right. If I get a good result, I write down what I did, a whole bunch of other people try that, and if they get the same result, great.
I get to now spend the rest of forever, trying to prove myself wrong.
I have to ask myself, if I'm wrong, what else might be true? Okay, let me go and try and prove that.
Now that is a much more rigorous approach, and it was invented specifically to fight confirmation bias. Now, as designers, it's very easy for us to leave good design on the table because we fall in love with our first idea. Let me show you how easy.
So let's say I'm gonna play a game with a computer, and the computer says, okay, I'm gonna put all these numbers here with a question mark. You put whatever number you want where the question mark is, and I'll tell you whether or not it fits the pattern. Put in as many numbers as you like, and when you're done, tell me what you think the pattern is. If you're like me, the first number you try is eight. And the computer says, congratulations, that fits the pattern.
Would you like to try another number? And if you're like me, you say, nah, I got this, the pattern is even numbers.
And the computer says no.
And the reason the computer says no is 'cause I didn't try this.
The pattern is not even numbers.
The pattern is simply every number is higher than the number that came before it, which is a much more elegant solution.
Probably easier to code, probably cheaper for your client. But I never got there, because I was so in love with my even-numbers idea. Now, there are tools out there to help us prevent this outcome.
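The number-guessing game described above (a version of Wason's classic 2-4-6 task) can be sketched in a few lines of code. The starting sequence 2, 4, 6 is an assumption for illustration; the talk only says "all these numbers":

```python
# Sketch of the hidden-pattern guessing game from the talk.
# The computer's real rule: each number must be higher than the last.
def fits_pattern(history, candidate):
    """True if candidate continues the ascending pattern."""
    return not history or candidate > history[-1]

# Confirmation bias in action: we only try numbers that fit our
# "even numbers" hypothesis, so every test confirms it.
history = [2, 4, 6]  # assumed starting sequence, as in Wason's task
for guess in (8, 10, 12):
    assert fits_pattern(history, guess)  # all accepted
    history.append(guess)

# The test we never ran: an odd number that would falsify "even numbers".
print(fits_pattern(history, 13))  # an odd number still fits: True
```

Only a disconfirming test, like trying an odd number and seeing it accepted, could have revealed the simpler rule.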
One of them is called red team/blue team. And the idea is that you have a blue team
who does the initial research, maybe gets you as far as the wireframes. But before they go any further, the red team comes in for one day, and the red team's job is to go to war with the blue team. They're there to find every hidden agenda, every more elegant solution, every potential cause of harm that the blue team missed because they were so in love with their initial idea. And I like this approach because it's fairly economical. I don't have to go to my boss and say, "Hey, guess what, we've got to spin up two teams now for every single project, and they're gonna check each other's work every single day." No, I need one team, for one day, to make it a little less likely we're gonna put something harmful out into the world. Another great tool for this is called speculative design.
And it's kind of like that show "Black Mirror," which, if you've never seen it, is kind of a "Twilight Zone" for tech: you think of a near-future technology, and you tell a story about what would happen if actual human beings got their hands on it. And the answer is usually terrible.
In fact, I think anybody working on a new technology, by law should have to write a Black Mirror episode about it first.
Now, this is a real job.
Superflux went to the United Arab Emirates to help them figure out the future of energy. And the question on the table was, should we continue down the road of fossil fuels, or should we start investing in renewables? And one of the things Superflux did was figure out, okay, what is your air quality gonna be like in five years, 10 years, 15 years if you continue down the road of fossil fuels? But they didn't just figure it out, they bottled it. And then they made them breathe it.
And by the time we get to about 10 years out, it is unbreathable.
By the end of that engagement, the United Arab Emirates announced they were going to invest over $150 billion in renewables.
Another great tool here is an assumption audit. And it's a way to get all your biases out on the table before you begin. What you do is you get your team in a room before you kick off, and you ask these five questions. One, what identities does your team represent? And you only self-identify as you feel comfortable, but you think about things like gender and age and income and neuro-diversity.
Then you ask, how might those identities influence the outcome of this project? Then you ask, who isn't here? Right? Anybody here ever been incarcerated?
Anybody here live below $27,000 a year? Then you ask, how might that absence of perspective influence the outcome of this project? And finally, you ask, what might we do to include, honour, and give power to the people whose perspective isn't here right now? And I chose those words carefully, right? So, include: yes, definitely talk to people. Honour: maybe pay them.
Give power.
See, especially for the people who are gonna be impacted by what you design but have no power or say in how it gets designed, what can you do to give them a say? Right? 'Cause they're the ones who are gonna live with the thing you make. The last bias I want to talk about is called déformation professionnelle. And I told you, I speak French.
It's a bias where you see the whole world through the lens of your job, which in the workaholic world we live in, might seem like a good idea.
Right up until it's not.
So the paparazzi who ran Princess Di off the road that night probably thought they were doing a good job. And technically they were, right? They were getting really difficult-to-get photographs that were gonna fetch them a really high price. What they weren't doing such a good job of was being human beings.
Now, the former police commissioner of Philadelphia, when he took the job, said to his officers, "What do you think your job is?" And a lot of them would say, to enforce the law. And he would say, "Okay, that sounds great and all, but what if I were to tell you your job is to protect civil rights?" Now, that encompasses enforcing the law, but it's a much bigger job.
Because it forces you to treat people with dignity. Now, this slide has been in here since 2017. And this year, obviously it has become far more visible and I get far more emotional about it, but I keep it in here because now more than ever, it is clear that the way we define our jobs is a matter of life and death.
Now, Chief Ramsey was telling his officers that day, your jobs are harder than you think.
And I'm here to tell us, our jobs are harder than we think.
It's not just designing cool stuff, right? We have to find a way to define our jobs in a way that allows us to be more human to each other. Now, some folks are already working on this. You have Mike Monteiro of Mule Design, who's created a little red book of design ethics, right? It's kind of a Hippocratic oath, first do no harm, for designers.
You have the Design Justice Network, which is doing amazing work in this area.
And they have these 10 great principles. I'm just gonna read you the first two. One, "We use design to sustain, heal and empower our communities, as well as to seek liberation from exploitative and oppressive systems." Two, "We centre the voices of those who are directly impacted by the outcomes of the design process." Now, if you just stick to those first two, you've got your work cut out for you.
And it's the difference between what Erika Hall calls user-centered design and shareholder-centered design.
We often think we're practising the former when, in fact, we're practising the latter.
There's another great intervention called The Tarot Cards of Tech, where you go and see these cards, and you click on them, and they flip around and give you these really provocative questions about your design.
How might cultural habits change how your product is used? And how might your product change cultural habits? If Twitter had asked itself this before it launched, we might see a very different world today.
We're seeing this play out in the world of software engineering as well.
There's the Never Again pledge, which a lot of software engineers came together and signed when they saw that their work was being used to hurt immigrants.
And they pointed to a very long history of data being used to hurt vulnerable populations, and they said, we don't wanna be a part of that history. And so they vowed to do things like, if necessary, destroy data sets that were immoral.
We see the same collective action starting to play out in places like Google,
where you had Project Maven, which was a battlefield AI. And the people working on it said, "Look, we didn't get into this business to build weapons. If you make us keep working on this, we're gonna walk." And Google backed down; they walked away from a $250 million military contract.
And then they turned around and did Project Dragonfly, which was a censored search engine for China.
So these battles are still being fought.
"We must rapidly begin to shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives and property rights are considered more important than people,
the giant triplets of racism, materialism, and militarism are incapable of being conquered." Now, this is not some software guru at a TED talk. That's Martin Luther King, over 50 years ago. He saw this was true, and it's only more true today. So the question I will ask all of us is, how can we define our jobs in a way that allows us to be more human to each other? Thank you.
(upbeat music)
Users’ minds take shortcuts to get through the day. Usually they’re harmless. Even helpful. But what happens when they’re not? In this talk, based on my book from A Book Apart, I’ll use real-world examples to identify some particularly nasty biases that frequently lead users to make bad decisions. I’ll then talk about some content strategy and design choices we can use in our apps, designs, and platforms to redirect or eliminate the impact of those biases. Finally, I’ll explore our own biases as designers and some methods to prevent our own blind spots from hurting users.
Design for Cognitive Bias: Using mental shortcuts for good instead of evil.
David Dylan Thomas: Author, speaker, filmmaker.
Keywords: Cognitive biases, illusion of control, framing effect, cognitive fluency, tools, design
TL;DR: David walks us through a number of examples of cognitive biases and how they drive outcomes subconsciously, even when we are aware we have them. Being aware of our own biases takes work, and by virtue of the fact that they are biases, they are tricky to explicate. David walks us through a set of tools and practices available to help us identify our biases before they cause problems, as well as examples of companies and organizations using these tools to harness our biases, ask more effective questions, and enable more inclusive design.
David is a Content Strategy Advocate at Think Company, an experience design firm based in Philadelphia. He wrote a book called Design for Cognitive Bias. The road to the book was a podcast he hosts called The Cognitive Bias Podcast, and the road to the podcast began with a talk by Iris Bohnet called Gender Equality by Design.
Bohnet talks about how a lot of implicit gender or racial bias boils down to pattern recognition. Ex: Imagine you're hiring a web developer. The image you may think of is 'skinny white guy', not because you actually believe men are better programmers, but because of patterns set up across your lifespan from media, work history, and culture that bias the brain. This may lead you to dismiss names at the top of a resume that do not fit that profile.
When David recognized that something as terrible as racial or gender bias could boil down to something as simple and human as pattern recognition, he set out to learn all he could about cognitive bias.
Began with the RationalWiki page for cognitive biases, which lists well over one hundred biases, so he took one per day to explore, which led to the podcast and book.
It’s worth establishing from the jump what cognitive bias is. Essentially, it’s a series of shortcuts our minds take that often help, but sometimes hurt. We have to make thousands of split-second decisions a day, and if we stopped to think about each, we’d never get anything done. It’s a helpful trait of our brains which allows us to live much of our lives on autopilot. But autopilot can be wrong, and we call these errors cognitive biases. Let’s explore some:
Illusion of control: We’ll start with a fun example. Say you are playing a die-rolling game. If you are looking for a high number, you roll hard; for a low number, you roll soft. We know this makes no difference, but we like the illusion of control.
Confirmation bias: A less harmless one. Think back to the Iraq war and weapons of mass destruction. These were the premise for going in, and none were found, but even then many people still believed (and do to this day) that they were there.
These biases are extremely difficult to combat. You may not know you have them (bias blind spot). 95% of cognition happens below the threshold of conscious thought so even though you’re making decisions, you may not realize why.
Even if you know about the bias, you will likely still do it. Ex: Anchoring bias: If David asks you for the last two digits of your phone number and then asks you to bid on an auction item, those with lower numbers will bid lower, and vice versa. He can tell us this even before running the experiment and we would still do it.
The good news! There are content and design choices we can make that can help keep harmful cognitive biases in check – or even use them for good.
Returning to the skinny white guy: Multiple experiments show that when hiring managers are shown identical resumes where the only difference is a male vs. female name, the male ones get selected. What about the name is helping you, the hiring manager?
Analogous to a signal vs. noise problem. The signal, which actually helps you make the decision, is things like qualifications and experience. The noise that gets in the way may be gender or race, things that you are reading into the name.
Ex: The City of Philadelphia did an anonymized round of hiring for a web developer and found two things. First, even in the tech world, the best method for anonymizing is to have an intern with no stake in the hiring process print the resume and redact the name with a marker. Second, even if recruiters liked a resume, the next step would be to look at the developer’s portfolio on GitHub, at which point the profile information would be visible, rendering the experiment useless.
To resolve this, they created a Chrome plugin to anonymize that information as it loaded. (FYI they also took this code and put it back on GitHub, you can find it there now if you ever want to try this technique).
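The city’s actual plugin isn’t reproduced here, but its underlying idea, stripping identity “noise” from a profile before a reviewer sees it while keeping the qualification “signal”, can be sketched as a simple redaction pass. The field names below are hypothetical, not taken from the real plugin:

```python
# Minimal sketch of profile anonymization: keep the "signal"
# (qualifications), drop the "noise" (identity cues).
# Field names are hypothetical, not from the city's actual plugin.
SIGNAL_FIELDS = {"skills", "experience_years", "repos", "languages"}

def anonymize(profile: dict) -> dict:
    """Return a copy of the profile containing only decision-relevant fields."""
    return {k: v for k, v in profile.items() if k in SIGNAL_FIELDS}

candidate = {
    "name": "Jane Doe",            # noise: invites gender/race inference
    "photo_url": "...",            # noise
    "skills": ["Python", "JavaScript"],  # signal
    "experience_years": 5,               # signal
}
print(anonymize(candidate))
# -> {'skills': ['Python', 'JavaScript'], 'experience_years': 5}
```

The real plugin did this redaction on the rendered GitHub page as it loaded; the filtering principle is the same.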
Cognitive fluency: This is the idea that if something looks like it will be easy to read, we’ll assume whatever it’s talking about will be easy to do, and vice versa.
David’s into pancakes at the moment! Ex: pictures of two pancake recipes, one with poorly formatted, dense written text, the other with the same text simplified and accompanied by visual images showing the necessary steps. The latter is more likely to make you feel you can make the pancakes.
Leveling up: if you put this information into a two-minute how-to video, we’re for suresies making pancakes!
Another ex: Do you want to drive to town or take transit? A printed, difficult-to-read, detailed train schedule is shown alongside a screen grab of just the needed slice of information in an app view. The app view is much simpler to understand and therefore more likely to induce the user to take transit.
Similarly, if people are shown two almost identical written statements, e.g. ‘Rosa Parks was born in 1912’ and ‘Rosa Parks was born in 1914’, but one is written in a slightly larger, coloured font, when asked which statement is true (sidebar: both are actually wrong – it was 1913), people typically choose the easier-to-read answer, as they assume it is true. Worse, if something rhymes (an apple a day keeps the doctor away), we also think it is more true.
This has consequences. We love certainty, and things that are easy to process feel more certain. E.g., if asked for a memory from when you were five, you may not remember or feel certain, but if asked what you had for breakfast, it’s easy to recall. Things that rhyme or use plain language, clear fonts, or imagery are easier to process and feel more certain.
This becomes important when discussing things people need to believe. America has a crisis where African Americans generally do not believe health information that comes from the government. A 2002 survey showed only 37% of African American respondents endorsed the statement ‘The government usually tells the truth about major health issues like HIV/AIDS.’ By 2016, that number had dropped to 18%. We could do another whole podcast on the many legitimate reasons African-Americans have concerns about government-issued health information, but the fact remains that this information can save lives. If it needs to rhyme or use plain language, so be it.
When David originally included this information in his book, his editor challenged him on it: ‘This is an interesting idea, but can you point to examples where it has played out?’ Short answer: yes.
This study showed that pregnant smokers and ex-smokers who received an intervention with materials written at a third-grade reading level were more likely to achieve abstinence than those who received standard materials. A similar study showed that a plain-language, pictogram-based intervention as part of medication counseling resulted in decreased dosing errors and improved adherence among multiethnic, low-socioeconomic-status caregivers administering medicines.
Click It or Ticket legislation: wear a seatbelt or get fined. Older drivers adhered; younger drivers did not. When the campaign was re-branded as ‘Click It or Ticket,’ seat belt use among young drivers improved dramatically. To put this in human terms: for every percentage point increase in seat belt use, 270 lives are saved. Extrapolating the math, this means roughly 4,000 lives saved, in part through rhyming!
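A quick sanity check on that arithmetic, using only the figures cited in the talk (the implied rise in belt use is my extrapolation, not a number from the talk):

```python
# Figures as cited in the talk.
lives_saved_per_point = 270   # lives saved per percentage-point rise in seat belt use
total_lives_saved = 4000      # total figure cited in the talk

# Working backwards: how big a rise in belt use does 4,000 lives imply?
implied_rise = total_lives_saved / lives_saved_per_point
print(round(implied_rise, 1))  # ~14.8 percentage points
```

So the 4,000-lives figure implies roughly a 15-point jump in seat belt use attributed to the rebranded campaign.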
The Framing Effect: In David’s view this is the most dangerous bias in the world. Ex: You’re at the store. One package says ‘Beef, 95 percent lean’ and another says ‘Beef, 5 percent fat.’ People choose the lean option. If you change the stakes: ‘Should we go to war in April or should we go to war in May?’ it’s much more problematic. We are not discussing whether to go to war in the first place, just when. Wars have been started over less.
If you are multilingual, you have a secret weapon against the framing effect. If you can think about the decision in your non-primary language, you are less likely to fall for the trap.
The framing effect can also be used for good. If you show an audience an image of an older person driving a car and ask ‘Should this person drive this car?’, you will get a policy discussion – it will only reveal which side various audience members are on. But show the same image with the question ‘How might this person drive this car?’ and you will get a design discussion – you will come away with several workable methods and solutions. Changing the words changes the frame, which changes the entire conversation. Zooming farther out: what if we ask ‘How might we do a better job of moving people around?’ That brings in broader conversations around public transport.
What about our own cognitive biases? These are the ones that can get our users in trouble. David used to have this idea of the Scientific Method: I have an idea, I test that idea, I write down what I did, a bunch of other people can try it, and if they get the same result it’s true – let’s make it a law and move on. There’s more to it than that.
Instead… I have an idea… Do all of the above steps but when you get to the point that you think you have proven something, then spend your time trying to prove yourself wrong. If I’m wrong, what else might be true? This is more rigorous, but it specifically aims to fight confirmation bias.
As designers, we tend to fall in love with our first idea. This makes it easy to leave good design on the table. Ex: I give you a sequence of numbers; you add one, and I tell you whether it fits the pattern. Given 2, 4, 6, we likely choose 8. It fits the pattern. So what is the pattern? We think it’s even numbers. Computer says no. What if 2, 4, 6, 7 also fits – the pattern being that each number is simply higher than the last? That is perhaps more elegant and interesting, but we never get that far, because we were happy with ‘even numbers’ as a solution.
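This 2-4-6 exercise (the classic Wason selection task) can be sketched in a few lines of Python – a minimal illustration, with function names of my own invention. A confirming test like 8 satisfies both the hidden rule and the ‘even numbers’ hypothesis, so it teaches us nothing; only a test designed to prove ourselves wrong, like 7, separates the two:

```python
# Hidden rule in the 2-4-6 task: the numbers simply ascend.
def fits_pattern(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

# Hypothesis most people form after seeing 2, 4, 6: "even numbers."
def even_hypothesis(seq):
    return all(n % 2 == 0 for n in seq)

# Confirmation bias: testing only sequences the hypothesis already predicts.
confirming = [2, 4, 6, 8]
print(fits_pattern(confirming), even_hypothesis(confirming))    # True True
# Both rules say yes -- we learn nothing that tells them apart.

# Trying to prove ourselves wrong: propose a number our hypothesis rejects.
disconfirming = [2, 4, 6, 7]
print(fits_pattern(disconfirming), even_hypothesis(disconfirming))  # True False
# The hidden rule accepts 7, so "even numbers" was the wrong pattern.
```

The disconfirming test is the ‘If I’m wrong, what else might be true?’ step in code form.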
There are tools out there to help us prevent this outcome. Let’s look at some, starting with red team / blue team. The idea is that a blue team does the initial research, but before they proceed, a red team comes in for a day and goes to war with them. Their job is to look for every hidden agenda, every more elegant solution, every potential cause of harm that the blue team may have missed. This approach is economical.
Speculative Design: See the show Black Mirror (and see Jack’s talk from week 1 of this conference!) – it’s The Twilight Zone for tech. Think of a near-future tech and tell a story about what might happen to it in human hands.
Speculative design is a real job. Superflux went to the UAE to figure out the future of energy. The core question: should we continue investing in fossil fuels or switch to renewables? Superflux modelled air quality over time if fossil fuel use continued, bottled that air, and had the UAE representatives breathe it. Ten years out, the air was unbearable. Outcome: the UAE announced plans to invest over $150 billion in renewables by 2050.
Assumption Audit: This is another tool, which allows you to get biases out on the table before you begin. Gather your team and ask:
- What identities does your team represent? – Self-identified, as far as you are comfortable, but consider intersectionality (gender, race, age, disability, income, location, neurodiversity, education, etc.)
- How might those identities influence the design/outcome of this experience/project?
- What identities are not represented?
- How might that absence of perspective influence the design/outcome of this experience/project?
- What might we do to include, honor, and give power to those perspectives in your design process?
Again, words matter. Include means include! Honor means pay them! Give power – particularly to those who will be impacted by your design – how can you find ways to give them power?
Deformation Professionnelle: This bias is around seeing the whole world through the lens of your job.
Ex: the paparazzi who ran Princess Diana off the road – doing a great job by the definition of job = getting photos; doing a lousy job as human beings.
Another charged example: Chief Ramsey, former Police Commissioner in Philadelphia, would ask officers, ‘What do you think your job is?’ Many replied: ‘To enforce the law.’ He then asked: ‘What if I told you your job is to protect civil rights?’ That encompasses law enforcement, but it is a much bigger job, because it forces you to treat people with dignity. NB: The slide used for this part of the talk (featuring an image of Black Lives Matter protestors with placards) has been in David’s presentation since 2017 but reads differently in 2020. It creates a lot more emotion now, but he keeps it in because now, more than ever, it is clear that the way we define our jobs is a matter of life and death.
That day, Chief Ramsey was telling his officers, ‘Your jobs are harder than you think.’ David is here to tell us: our jobs are harder than we think.
Our job is not simply to ‘design cool stuff.’ We must find a way to define our jobs that allows us to be more human to each other.
Some folks are already working on this. Mike Monteiro has created a designer’s code of ethics – a Hippocratic Oath for designers (first, do no harm).
Design Justice Network has come up with a set of 10 principles. Here are the first two:
- We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems
- We center the voices of those who are directly impacted by the outcomes of the design process.
If you take just those two, you’ve got your work cut out for you. This is the difference between user-centered design and shareholder-centered design. We often think we are practicing the former when in fact we are practicing the latter.
The Tarot Cards of Tech: Artefact Group created these cards as a tool that poses provocative questions about your design. Ex: ‘How might cultural habits change how your design is used, and how might your products change cultural habits?’ Think about how Twitter might have evolved differently had they stopped to ask this.
neveragain.tech: This is playing out in the world of software engineering too. The Never Again pledge was signed by a large number of engineers when they saw their software being used to hurt immigrants. They pointed to a long history of data being used to hurt vulnerable populations and vowed, if necessary, to do things like destroy data sets whose use is immoral.
Project Maven: Google’s battlefield AI. The same collective action is starting to play out in places like Google. The people working on Maven said they were not in this business to make weapons and threatened to leave. This led to Google walking away from a $250 million military contract. [NB: they then turned around and did much the same thing with Dragonfly, a censored search engine for China. These battles are still being fought.]
We must rapidly begin the shift from a ‘thing-oriented’ society to a ‘person-oriented’ society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered.
This is not a quote from a software guru at a TED talk. This is Dr. Martin Luther King, Jr.
MLK recognized this fifty years ago. So today, David asks us: How might we define our jobs in a way that allows us to be more human to each other? Thank you.