Designing Inclusive Products

Most of us start projects with good intentions—we want to make things welcoming, seamless, and maybe even fun to use. But too often, toxic cultures within tech result in products that have all sorts of biases embedded in them: “smart scales” that assume everyone wants to lose weight, form fields that fail for queer people, résumé-reading algorithms that are biased against women, image-recognition software that doesn’t work for people of color. As tech becomes increasingly central to our users’ days—and intertwined with their most intimate lives—we have more responsibility than ever to consider who could be harmed or left out by our decisions.

In this talk, we’ll take a hard look at how our industry’s culture—its lack of diversity, its “fail fast” ethos, its obsession with engagement, and its chronic underinvestment in understanding the humans it’s designing for—creates products that perpetuate bias, manipulate and harm users, undermine democracy, and ultimately wreak havoc. Then, we’ll talk about what we can do about it: how we can uncover assumptions in our work, vet product decisions against a broader range of people and situations, have difficult conversations with our teams and companies, and pursue a more ethical and inclusive way forward for our industry.

Sara Wachter-Boettcher – Designing Inclusive Products

A piece of design currently causing concern in America: the census will be asking people about their citizenship. This is a very touchy subject in the current political context. The Census Bureau’s Center for Survey Measurement pre-tested the wording and found that some Spanish-speaking respondents were afraid to provide information about the people living in the same house.

The problem for the census is that if people don’t provide real information, decisions that rely on it become skewed. That includes things like where public infrastructure is built, and even the way electoral boundaries are drawn and how many members of Congress are allocated to each state.

Census data is supposed to be private and confidential, but the US government may decide the country is “under threat” – at which point it can de-anonymise the data. That is how 1940 census data was used to identify and intern Japanese Americans during World War II.

Every design decision has consequences.

This can be simple. Google Maps added a feature that told you how many calories you might burn if you walked somewhere, and expressed that figure as a number of mini cupcakes. That raised a lot of questions, from the practical (which cupcakes? what’s a mini cupcake anyway?) to the much deeper ones… like do they realise how much shame this can trigger for someone who has had eating issues?

The issues included:

  • no way to turn it off
  • dangerous for people with eating disorders
  • ‘feels shamey’
  • “average” calorie counts are wildly inaccurate
  • cupcake is not a useful metric
    (see slide for the rest, good list)

Because someone spent time tweeting about the issues with the feature, it was pulled after just three hours. Google probably spent more than three hours just designing the cupcake image.

As an industry, we have normalised a chronic underinvestment in inclusion and harm prevention.

Amnesty International (March 2018) declared that Twitter is failing in its responsibility to protect human rights, particularly women’s rights. Jack Dorsey responded that they were “not proud” of how they have been failing on this. But Twitter has been admitting it “sucked at” dealing with trolling and abuse for many years.

If you really want to change something, you change it.

James Bridle published an article with examples of nasty content being targeted at children – knockoff Peppa Pig videos on YouTube, tagged with “keyword salad” so they come up in children’s searches. The videos autoplay based on other content, and suddenly kids are watching psychologically disturbing material. And it’s designed so that a parent may not realise what their child is watching.

Much of the discussion treats issues like this as a content-moderation problem, but it’s also an issue of product design.

Consider that Twitter was built by and for four young guys in Silicon Valley, who could safely share a lot of information about themselves without experiencing negative consequences. As their user base grew they quickly heard from users who were experiencing abuse on the platform, but they were very slow to respond.

The search for hockey-stick growth tends to come at the expense of ethical concerns. Mark Zuckerberg has been facing congressional hearings in the past few days, due to Facebook’s failure to protect user data in the Cambridge Analytica scandal. The apologies Zuckerberg has published read remarkably like those published by Twitter – they didn’t do enough, they were too optimistic, too focused on the good.

But Facebook used to pride itself on moving fast – Move Fast And Break Things. They prized progress and speed over slowing down. But if you don’t slow down and think, you don’t even know what you’re breaking.

Do people remember Tay.ai? It was Microsoft’s attempt to create an AI that talked to teens and talked like teens, but trolls immediately set about training it with extreme content. They tweeted so much abuse at the AI that within 24 hours it was tweeting aggressively anti-Semitic content itself.

Withings built notifications and encouragement about losing weight into their system, but didn’t handle the scenario where someone was tracking their toddler’s health stats. Toddlers gain weight because they are growing! It also congratulated a new mother for “hitting a new low weight” just after giving birth.

The year Eric Meyer lost his daughter, he received a Facebook “year in review” notification with her photo. The photo he had posted after she died was algorithmically chosen as the “most popular” photo and surrounded with people dancing, balloons and streamers.

(For more on this story, see Inadvertent Algorithmic Cruelty and Eric & Sara’s book Design for Real Life)

Facebook has apologised for the mistake, but has it really fixed things? It later used a screenshot of a death threat to advertise Instagram (“your friends are using Instagram”, with an example post).

Tumblr sent someone a push notification “Beep beep! #neo-nazis is here”, supposedly because they had read articles about the topic during some research. When queried about it, Tumblr admitted that they were worried about potential problems with the feature, but launched it “because it performs really well”.

Inclusion may be inconvenient to your business model.

We are creating things in the context of biased norms.

FaceApp was a selfie modifier that created alternative versions of your photo – older, younger and “hotter”. But they’d seeded the underlying neural network for “hotter” mostly with images of white people. So people of colour found that, according to the app, being “hot” meant having lighter skin.

Google Photos had a huge blow-up when it accidentally categorised photos of people of colour as ‘gorillas’, a deeply insulting term. As part of their apology they talked about making the algorithms much better at recognising images of people of colour. The problem is that this is a tacit admission that they were happy to launch a product which worked much better for white people than for anyone else.

Are we ok with this? Are we ok with products that are only tested for white people? This kind of bias is baked into all kinds of products.

AI and natural language processing are easily skewed by their seed data, and in many cases people are working with data sets they didn’t create themselves. This is how you end up with image-processing models associating kitchen implements with women.
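
As a hedged illustration of how seed data carries bias into a model (this example is not from the talk): pretrained word embeddings learned from large news corpora will happily reproduce the gendered associations baked into that text. A minimal Python sketch, assuming the gensim library and the publicly downloadable word2vec-google-news-300 vectors:

```python
# A minimal sketch (not from the talk), assuming gensim is installed and an
# internet connection is available to download the public word2vec vectors.
import gensim.downloader as api

# Word vectors trained on roughly 100 billion words of Google News text.
model = api.load("word2vec-google-news-300")

# Analogy probe: "man is to programmer as woman is to ...?"
# The answers reflect associations present in the training corpus,
# not any ground truth about people.
print(model.most_similar(positive=["programmer", "woman"],
                         negative=["man"], topn=5))

# The same kind of skew the talk describes for image models typically shows
# up here: "kitchen" tends to sit closer to female-coded words than to
# male-coded ones in these vectors.
print(model.similarity("kitchen", "woman"))
print(model.similarity("kitchen", "man"))
```

The point of the sketch is that nobody wrote a rule linking “kitchen” to “woman”; the association rides in with the training data, which is why asking where the data came from matters.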

Silicon Valley is full of people who are great at tech but out of their depth when it comes to social impact. The infamous “Google Memo” actively promoted “de-emphasising empathy”, claiming that being emotionally unengaged helps people make better decisions.

Whose job is it to define “good”? Whose job is it to understand historical context, to anticipate unintended consequences? Should there be a specialised team?

We need to uncover assumptions about normality in our work. Our assumptions say a lot about us, but very little about other people in the world.

The old dad joke applies: when you make an assumption, you make an ass out of u and me.

Facebook had deeply assumed that people had had a good year, full of positive experiences they would want to be reminded of at the end of it. They did not anticipate bereavements, or someone’s apartment burning down.

We need to ask questions like “where did the data come from?” and what biases it may contain. Ask who decides what “good” looks like, and who things actually work for.

We need to agree on ethical values in our work.

We need to design to include. Our work has power – this is not a new realisation, but we need to think about how that power is used.

Do we want to keep chasing hockey stick growth and unicorn valuations? Or do we want to make the world a better place?

@sara_ann_marie | rareunion.com | sarawb.com