- Our next speaker is Caroline Sinders.
She used to be an IBM research fellow with the Watson programme, and now she's at BuzzFeed.
And she has a fantastic job, because she has a fellowship at BuzzFeed, which means she can work on her topic all year round as her full-time job, which really sounds fantastic. She will speak about how to design for civility in an age where consent is really, really hard to achieve. And we know, through the recent events, again, how heated and how aggressive debate can become. And she will speak about how the systems that we design can facilitate consent and also leave space for the people that we perceive as being in the wrong.
So let's welcome Caroline.
(audience applauds) - Hi, everyone, I'm Caroline Sinders.
And this is my talk Designing Civility.
So part of my job is I study people on the Internet. I like to think that I also study how people throw shade on the Internet.
(audience laughs) Prior to September, I'd spent almost two years at IBM Watson, where I worked as a user researcher in natural language processing and helping design chat robot software.
Effectively, how do you make machines talk, and how do you help machines talk to people and sound like people, which is incredibly hard. Machines are not people at all.
(chuckles) They sound very mechanical.
And I found out, in September, that I won a fellowship called the Open Lab fellowship.
So BuzzFeed, this is our second year, they have a fellowship called the Open Lab, where creative technologists are brought in to BuzzFeed and are given a provocation.
They can solve any problem they want for a year. My provocation was: can you use design and machine learning to mitigate online harassment? I'm also a fellow at Eyebeam.
Eyebeam gets to pick one person out of the group to be their fellow as well.
So Eyebeam is an arts and technology centre based in New York City.
It puts creating the future of technology in the hands of artists.
So for the past three weeks, I've been travelling outside the United States. I started in China.
I made my way from Shanghai to Hong Kong.
And then I made my way to Tokyo.
And then I've made my way here.
Along the way, I've been talking to journalists to try to understand the different kinds of cultural implications of online harassment. And what I found really fascinating, and I'm sure this isn't that fascinating to people in the room, is the rise of LINE and WhatsApp as the centre of where a lot of our open communal conversation is taking place. And with the rise of LINE, WeChat and WhatsApp, especially with the purchase of WhatsApp by Facebook, messaging is becoming the primary place where we're engaging with people, from strangers to non-strangers.
And I think this is really important.
WhatsApp gets around 900 million monthly users, and by some it's considered to be the fastest-emerging, most popular platform, overtaking social media. But I think WhatsApp and messaging will still have the same problems that social media does. In particular, I think of social media as a mixed emotional identity space, meaning you can use your real name.
You can use an alias.
It can be used for personal reasons.
It can be used for professional reasons.
And often time, all of those different characteristics are coming into play in one profile.
I use my real name across all different profiles, because I work in technology.
And it's easy to find me that way.
But prior to using my real name, I used a series of really set screen names.
So it's really easy to find them. If you Google Cellar Paper, I probably have a really long trail.
I would tell you my LiveJournal name, but I'm not gonna do that.
(audience laughs) But you could probably find it if you spent 10 minutes Googling me and just going all the way back.
And if you knew where I lived and the high school I went to, it'd be even easier. And all that stuff is on Facebook.
So you could actually, in the course of five minutes, all of you could find my two very public, my two very high school LiveJournals.
So we live online.
We live in a very mixed way between mixed emotions, mixed identity and mixed kinds of conversations. And I think messaging will have the same problems that social media does with online harassment, particularly with the rise of bots, e-commerce, mixed identity spaces, and with almost no security filtering, beyond public and private.
So I like to ask this question.
What do you think was invented first, SMS or social media? So who thinks it was SMS? Okay, who thinks it was social media? Okay, not all of you voted, much like Americans. (audience laughs) So I'm gonna tell you a secret.
IRC, which I consider the grandmother of social media, was invented in 1988 at the University of Oulu in Finland, and by 1989, there were over 40 IRC servers. SMS was used for the first time in 1992.
By 1990, we had our first IRC fight.
So we've been trying to solve online harassment for a very long time inside of social media. And we've had social media longer than we've had messaging. I want you to think about that for a second. The way we talk online is actually pretty old, and it predates private messaging and predates SMS. So public interaction, public talking, has been around historically longer.
So I think about this a lot now.
This is an addition to the talk that I made last evening after watching the news (laughs).
(audience laughs) Because I think it's really important.
And I think it's really important to think about right now. I actually can't really talk a lot about the election, 'cause I am a fellow at BuzzFeed.
But I want you to know that I think a lot about how to create a more civil Internet when there's such civic unrest.
And I truly believe that Trump supporters have a right to be on the Internet, and they have a right to talk, because I believe that I have a right to be in that space, too.
So how do you start to create and design a civil Internet when people disagree with each other so intensely? And how do you make conversational spaces that fit a wide variety of conversations when there is such, such civic unrest? So I think design can affect online behaviour. And the reason I think this is for the past two years, I've been doing a deep ethnographic deep dive into Gamergate.
How many people know what Gamergate is? Okay, cool.
So depending upon who you ask, Gamergate is a harassment group, a political activist group, or a fandom.
If you ask Gamergate, they consider themselves political activists. They are fighting to hold on to an idea of games that is so intrinsic to their personality.
They are also a fandom.
If you ask any victim of Gamergate, if you ask any person that has been studying social media, and then online harassment, they will tell you that Gamergate is also a harassment group.
What I actually found really fascinating with Gamergate was not that they are a political activist group and a group that engages in harassment.
What I found the most fascinating is how Gamergate uses Reddit and uses 4chan and 8chan and Twitter to engage in acts that look like political activist action, as well as the kinds of conversations they are having. Now I don't think that design can particularly mitigate misogyny or racism. I think those are problems that are deeply entrenched in our society.
What I find fascinating is how people use social media to amplify voices.
And what I find the most fascinating is when you look at Gamergate in particular, is they take the style of Reddit, the conversational style of Reddit, and they apply it to Twitter.
And that's not actually that unusual.
And so what do I mean by that? Let me unpack that for a second.
So Reddit is a board based style of communication, meaning you create a board.
You post on a board.
So if you're posting something, you're threading. You're adding to the conversation.
So you're adding to a general conversation, or adding to a hyperspecific conversation, meaning if you add on to a conversation, or you start a board, you're wanting to engage in conversation.
The idea is that you're opening it up to public discourse. That's actually really different with Twitter, even if you exist on Twitter publicly.
Twitter, you create a post.
It's not exactly designed for everyone.
It can be designed for you or for your followers, and depending upon your follower count, the expectations users have around responses differ. If you have a low follower count, it feels much more private, even though you're completely public.
And that's sort of a flaw in the design.
Or that can be rather a feature of the design, depending upon how you look at it.
But Gamergate takes the assumption that if you exist on Twitter, and you're not following them, but if you exist on Twitter publicly, you must want to engage with the entire public, meaning you must want to engage with every single user on Twitter, otherwise why would you be public? And I think we can all say safely in this room that is not how people use Twitter.
And that is not the implication of Twitter. That is not the assumption, right.
Being public and wanting to talk to every single person, that seems pretty ridiculous when you consider that there's 250 million users on Twitter, right. That seems ridiculous.
But I spent two years looking at how Gamergate talks. And that's what I find the most fascinating. And I've spent two years also talking to Gamergate as well as talking to victims of Gamergate. One of the things I wonder when it comes to designing conversational spaces is what can we learn from fandoms and activism in social media to apply to harassment? So I want you to think about this for a second. Fandoms and activism and online harassment create very similar emotional spaces inside the infrastructure of social media, and how a Justin Bieber fan uses Twitter is not different at all than how an Anonymous supporter does or how a Trump fan does.
And I want you to think about that.
So what's the difference between Wikileaks, for example, doxing innocent Turkish citizens versus Gamergate doxing their victims?
One is considered a political activist group that was trying to be the next stage in whistle-blowing. So should they get a pass? What about Gamergate? And how do we start to build large autonomous systems that understand and can recognise this behaviour, and can they differentiate each other? If a Bieber fan uses Twitter the same way that Anonymous does, but they're talking about incredibly different things, how do we then design systems of conversation that allow for those differences? Because we're talking about amplification.
Twitter, for all of its design, actually feels much more like a two-way conversation.
It's one-to-many or many-to-one, but it's a very condensed pipeline, right.
So what I'm trying to get at is harassment is incredibly hard to solve for really big systems.
So social media is not just media.
It's a communication tool.
But it's very opaque in what is public or private. And part of harassment is also this nebulousness around action and the inability for a user to mitigate or control their reach or who can see what they write. I'm gonna give you an example.
So this is some network analysis I've been doing. Does anyone know who Martin Shkreli is? Okay, for lack of a better word, he is like this big jerkface inside the United States. He purchased an antiparasitic drug that is used by HIV patients, and he hiked the price up so intensely.
He hiked it up to about like 700% of what it used to cost. He has probably over 100,000 followers on Twitter. So he started attacking an engineer at Medium named Kelly Ellis.
And what she's pointing out here is that Martin is engaging in what is called a dogpile, or he's encouraging it. Okay, so a dogpile is when one user is piled on by anywhere from 10 to hundreds of users.
And it's so intense, the amount of interaction they're receiving, the amount of tweets, that the only thing they can do is actually walk away. There isn't enough time to mute and block everyone, because you have to go tweet by tweet to report, right. So it's a very, very slow, very manual, very labor-intensive process.
But I like to say what Martin did was actually he engaged in what's called a hate retweet.
There's this function inside of Twitter where you can take a tweet, and you can quote it. And it's embedded in your tweet, and you can say something. What's fascinating here is Martin says at the top, he says, to clarify, "To be clear I'm directly making fun "of this woman for tweeting at me.
"No one else should do that.
"She is three pounds too fat." So if you were using any kind of sentiment analysis, I don't think this would rate as harassment. But I'm an ethnographer.
And ethnographic research encourages us to go and look at the context of what is happening.
So could you build a contextual machine learning system? In this case, you would need an ethnographer to guide that. What's important here is to notice the dynamics of who is being put on blast, if you will.
Kelly has a lot fewer followers than he does. Part of what's fascinating about this interaction is that Martin knows exactly what he's doing.
He knows that this is going to send his followers to her account and to her page.
And all of a sudden she's gonna be bombarded by hate. And he doesn't have to do anything.
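To make that contextual machine learning question from a moment ago concrete, here is a minimal sketch, in Python, of the kinds of signals an ethnographer might tell a system to look at: the follower asymmetry between the two accounts, the quote-tweet mechanic, and a sudden spike in replies to the smaller account. Every field name and threshold here is hypothetical, not anything Twitter actually exposes.

```python
# A minimal sketch (not Twitter's API) of flagging a "hate retweet" dogpile
# from contextual signals rather than from the words alone.
from dataclasses import dataclass

@dataclass
class QuoteTweetEvent:
    author_followers: int                 # e.g. an account with 100,000+ followers
    target_followers: int                 # the person being quoted
    is_quote_tweet: bool
    replies_to_target_last_hour: int
    target_baseline_replies_per_hour: float

def looks_like_dogpile(e: QuoteTweetEvent) -> bool:
    """Flag the interaction when a much larger account quote-tweets a much
    smaller one and the smaller account's reply volume suddenly spikes."""
    follower_asymmetry = e.author_followers / max(e.target_followers, 1)
    reply_spike = e.replies_to_target_last_hour / max(e.target_baseline_replies_per_hour, 1.0)
    return e.is_quote_tweet and follower_asymmetry > 100 and reply_spike > 10

# Sentiment analysis of the quoted text alone would miss this entirely:
# the tweet can read as innocuous while the surrounding dynamics are the abuse.
```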
And what's fascinating about this is there's no policy inside of Twitter that directly addresses this kind of interaction, right. Like, there isn't a rule that's like well, you probably shouldn't be a dick to that person and send your 200,000 followers to someone with 10, 'cause you quote tweet something stupid that they've said. Because even outlining that right now, that doesn't sound like good policy.
But we're in this space where tools can be used and misused, and the interaction design of Twitter can be really misused to amplify people at the bottom of a totem pole, at the bottom of a social ecosystem.
And there's nothing that Kelly can do.
She can't turn off replies on this thread.
She can't remove her tweet.
She can't block any of his supporters.
Because even if she blocks Martin, he can still quote tweet her using the URL, and his followers can still respond.
And that's also a flaw within blocking, right. But the reason I point that out, though, is that there isn't really necessarily a way to describe what he's exactly doing, other than us trying to make up new words to point this out.
And it's important to show that, because Kelly is trying to then document what exactly is happening to her.
And all of a sudden, it's a lot harder to say well, he's doing this and all these people are jumping on me, because his Tweet looks really innocuous to the outside viewer.
What's really fascinating is that in the United States, freedom of speech is a big deal when it comes to online harassment. And I don't think this is a freedom of speech issue. I think this is a design issue.
And like I said earlier, Kelly has no tools at her disposal, right now, to sort of handle or mitigate this situation. And it could take days, weeks or months for an abuse report to be responded to inside of Twitter. And she just has to wait.
And this freedom of speech issue is something that comes up a lot in the United States.
Over and over again, we start talking about online harassment.
We have a public discourse around it.
And this consistently came up against opposition from the product team. This is something that BuzzFeed wrote about, specifically at Twitter.
I'm gonna read some of it to you.
Something that consistently came up inside of the product team was this idea of content-based filters, preventing abusive tweets based on keywords, over context-based prevention, meaning identifying and stopping harassment based on the accounts involved and the subject matter. So in one instance, someone inside of Twitter argued that the company's product safety team had developed an algorithm for filtering out the word cunt. Well, in the United States, it's not a great word, but that's not true of the Commonwealth at all. Cunt is way too broad of a word, and it would censor accounts, especially inside the UK. So if you try to focus on keywords and the numbers of blocks and unfollows, you end up with an algorithm that doesn't really have the kind of precision that you need.
But what's fascinating is that there is no talk of how do you allow users to start to create and define the kinds of keywords they would want to block themselves or the kinds of actions they could use to filter out this kind of behaviour.
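As a rough illustration, here is a minimal sketch of a user-defined keyword filter, as opposed to one blocklist chosen by the platform for everyone. The function and the word list are hypothetical.

```python
# A minimal sketch of a per-user keyword filter. Everything here is hypothetical.
import re

def build_filter(blocked_words):
    # Word-boundary matching so that, say, "Scunthorpe" isn't caught by "cunt",
    # one of the classic failure modes of naive substring filters.
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in blocked_words) + r")\b",
        re.IGNORECASE,
    )
    def should_hide(text: str) -> bool:
        return bool(pattern.search(text))
    return should_hide

# Each user supplies their own list; the platform never has to pick one
# "correct" vocabulary for the whole world.
hide_for_me = build_filter(["slur_a", "slur_b"])
print(hide_for_me("some reply containing slur_a"))  # True
```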
And Twitter's ex-CEO, Dick Costolo, said in an exit interview in the Guardian, "I will directly say this.
"I think that regulation is a threat to free speech." And the regulation he's referring to specifically is anti-harassment measures. Anti-harassment measures that he actually refused to engage in when he was CEO of Twitter.
So I know it sounds like I'm talking a lot about Internet fighting, but I actually love the Internet. And I grew up on 4chan.
And I grew up knowing about Usenet and pretending to be way older than I was in AOL chat rooms, which I'm sure horrifies my mother.
But I really, really believe in the Internet. And I believe that we're allowed to fight on the Internet. And I believe people should engage in this kind of interaction.
But I think we need to start to think about how do we design tools around emotional trauma and emotional data. So this is an exercise I like to walk a lot of different social media companies through. And I've worked with a fair amount of them, especially in Silicon Valley.
So don't add any story to this photo.
Just imagine it's four people, and that's it, right. It is literally four heads inside of a really nondescript blue bucket.
So let's say someone asked to remove the tag. You can totally do that on Facebook now, right. That's great.
And then someone asked to remove the photo. Okay, they actually can't do that right now. The only person that posted the photo can.
But I think it's important to start to ask why. And now we're rebuilding context into the photo. One thing I learned from Gamergate, and specifically interviewing victims of Gamergate, was this idea of being uncomfortable.
So someone would say, "I feel really uncomfortable." It was my job as a researcher to ask, "Well, why? "Why do you feel uncomfortable?" And all of a sudden, you get a lot of context, right. I don't like the photo is a very basic thing around wanting to take a photo down.
And oftentimes Facebook will say something now. But uncomfortable can actually mean a lot of different things.
I'm afraid to lose my job.
I'm afraid to upset my family.
I'm afraid to upset my peers, or I feel unsafe. So now let's build context back into the photo. What if it's four women drinking alcohol, right? And someone's afraid to upset their family. What should we do with a photograph? What if it's four women at a gay bar, and someone hasn't come out yet, right? And they're afraid to upset their peers? And right now in the political climate we live in, these are really important considerations to take into account.
What does safety look like inside of design? And so, if these are a series of radio dials inside of a UI that's for a reporting mechanism, all of a sudden, I feel uncomfortable takes on a completely different meaning than I just don't like the photo.
All of a sudden, we're having a real conversation about people's safety.
So what does it look like to start to distil safety and emotions inside to a series of widespread design choices that hit a variety of different users, that hit a variety of different kind of user personas? So an option could be image hidden from all extended networks.
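As a sketch of what those radio dials could drive behind the interface, here is a hypothetical mapping from the reason a person selects to the mitigation the system applies. The reason and action names are illustrative only, not any platform's actual schema.

```python
# A minimal sketch of the radio-dial idea: the reason someone gives for
# reporting a photo maps to a different mitigation, instead of one generic
# "I don't like this photo" path. All names here are hypothetical.
REASONS_TO_ACTIONS = {
    "i_dont_like_it":         ["remove_my_tag"],
    "afraid_to_lose_my_job":  ["remove_my_tag", "hide_from_extended_network"],
    "afraid_to_upset_family": ["remove_my_tag", "hide_from_extended_network"],
    "i_feel_unsafe":          ["hide_from_everyone_but_poster", "escalate_to_human_review"],
}

def mitigations_for(reason: str) -> list:
    # Default to the most protective option when the reason is unknown.
    return REASONS_TO_ACTIONS.get(reason, ["escalate_to_human_review"])

print(mitigations_for("i_feel_unsafe"))
```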
Look, this example is not perfect at all.
And this is always the first example I start with, because it's decidedly not perfect, right.
It's a good starting place.
An even better example of how this is not perfect in this scenario is this counterpoint.
How could this reporting feature be used against political activists, right? How could Gamergate use this against Anita Sarkeesian, one of their main targets, right? So all of a sudden, every photo and every post Anita Sarkeesian puts on social media is immediately flagged as inappropriate content, right.
And that happens with everything she posts. So all of a sudden the system is a lot harder to get right when people are trying to co-opt it against victims of harassment.
But I think it's still a good thinking spot. It's a good starting place.
It's a good design thinking exercise when it comes to thinking about the ways in which conversation exists inside of systems as big as Facebook.
The changes we bring about are not really neutral in these spaces. It's not solving harassment.
It's thinking about every interaction as a very nuanced thing when it's distilled into design. And how does this affect policy? I mean, let's look at Turkey, the country, and Twitter, for example.
Turkey just went through a coup, and one of their leaders, Erdogan, reached out to Twitter to say, "Hey, I think these journalists "are a part of the coup." These are actually journalists that were just critical of his regime.
Twitter removed their accounts.
All of a sudden, we do need to talk about design and policy. What kind of policy do we design, especially when, in my case, a white Westerner, a white American, is the one designing it? How does that all of a sudden get distilled into places that have great political uprising, great political upheaval? All of a sudden, the code and policy I'm making has a lot more political ramifications than I could ever consider, 'cause I'm not designing for people that look like me.
I'm designing for the world.
So one thing to think about is every post is an individual ecosystem.
I like to say that there are four different kinds of conversation, and I know that sounds really narrow. So there's the town hall, which is this situation. I'm actually not sure if this is being live streamed, but if it was, then that only further reinforces the example. So I can see all of you.
There are actually probably some people I can't see. But I don't know all of you.
I have no idea if you're recording me.
I have no idea if you're live tweeting this. And I have no idea who you're sharing this with, right. Some of you could have phones open and dialled to outside entities.
I have no idea.
This could be on Facebook Live.
So the idea of the town hall is you actually aren't really aware of how many people are listening to you, but you understand it's incredibly big and public. And that's what Twitter effectively is.
The next kind of conversation is the front porch. So it's a couple of people sitting in an open space that has sort of the illusion of privacy, but people can walk by on the street.
So it's a little less public but still fairly public. The next one is the living room.
You can see everyone around you.
You're having a series of intimate conversations, but it's not the most intimate conversation. And the most intimate conversation is what I like to call the bedroom.
So all of our conversations are incredibly complex and incredibly nuanced, because humans are complex and nuanced.
But our tools for communicating need to be as complex as our conversations.
And currently the large tools we're using treat conversation as though we're in a town hall, and they're designing it to pretend like we're on the front porch.
So what if Twitter could build in more context for users, right? What if we could say hey, every time you post, here's the amount of followers that are looking at a post? Here's the amount of non-followers that are looking at it. Would that change the way in which we talk? Maybe it would, maybe it wouldn't, but it would still give users a lot of understanding as to here is who's actually engaging with your content. What if we went even further, and we started to allow people to denote specific kinds of conversations per post? Not full stop their entire profile, right, every post must be followers only, but what if they're allowed to pick and choose the kinds of conversations they wanted to amplify or de-amplify, the kind of content they wanted to share.
What if we could let people be semi-private in public spaces? And if this seems like a crazy design, it's not, because Periscope literally does this.
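Here is a minimal sketch of what per-post conversation settings could look like as a data structure, assuming hypothetical scope names rather than any real platform's API; the Periscope-style per-broadcast control is the model.

```python
# A minimal sketch of per-post audience settings (the "semi-private in public"
# idea) rather than a single account-wide public/private switch.
from dataclasses import dataclass
from enum import Enum

class ReplyScope(Enum):
    EVERYONE = "everyone"
    FOLLOWERS = "followers"
    FOLLOWERS_OF_FOLLOWERS = "followers_of_followers"
    NOBODY = "nobody"

@dataclass
class Post:
    text: str
    reply_scope: ReplyScope = ReplyScope.EVERYONE
    show_audience_breakdown: bool = True  # e.g. "seen by 40 followers, 300 non-followers"

# One post can invite the whole town hall...
announcement = Post("New essay is up!", ReplyScope.EVERYONE)
# ...while the next stays on the front porch, without locking the whole account.
vent = Post("Rough day.", ReplyScope.FOLLOWERS)
```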
So here's a really janky, old wireframe I made two years ago, when I first started talking to people affected by Gamergate.
And the reason I bring this up is specifically this. At this conference called The Conference, this past August in Sweden, Clive Thompson, a writer, had talked about the beauty of building and tailoring, specifically inside of spaces like Minecraft and how there's this incredibly young generation that all of a sudden is learning all of these great customizable tools inside of a space, and they're treating Minecraft like a social media spot. They're talking to their friends.
And they're having to work things out.
And they're getting to build the kinds of communication they want.
And his point was that these kids are gonna be so unhappy when they start to enter social media and the kinds of social media we've designed, because there isn't any customization.
And I thought about this idea of customization a lot. But how do you design customization for all different kinds of users, for all different kinds of personas? And I started to think about: what if you could just make broader filtering that would fit a variety of user needs?
There's a lot here, but I urge you to consider that users need a lot of filtering options, because the billions of social media users are incredibly different with incredibly different needs, and they communicate very differently across different topics, but we're all having to use the same tool.
Users need agency.
So I'm gonna walk you through what some of these are. The first one says allow followers of followers to tweet at you, and that's pulling from this idea from PGP or encrypted email that your circle of friends is probably a pretty safe space to be in. The next two come directly from Gamergate.
So it says do not allow accounts with less than, a user can add in the number, followers to tweet at me, mention me or follow me.
The next one says do not allow accounts less than a certain number of months old, and again the user enters the number, to tweet at me, mention me or follow me.
One of Gamergate's main tactics when engaging in harassment would be to create a new account the second they were blocked.
So that would mean that right now if people still held on to these secondary and tertiary and fourth, fifth, sixth different accounts they've created, some of these accounts would probably be at this point at least eight months old, right. Some of these would be a couple days old.
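Here is a minimal sketch of those wireframe rules expressed as a filter over incoming mentions. The Account fields and the example thresholds are hypothetical.

```python
# A minimal sketch of the wireframe's filter rules as code, applied to
# incoming mentions. Field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    account_age_months: int
    follows_someone_i_follow: bool

@dataclass
class FilterSettings:
    allow_followers_of_followers_only: bool = False
    min_followers: int = 0           # "do not allow accounts with less than N followers"
    min_account_age_months: int = 0  # "do not allow accounts less than N months old"

def mention_allowed(sender: Account, settings: FilterSettings) -> bool:
    if settings.allow_followers_of_followers_only and not sender.follows_someone_i_follow:
        return False
    if sender.followers < settings.min_followers:
        return False
    if sender.account_age_months < settings.min_account_age_months:
        return False
    return True

# e.g. filter out the brand-new sockpuppet accounts described above:
my_settings = FilterSettings(min_account_age_months=1)
print(mention_allowed(Account(3, 0, False), my_settings))  # False
```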
We all know, and I've read, that there have been a lot of studies done showing that egg accounts are troll accounts, but you can't actually filter those accounts out right now. But what if you could? What if you wanted to filter out all users that had less than, I don't know, 2,000 followers, 'cause you really only want to talk to your version of the cream of the crop? Would that be wrong? No, I mean, I think users should be able to do that. In my case, I'd want to filter out probably people that have accounts less than a month old, because I am sort of tired of talking to Gamergate trolls. (chuckles) Okay, great, so what about other communication spaces, 'cause I've spent a really long time talking about social media.
And I think these ideas I have specifically can be applied to journalism, which is one of the reasons that I'm at BuzzFeed.
And I've started to think about that.
How do you take this idea of security filtering, of reachability, of trying to design different kinds of conversation, how do you apply it into the commenting section? So what you think about the different kinds of conversation people have, the hard, the mean, the awful, the whimsical, the funny and the tragic? And we have these in a variety of different public settings. And some of those public settings are newspapers. And I read the comments.
I read every comment.
And earlier this summer, I spent six weeks working with the Coral Project, a Mozilla Foundation initiative to make the commenting sections of newspapers better. We sat inside the New York Times, and we worked specifically with the Washington Post. And they asked me: for six weeks, can you take your Twitter ideas, your ideas of social media consent, and try to translate them into some wireframes for us? Can you help us design better conversations in the commenting section? And that is such a big ask.
But I think it's possible with the Coral Project, 'cause they've spent nearly two years talking to journalists and readers, and they have amassed such a large amount of ethnographic data.
And while I was there, we ran a survey specifically talking to freelancers and journalists about how their newsrooms handled online harassment, specifically when journalists were the target of harassment, whether those policies were published, and what that looked like. The Coral Project works specifically with designers and content creators and producers that not only have backgrounds in online content publishing platforms but also journalistic sites, and they all have design backgrounds.
But also every person that works at the Coral Project writes.
And they've written for blogs and news sites. They are publishing nerds, and they are the best people equipped to solve this problem.
And so here are some of the wireframes that I made for the Washington Post.
And one of the things I started to think about is how do you start to weed out trolling comments specifically? Now this doesn't mean allowing people to create filter bubbles.
And this doesn't mean allowing people to block other users. I should go back: one of the things we had talked about was how do you also let users filter out certain words? A lot of journalists who are people of colour would specifically get racial slurs. How do you make it harder for people to engage in that kind of conversation inside of their articles without having to resort to the labour of deleting those comments? You create a word filter.
And that's something that's pretty lightweight and easy to do.
So one of the things we had talked about in posting specifically is what are small barriers to entry that would sort of force people to think about what they're posting or make it harder for a trolling comment written off the cuff to exist? So you have the word filter on or off, and then you can start to select the different kind of commenting style.
So logged in comment view means you actually have to be logged in to see the comment the person's about to write, meaning you can't just go to the page and not be logged in.
One of the reasons this works is the Washington Post really wants people to actually talk to each other.
They also want you to be logged in.
Another one is replies off, so you could post something where people can't reply.
But to do that, you actually have to go and then re-read your comment.
Another section says you're sending this directly to the editors, and again, you have to re-sign in, with a caveat that you have to obey all the community rules. And the last one is just everyone.
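As a rough sketch, those posting options could be captured in a single per-comment configuration object. The names below paraphrase the wireframes and are not the Coral Project's or the Washington Post's actual schema.

```python
# A minimal sketch of the commenting options described above as one config
# object. Option names are hypothetical paraphrases of the wireframes.
from dataclasses import dataclass
from enum import Enum

class CommentAudience(Enum):
    EVERYONE = "everyone"
    LOGGED_IN_ONLY = "logged_in_only"  # readers must sign in to see the comment
    EDITORS_ONLY = "editors_only"      # goes straight to the editors

@dataclass
class CommentOptions:
    word_filter_on: bool = True
    audience: CommentAudience = CommentAudience.EVERYONE
    replies_enabled: bool = True
    require_reread_before_post: bool = True  # the small barrier to off-the-cuff trolling

options = CommentOptions(audience=CommentAudience.LOGGED_IN_ONLY, replies_enabled=False)
```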
And then, after you've posted, you can start to look at all the different kinds of comments. And so I include this slide, 'cause I also try to reinforce how prevalent online harassment is, 'cause about once a week, I get one of these. They're almost always from women wanting to know the steps of what to do when they're being attacked.
And it's always this long, sustained moment of panic. And I'm having to walk someone through it.
And it's hard.
And our systems for reporting are incredibly difficult, and they ask a lot from people when they have to report. And that's so hard when you're in the midst of emotional turmoil.
And I feel like this shouldn't be the barrier to entry for wanting to exist in public, for wanting to exist on social media, knowing that you're going to face harassment, especially if you're a woman or a woman of colour. So I mentioned earlier that I love the Internet, and I grew up reading 4chan.
And I fundamentally believe people are allowed to be dicks on the Internet.
I just think that if you're a victim of harassment, you shouldn't have to hear it.
And so how do we start to think about that? How do we start to distil all the wonders of the Internet, all of the cat GIFs and also think about the fact that we're engaging in very intense turmoil in political discourse as well as just harassment? And how do we make sure that we design the Internet to allow for all these expansive kinds of conversations. People are allowed to be awful on the Internet, but we need to give the victims of that a way to mitigate what they're receiving.
People should have a choice in the kinds of conversations they engage in.
This is something I often tell technologists. Design isn't just a skill; it's a practice and a language in and of itself. To start solving problems around harassment, we also need to think about creating spaces for human connection, and design needs to be at the forefront of that conversation.
This is not just a policy conversation.
Because design is the thing that distils policy and code into a digestible, interactable interface for users. Design is the thing that explains what code and policy are outlining.
And when it comes to harassment, design is incredibly important.
And design is incredibly, incredibly political. So it's important to think about who gets to design how and where we talk and how we make communication less of a black box. Thank you.
(audience applauds) - [Host] Thank you very much.
- Oh, you're welcome.
(host laughs) - I'm German.
I tend to shake hands after everything.
It's like, "Oh, we're shaking hands!" (Caroline laughs) - [Caroline] Great.
- This is an incredibly important topic, and you've raised, rightfully so, that we have to address it on a systemic level and on a design level.
Is there something when you work with other companies, and you sort of expose possibly what Twitter or Washington Post or anyone else could be doing to allow for an easier or more consent oriented environment? Do you sometimes find opposition there, or is it generally well-received? - I think it depends upon the company.
So like, I do a lot of design research talks at really big companies, and I do a lot of, like with, for instance, the Coral Project, I was brought on as, like, a fellow and a freelancer. And it was specifically because of the work that I did. So they were already, like, very excited and willing to engage in this conversation, 'cause they also already knew my work.
But there are times where I definitely receive a lot of like push back from really big companies, who will go unnamed.
But I think it often depends upon who you're talking to. And that's often, and it also depends upon your delivery. So I work with a lot of different kinds of anti-harassment activists, for example.
And we have very different deliveries.
Like, I'm not a let's-burn-all-the-bridges person. I'm much more like how do I try to distil all the problems people are having into something really easy to understand.
So whenever I approach a company, I am often like, "I looked at all of your policy, "and I went through, and I actually did a design audit "of your entire interface, so here.
"Do you want to bring me in for a design talk?" And oftentimes, even if they don't listen, that's often greeted so well, because I'm already speaking the language of their working end.
- Mm-hmm, great.
Now when we look at sort of ZeroUI and conversational sort of interfaces where Amazon Echo or Cortana, they might have some personality written into them, but they're also an interface for potential bullying, right? - Totally.
- Somebody could get really excited and sort of flame somebody verbally and off it goes. What's your take on that? - Well, I actually know the writing team behind Cortana, and they're a really fantastic team.
But I don't know if you all know this.
Earlier this year, inside the United States at least, there was a lot of public talk about how Cortana was receiving a lot of sexual assault. And when I worked at IBM Watson, for example, I'd have to do a lot of, like, audits and talk to a lot of very different clients. So the thing that struck me about that is someone in my job had to go through and, like, look at all of those audio logs, like, listen to them and read them if they were transcribed. And then create a deck to show the higher-ups: Cortana is receiving wide amounts of sexual assault, and we have nowhere inside of our language corpus to address that.
We have no design around the conversations for Cortana to fight back or rather end the conversation. And then that led to the Cortana team actually really focusing on how do we address this when people interact with our bot? Like, do we as a company really want Cortana to receive sexual assault, and then how does that fit into our policy? And that's still, like, a design problem, right. People like to push at the edges of things, and if there's no push back, they'll go with it, because it's not human, right? But I don't think that there's enough realisation of how many of these processes inside of the company are actually all humans.
Cortana is an entirely human team, right.
Like, people write Cortana.
People work on Cortana.
People train the ground truth of Cortana.
People code Cortana.
So much of machine learning, especially when it comes to working with bots, it's all people.
It's very humanistic.
- Yeah, and maybe it's companies smaller than Microsoft that are now sort of jumping on the bot wagon, and they're creating little shop bots, and they get abused, right.
But they don't have the means to probably figure this out on their own.
So do you think like there's a standard of civility or consent maybe developing in software? - I think it's a conversation people are having. Google's like social justice think tank is called Jigsaw, and they've spent the past couple months working on Wikipedia information to try to create a corpus around how do you automate, like, harassment reports, and how do you also create a system that can catch harassment? But the thing about this is that there isn't, so a data corpus is like a collection of information, and that's the information you feed a machine learning algorithm to then train the algorithm. So if you're going to train an anti-harassment algorithm, you actually need a lot of harassment.
You also then conversely need things that are not harassment, so the algorithm can learn from both.
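To make the corpus point concrete, here is a minimal sketch of training a text classifier from a labelled corpus, assuming scikit-learn and an obviously toy dataset; the point is only that the model needs labelled examples of both classes.

```python
# A minimal sketch of the corpus idea: a classifier only learns "harassment"
# if the training data contains labelled examples of both harassment and
# non-harassment. The toy data is made up and far too small to be meaningful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are garbage and should log off forever",   # harassment
    "nobody wants you here, delete your account",    # harassment
    "great thread, thanks for writing this up",      # not harassment
    "I disagree with your take but fair point",      # not harassment
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The catch described below: hedged or slang-heavy phrasing drifts away from
# the training vocabulary, so the model's confidence drops.
print(model.predict(["you are, like, so very much garbage, eh"]))
```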
One of the issues Jigsaw is having is that they can kind of, they can recognise what I like to call flamboyant harassment.
I'll let our imagination run with that, but it has a harder time recognising that flamboyant harassment when you start to add more words to it, like very, so, much, eh, okay. Like, when you start to add in actual more of like a linguistic vocabulary, like linguistic slang we engage with, right, it actually has a hard time with that.
And I think one of the reasons it has a hard time is we also don't have what I call like a feminist data corpus of like what is considered feminist language, so then a bot can learn that there is non-feminist language. So if you can start to pinpoint bots to different kinds of conversation that's considered good conversation, meaning if you can create a bigger data corpus around different kinds of the kinds of conversations you want, it's easier to train a system to see what is not a good conversation.
And that's kind of a problem we're in where a lot of the stuff isn't actually like open source. It's hard to engage with.
- Excellent.
There's one question here from one of our fellow speakers Mark Pesce.
What's the difference, and you're free to answer this or not, because it is political.
What's the difference between Wikileaks, for example, doxing a Turkish person and Gamergate sort of doxing someone they don't like? - I think that they're equal.
The reason I brought that up, and I probably should have explained more inside of the presentation, is that Wikileaks had historically, I would say up until this Presidential election, had been viewed as, like, a force of fairness in the world. Like, they were one of the main spaces when it came to political whistle-blowing, and they were seen as like a necessary force.
So given that that's historically how they're known, should they be judged for doxing innocent Turkish people in the same way that Gamergate is? And they often get a pass.
And I don't think that that's actually right. But how do you then, like, look at a system, or look at groups that have engaged in, like, political betterness or political do-goodery, and then they engage in something that's considered not right, so how do you then start to differentiate that? It's really easy to chastise Gamergate.
It's really easy to sort of put them always on blast. Things get complicated when you have to start defining what doxing is.
So for example, like, you know, if Twitter does come out with a rule like no doxing full stop, what happens when you actually need to dox like authoritarian regimes, right? And all of a sudden that's a different sphere of oh, well, we need that.
Well, what happens when the group that does the thing that we need, does a thing that's actually really not good? And that's complicated.
And that's complicated to train any system around. That's incredibly complicated to write any kind of policy around.
So that's more what I meant.
Like, how do we handle that from like a design policy standpoint? I personally think that, like, I think it's abhorrent to dox innocent Turkish people.
Like, I am so not pro that.
- Well, thank you for clarifying that.
I think you've made very clear that we're right in the middle of that debate or even just at the beginning of that debate. And I want to thank you for putting so many valuable thoughts out there that we can sort of chew on for the rest of this conference. - Awesome.
- Thank you very much. - Yeah, thank you.
(audience applauds)