- So if you think back to when you started on the web, some of you started probably a long time ago like me and some more recently, but almost imperceptibly, we've asked the web to do more and more things for us. When the web first started, it was essentially like a bit of paper; we didn't even have images, and we certainly didn't have interactivity except the click. It was a very informational, one-way broadcast medium. Over time, we've just made it do more and more, and that's exciting, and that's, I think, revolutionary, and I think it's only going to continue, but along the way things get more serious.
And one of the things we're doing all the time, and I think there's not a person in the room who doesn't do something like this, is gather a sense of the identity of our user. It might be explicit: they might make an account on our app or our website, and give us credentials and prove who they are so that we let them look at their bank balance and transfer money or pay their taxes or do any number of things that they have every right to have privacy about. Right, so what we're doing is very, very serious, but we have this genuine problem in that bad people really want to know your bank balance and all those other details.
They even want to know things like, you know, have you joined a certain dating service, because there are ways in which they can use that to their advantage. And so the things we build expose the people who use them to a whole lot of potentially really bad outcomes. And to be quite honest, I'm not sure we always take that as seriously as we should.
I'm not saying anyone specifically in this room, but the fact that even major organisations still do these horrendously insecure things to this day.
Not just on the server, but you know, sending passwords in clear text and all that kind of stuff, not doing everything over TLS, right.
Seems to suggest to me that we're still not taking this stuff seriously enough.
The other thing is, I think we've long seen this as an engineering problem, a bit like performance, which is also not just an engineering problem, as you know if you've heard me ramble on about that for the last couple of weeks. But I think it's as much, if not more, a design problem, a user experience problem, to worry about our users' privacy and security. So this is something that's actually been on my mind for quite a while, and in the last few months, I've become aware of some work that our closing speaker for today, Rachel Ilan Simpson, has been doing around this area. It's a particular interest of hers, and it made perfect sense to ask her to come and speak on that.
So Rachel is a UX designer working on Google Chrome, on the Chrome team, and they basically look after everything that happens outside the web page in the browser, the whole user experience.
She has a particular personal interest in privacy, authentication, and security from a user perspective and I've asked her to come and share her thoughts around that.
So to finish out for today on this serious, but really also engaging topic, would you please put your hands together and welcome Rachel Ilan Simpson.
(applause) - [Rachel] Thanks so much, John, thank you. (John laughs) I wasn't sure, but you know, I'm Canadian, so with the combo deal the awkwardness just goes up, OK.
All right, how's the, how's the audio? Real good, OK.
So, as John already said, I'm Rachel, I'm here to talk today about the tension between usability and security, but before I do, I'm gonna take us all a little ways back, um, and ask, do you remember when our purview as web designers was a Geocities website? We're doing it afterwards.
And the most important decision we might make is what kind of hit counter to put on our page. So in recent history we've gone from designing this to this.
And as our world moves more inside of our phones and digital products, we as designers, along with engineers, have a very important role. We've become the gatekeepers to your most personal e-mail correspondence. Your online banks.
And even your medical data.
For digital security, the stakes have never been higher. So I expect that many of us in the room, actually, can I get a show of hands, uh, are you a developer, hands up? OK, designers? Some kind of hybrid space? Different role that I haven't mentioned? What do you guys do? (member of audience, mumbles) OK, so there's a technical aspect of that but it's more on the, like SEO? - [Voiceover] Yeah, but (mumbles).
- All right, explained, yep, good.
What about you? Where was the hand? - [Voiceover] I'm a project manager.
- Oh, one project manager, well done.
OK, so, because many of us in the room touch, directly, user security, I think we all know how challenging the problem can be. In fact, in the context of increased risk, I'm getting a little, am I getting feedback? - [Voiceover] Yeah. - [Rachel] Yeah?
I'm gonna move this down a little bit, maybe. Is that a little better, yeah? So in the context of this increased risk we often think of attacks on security a little bit like this.
We think of them as sophisticated and complex. But the reality, however, is often much simpler. Someone comes and asks your user for their password. Or they write it down, someone exploits their non-secure behaviours.
It's been said that the user's the weakest link in the security chain because human behaviour often contributes to security failures.
So the question that I wanna ask and answer today is, are users really the weakest link? OK, so the brief introduction, I'm Rachel Simpson, I'm a UX designer on the Google Chrome team. And I care about making things that are important easier to use.
And I co-wrote this talk with Guy Podjarny. He's the former CTO of Akamai and founder of a new company called Snyk; he's got a very technical background in security engineering. And I'm gonna tell the story, since John and I have discussed this so often, of how this talk came to be.
So Guy and I were sitting together in the back room, we were both speaking at a conference called Smashing Conference which many of you might have heard of, in the back room, you know, making polite conversation as we waited to go up.
And somehow, just maybe an indication of both of our personalities, we managed to get into somewhat of a heated conversation about, uh, again, the tension between security and usability. And after the smoke had cleared we decided that we would write this talk together. And the reason that we did that is we want designers and engineers to better understand why security interactions and UI often don't work well for users.
And I want us to better articulate it, and do a better job of working together to improve those interactions so they work better for people.
OK, so what's on the agenda? First I'm gonna talk a little bit about why people do what they do, what are these insecure behaviours that they engage in, and why do they make those choices.
I'm gonna give a little bit of context; I'm gonna talk about these behaviours with a few examples of security UIs, so passwords, HTTPS errors, SSL interstitials and phishing, and you can see they're getting yet more exciting as we move forward. By the end of the talk, I'd like all of you to be able to better define and defend the progress that we need to make towards more usable security.
OK, so back to our big question.
Are users really the weakest link? Well, can I get a show of hands: who designs a product here? Or builds one? OK, works on one? All right, does anyone in the room have a product that doesn't have any users, where humans do not use the product? Show of hands? All right, there was one at the last conference, too. What is your product? (audience member mumbles) Oh, OK, it would be intended for... OK. So, my point with this is, we're only human.
And unless we're planning to design software that won't have users, it's important to recognise the nature of human perception.
Non-secure behaviour isn't a matter of just a few people, just some users, a few bad apples, someone who's less savvy or lazy.
The rules apply to all of us, they're the ways in which we understand and interact with the world around us. So there's a few factors that I wanna talk about that are gonna be relevant for different examples we talk about today.
The first is memory, and it's clear what I mean by that: it's about whether one can remember or recall a piece of information at a given time.
And while we often think primarily about our own products or features, they often fit within a larger ecosystem, and it all adds up.
So for users, this can often lead to a cumulative demand on their memory.
Next is attention, which is about focus.
So as the creators of products we often forget, it's not all about us, users have their focus trained on completing a certain task, so peripheral details within the product that relate to security can often fall away or be filtered out.
Basically, people aren't that good at multitasking. In a related vein, as users step through each interaction in your product, they make many small decisions along the way, and this has a cost. Imposing the burden of a higher cognitive load by failing to communicate is a mistake that we shouldn't make.
And this last example is about previous context. You can think of this as technical expertise or simply experience.
But it's important to note that users' previous interactions make up their mental model.
It might be hard for them to recognise the implications of something that they haven't seen before. OK, so we'll put this all together, I'm gonna start with an example that everyone can relate to. Passwords.
So, I think we all know from our own personal experience if not from the data that passwords don't work well for users.
It's become common in some circles to refer to users as the weakest link in the security chain, and while that debate has, fortunately, evolved, users are still struggling.
We know that people engage in non-secure practices like writing down their passwords and using passwords that can be easily guessed; all of these things kind of come together.
But the big question is, why exactly are passwords so hard for people to use? You'd think it would be simple, right? The system's made up of a username.
And a password.
And of course, in order to be secure, you'd have a different username and a different password for every account. Then you might also have to remember if you've changed the password recently, and of course, maybe you created the password with certain policies in mind like the length or whether you can use certain characters.
OK, so that's four pieces of information; it's not as easy as just your username and password, but, hey, who can't handle that? Well, the other important context to keep in mind comes from a scan of 20,000 e-mail accounts run by a piece of software called Dashlane Inbox Scan, which found an average of 130 accounts per e-mail address for each American user scanned. There were different numbers for people from different countries. So rather than looking like this, this kind of information that your user needs to call up in the moment they want to sign in to their account looks a little bit more like this.
Whoo, OK, everyone just go, whoooo.
Ready, go.
- [Crowd] Whooo.
- Yes (laughs).
So the big takeaway here, I think, is that memory is a limited resource.
Information gets worse, or information recall gets worse with time and infrequent use, and especially if the thing you're trying to remember is random instead of meaningful.
Which is exactly the kind of thing that we teach people to use for their passwords. I'm gonna reference an XKCD.
This comes from a comic many of you might have seen, and it says we use passwords that are hard for humans to remember but easy for computers to guess. It's no wonder that users give up and engage in non-secure practices.
After we've forced them to remember something like this. OK, so at this point I should mention, I've already got into a discussion about this today, and there will be more afterwards: password managers are good. I give password managers a hard time because I change devices a lot, and so they add a lot of complexity for me. But they're good, and they're better than what a lot of people do. But for all of us building our own products, you can't control whether users are going to have a password manager or not. You have to remember that a good portion of the people you're designing for may not have one.
OK, so there are a few other challenges that users face when it comes to passwords.
When creating them, it can be difficult to tell whether a password is strong or not.
So people tend to fall into certain patterns, like making the first letter uppercase and putting digits or symbols last. Do you do that? (laughs) And again, this is easy for attackers to guess, but hard for users to remember.
On top of that, another strategy is that attackers enumerate usernames with common passwords. They don't necessarily care whose account they're getting into, as long as they get in. Given that the top 25 most common passwords account for about 2% of all passwords, they're able to pair the usernames they've got with the common passwords and evade lockout mechanisms.
And this seems to work quite well for them. Lastly, it's common for people to reuse passwords. So even if your system is secure, it's very likely that users are using the same password with other accounts, so if another system gets hacked, then attackers are likely to try the same credentials on your system, too.
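As an aside for anyone building a sign-in flow, here is a minimal sketch of two common defences against exactly this: rejecting the most common passwords at signup, and rate-limiting failed sign-ins per username. The password list, window, and limits below are illustrative placeholders, and the store is in-memory; a real deployment would check against a much larger corpus and persist state.

```python
# Minimal sketch (illustrative thresholds, in-memory store): reject the
# most common passwords at signup, and rate-limit failed sign-ins per
# username so password spraying can't quietly walk through accounts.
import time
from collections import defaultdict

COMMON_PASSWORDS = {"123456", "password", "qwerty", "abc123", "111111"}

failures = defaultdict(list)            # username -> timestamps of failed attempts
WINDOW_SECONDS, MAX_FAILURES = 300, 5   # at most 5 failures per 5 minutes

def acceptable_password(candidate: str) -> bool:
    # Block trivially guessable choices before they ever become a secret.
    return len(candidate) >= 8 and candidate.lower() not in COMMON_PASSWORDS

def attempt_allowed(username: str) -> bool:
    # Drop failures outside the window, then check the remaining count.
    now = time.time()
    failures[username] = [t for t in failures[username] if now - t < WINDOW_SECONDS]
    return len(failures[username]) < MAX_FAILURES

def record_failure(username: str) -> None:
    failures[username].append(time.time())
```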
There's a positive message at the end (laughing). Everyone looks so depressed.
So what is it that we can do? Well, first on the design side of things, there are some interesting explorations out there, and I should add that, you know, nothing is a complete solution yet, but I want everyone in this room to be inspired and excited to try out new things. So the first example I want to share comes from Medium. They allow you to sign in simply with your e-mail address. You type in your e-mail address and you click sign in. And they send you a link.
So they send a link to your e-mail account; you push the button and you sign in to the account. This is interesting because it lets you sign in through what it assumes is a more secure system. In this case, Gmail uses two-factor authentication, so even if a password is compromised, the account itself remains secure.
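Medium hasn't published its implementation, but the general "magic link" pattern Rachel describes is simple enough to sketch. Assumptions here: an in-memory token store, example.com as the site, and a hypothetical send_email helper; a real system would persist tokens and send through a mail provider.

```python
# Sketch of the magic-link sign-in pattern: no password, just a
# short-lived, single-use token e-mailed to the address on file.
import secrets
import time

pending = {}                 # token -> (email, expiry); use a real store in practice
TOKEN_TTL = 15 * 60          # links expire after 15 minutes

def start_sign_in(email: str) -> None:
    token = secrets.token_urlsafe(32)                  # unguessable, single-use
    pending[token] = (email, time.time() + TOKEN_TTL)
    send_email(email, f"https://example.com/auth?token={token}")  # hypothetical helper

def complete_sign_in(token: str):
    entry = pending.pop(token, None)                   # pop: tokens can't be replayed
    if entry is None:
        return None
    email, expiry = entry
    return email if time.time() < expiry else None     # the signed-in identity, or None
```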
OK, so I'm gonna talk about two-factor authentication very briefly.
I still feel like I'm getting a little echo there. Um, so can I see a show of hands, who knows what two-factor authentication is? John, you're doing well with your community, nicely done. OK, so for anyone who hasn't heard of it, two-factor authentication simply refers to using two of three factors to identify yourself when you authenticate. Those factors are: something you know, which is hard to guess, like a password; something you are, which is hard to fake, like biometrics, a fingerprint for example; and something you have, which is hard to copy, like the RSA token we show here.
If Guy were here he would comment that biometrics are better, but they're not easy to do on the internet, and you can still break in; that's true.
And tokens aren't necessarily more secure they just have different problems.
So tokens like this make you more vulnerable to nearby attackers, but you're less vulnerable to people who are far away.
It's always a trade-off.
But I think one of the big downsides of two-factor authentication in the past has been the clunkiness of these kinds of tokens, right? You've gotta get all of your users, you contact them, and you say, I'm gonna send you a token, and then you send them the token, and then they say, "I got the token," and then a week later they say, "I lost my token," and, "My token is dirty," or it's not working anymore. So there's a lot of management of the logistics of that system, for something that I think a lot of users are a little intimidated by.
But fortunately, now we all have phones, or many of us have phones, so there are some options out there, like Google Authenticator and Authy, that allow you to, again, piggy-back on the existence of these handy hand-held devices.
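For the curious: apps like Google Authenticator and Authy generate their codes with TOTP (RFC 6238), which is compact enough to sketch with just the Python standard library. The secret below is a made-up example value, not a real credential.

```python
# Sketch of RFC 6238 TOTP, the scheme behind Google Authenticator and
# Authy: HMAC the current 30-second time step with a shared secret.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # time steps since the epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone share the secret once (usually via a QR code); after
# that, each side can compute the same 6-digit code independently.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret
```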
Alternatively, you could always defer like Medium has to Facebook or Google or Twitter, or whatever other system is using two-factor authentication which I think is an interesting solution.
There are some great solutions out there, both technical and design, but ultimately we sort of need to find them ourselves. Which is why I've collected some design principles that you can take away.
First is, for designers, finding new, more flexible solutions like this seems like a really great approach to improve the user experience. So be more flexible.
But not too flexible.
You're only as secure as your weakest link. So if you're piggy-backing, for example, making sure the primary account uses two-factor authentication is obviously a must. OK, so I have a little interactive challenge for everyone today.
We're going to, on the next page... please spot the security information; I want you all to look for the security information. Those of you who were at the last event, don't tell them, I'm looking over there.
OK, so, can I see a show of hands, who sees the security information? Ooh, these guys are better than the last ones. I saw one, two, three, OK, yeah? A few people, pretty good.
All right, if you don't see it, don't feel bad. So, I'm gonna, John saw me last time, I'm trying to remain calm.
This is the security information on the page. It's the little tiny triangle on the top left and it indicates the connection security of the page. So that's where that is, yeah.
I have to try to stay calm when I go through this section. OK, so the way that we expect users to interact with the security information is to monitor it, continually, as they step through the web. I'm stepping out onto the page.
OK, I'm checking on this thing, good.
I'm going, I'm checking.
And it's really, we expect them to just kind of every time they click on a link, at every time that, OK, I'm calm.
OK, so we expect them to notice that there's a security risk or relevant problem and then take the appropriate steps, like, for example, not entering sensitive information like bank details into a site which is not considered verifiably secure.
So we think that users are engaging in this perfect behaviour, that's how we kind of imagine it, but in reality, their interaction with a web page looks a little bit more like this.
We have tunnel vision.
We're thinking about what we need to do, and not on the other aspects of the page.
And we filter out information that's not relevant for us. So we as designers need to remember our user's attention is focused on the task at hand. So from our perspective, we need to think about how we can refocus it at the right moments. And I think there aren't a lot of great examples of this out there at the moment, but for example, using animations to get the user's attention when a security risk is present might be a good starting point.
And keeping in mind, of course, that peripheral motion demands attention, so use it appropriately and sort of not too often.
OK, so in our previous example we talked about the little triangle and most people aren't going to notice it, but say that the user did notice it.
They're one of those two or three people who stuck their hand in the air and they thought I'll click on that.
So, I'm gonna read this out loud. So they click on the icon and they actually read the description, and here's what it says.
It says, "This website uses a deprecated signature algorithm based on SHA-1." Oh, obviously.
No, even, I work at the Google and I don't even know what that means.
No, it's been explained, but, what are we expecting the average user to do in this case? Most users, I think, wouldn't understand a single word in this sentence.
Well, "use" and "a".
OK, so there's a nice example that I have here from Firefox lately; they do a good job. They say simply, "The connection is not secure." This is really clear, and users can understand, kind of, what that means.
I think, even better, I like a message that's more specific. So it could say, for example, "Your connection is not secure.
"Please don't type in any sensitive information right now." So the big takeaway for this section is with security information, be timely, so security information should appear in the moment when it becomes relevant.
And it should be meaningful.
It should explain in clear terms what the implications are and what the user should do.
OK, all these examples get more and more technical, so, it's gonna be fun.
This one, I think, is something all of you've probably seen in the past.
So this is an SSL warning from an older version of Chrome. And, I think it's Chrome 36, I'm sure someone here can tell me.
And it follows the previous rules that I just described pretty well.
It's not subtle, it's sort of timely, it appears. It's meaningful and kind of clearly says, you know, there's a problem here.
And it blocks your path when you're about to visit a page whose SSL certificate is not trusted.
Um, so again, this seems to be following the rules, what's the problem? Well, this is the problem.
63% of users continued through the warning to the page that they were going to.
And this seems a bit surprising, right? Please don't go to this page, it's pretty clear. But, so, if you look at it closely, it says, OK, the site's security certificate is not trusted. All right, maybe some people understand what that means. And then it says, I'm gonna read this: "You attempted to reach 172.20.0.1, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has gen..." and I'm lost now, I'm done.
But, I'll skip through to the end.
"You should not proceed, especially if you've never seen "the warning before for this site.
"Proceed anyway." (audience laughter) So, this could be a little clearer, and what we have to keep in mind is that users are primarily focused on completing their task. And they don't tolerate a lot of extra decisions in their way.
So when this is combined with a warning that places this high cognitive load on them without a clear course of action that's being suggested, it's pretty likely that the users are just gonna treat this warning as a barrier between them and the desired action and, hey, they're taking the default action anyway, right? Proceed anyway.
So the Chrome team at the time noticed this was a problem; they especially noticed that Firefox was actually getting lower percentages of people clicking through a security warning similar to this one. So they said, we're gonna redesign the page. And they made this.
This page is great. It's interesting because it was created in an attempt to improve adherence to the error message, and it uses pretty clear visual cues to voice an opinion and promote a default course of action. So you can see here, it's got a big red lock. It says your connection is not private, which could be a little clearer, but the biggest thing here is there's a big blue button that says "Back to safety." And if you don't want to go back to safety, if in fact you kind of understand what's going on and you wanna move through to the page, then you can click on the Advanced link there, read it, and click on the link below. I'm gonna get in arguments later, I know.
With this change, only 38% of people continued through the warning, which is a much better percentage than 63.
So this, we would consider this a success from Chrome's perspective, but I think the big takeaway for us, as designers, is this: making decisions has a cost to your users. We designers are often very focused on our UIs. And we think that users are paying attention, stepping through each interaction, every little detail, but in fact, they're just trying to get where they're going. And when they need to make a lot of little decisions on the way, it costs them. Without a clear indication of our opinion, they're just stepping over another barrier. So the big takeaway for this is, we need to offer an opinion.
In this case, for example, the redesign lowered the difficulty of the decisions, more people chose the default action, and the secure course of action was the clear one. So, I saved the most complex story for last. I'm gonna take a little glass of water here before I read you this letter.
Maybe a little small.
So this is an example.
You may be familiar with this kind of thing; it's an example of something that's known as a 419 scam. But actually, this is interesting: it dates back to the 18th century. And of course, the modern version is the typical Nigerian prince e-mail that you get in your inbox. And it's a fairly simple story for anyone who's ever encountered it.
Someone is contacted by a stranger claiming to be from a royal family, or from a big company, or from some other source of social status.
And they ask the target to help them move money, which, of course, requires providing some personal data, like bank account details and other information like this, and then, of course, in return they'll be rewarded for their assistance with some large monetary sum.
And so I'm gonna go ahead and read some parts from this letter.
"I seize this opportunity to extend my unalloyed compliments," I love unalloyed compliments, "of the new season to you and your family, "hopping that this year will bring more joy, "happiness, and prosperity into your household. "I am therefore seeking for a reliable person "that will play the human role as the (mumbling) "which is in the amount of 32 million pounds Sterling," which is how I always keep my money.
"Please respond immediately via the private e-mail address, "j_jerrysmith@aol.com." OK, so, this is obviously, we can see some strong indicators here that this may not be trustworthy.
In particular, around the strangeness of the language, and the lack of real indicators on the letter that you might expect if you were contacted by the real J. Jerry Smith to move his sterling, like seals and stamps and signatures.
I mean, of course, the situation itself is impossible. So, in theory, people should be able to spot this. Guy and I presented this talk at a couple of more technical conferences, and so everyone recognises this quote.
I don't know if anyone here does.
Um, it's quite popular. It says, "There's no patch for human stupidity." And I think for many of us, especially because we're all quite technically savvy in this room, it may seem completely shocking that this type of ploy could work.
That people would send J. Jerry Smith their money
and give him their banking details.
But in fact the ThreatSim State of the Phish study in 2014 found that 23% of users opened these e-mails, that's nearly a quarter of people, and 11% on average clicked through.
Which is, still a fairly small percentage of users, but given the implausibility of this e-mail, it's a little shocking.
So I think the thing that we need to remember is: you don't know what you don't know.
A user who has not seen or been made aware of this kind of scam could easily fall for it. When this particular lack of previous experience or awareness is combined with this emotional story of helping someone out, being paid handsomely for your assistance, being the hero, it's very easy to see why someone might be taken in by a scam like this. And this kind of problem is a risk to all of us and the products and platforms we're building. Better put than "you don't know what you don't know": users do not generally perceive the absence of a warning sign.
But, I think I want to show you guys how difficult it might be to notice something that you're not aware of.
Can I just see a show of hands, who's seen the monkey business illusion before? OK, so if you guys could do me a favour, try not to giggle. That would be really helpful.
And then I'm gonna show you this video.
We're gonna start at the beginning though, so that'll be better.
- [Voiceover] The monkey business illusion. (tape rewinding) Here comes the gorilla and there goes a player, and the curtain is changing from red to gold. - Out of curiosity, did anyone not see the gorilla? OK, again, these conferences are pretty good, you guys are pretty on top of it, this is very, oh, interesting things are happening with the microphone. It seems to have disappeared.
There it is.
I'm just gonna stick it there for now.
OK, so that's a really great example of human limitations. For anyone who has been to a conference where it's not all techies, you'd be shocked; there's hooting and giggling, it's quite impressive. But, again, about half of people who haven't seen it before miss the gorilla, and similarly can miss hints that something is awry in the case of phishing. So the big question is, how bad is phishing really? Sure, the scam I showed you could probably affect only a very small percentage of users.
And by learning about indicators like spelling and storyline, visuals, and the source, most users could probably become immune, and that's true.
But unfortunately, a sophisticated attacker can fine-tune a scheme and fool almost any user. So I'll show you some examples here.
So this is a story that Guy and I are able to retell, by the courtesy of Andrew Betts who is a security engineer, and I'll just tell you the story and you'll get the context. A couple of years ago, the Syrian Electronic Army launched a phishing attack against several organisations including the Financial Times.
Many users in the organisation received an e-mail like this one, holding an HTML link that looks like it's going to CNN, but in fact it's going to a malicious website. The link opens a page that was identical to the Financial Times corporate single sign-on. And it prompts users to enter their password. Now fortunately, the Financial Times is a pretty savvy organisation, and so while a few users fell for it, most didn't, and they reported it to the IT department. They responded with a company-wide warning e-mail asking everyone to change their password.
However, by then, some of the inboxes were already compromised, and so the attackers saw that warning and sent a similar e-mail from the compromised e-mail accounts, but this time holding malicious links.
So then, of course, people went and changed their passwords, and the attackers were able to get them that way.
And this is the level of sophistication: they sent their e-mails with the malicious password link only to people who were not in the technology departments. These e-mails mimicked the IT organisation's visuals and wording, they looked like this, and they were sent from a genuine Financial Times e-mail address, and once again displayed a login page that looked just like the true corporate login.
It's very hard to spot; it looks a little bit like this. And to make things even worse, some users opened the e-mail on their phones. You guys are quick, this is good, I could just speed up. So they opened the e-mail on their phones, and instead of seeing singlesignin.financialtimes.evil.org, they just saw singlesignin.financialtimes.
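To make the trick concrete: that hostname reads as trustworthy from the left, but ownership is decided from the right. A tiny illustration, using the URL from the story:

```python
# Why the truncated URL fooled people: browsers and users read domains
# from the left, but ownership is determined from the RIGHT.
from urllib.parse import urlsplit

host = urlsplit("https://singlesignin.financialtimes.evil.org/login").hostname
print(host.split("."))
# ['singlesignin', 'financialtimes', 'evil', 'org']
# The registrable domain here is evil.org; "financialtimes" is just a
# subdomain label the attacker controls. A mobile browser that shows
# only the first labels displays exactly the trustworthy-looking part.
```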
So again, Andrew Betts, the security-savvy director of Financial Times Labs, who kindly allowed Guy and me to retell his story, has written a lot more about his experience at labs.financialtimes.com; there's the address up there if you want to read about it, it's quite interesting. Fortunately, the damage was minimal, but the experience was quite jarring for Andrew and a lot of people there, and they subsequently rolled out universal two-factor authentication across the organisation.
So the big point of this piece is that a good phishing scam can fool just about any user.
And for us, designers and developers, the challenge is that phishing often happens outside of your system making it very hard to catch.
One last example, my favourite example, it's called Guy gets phished.
So this is a picture of a letter that Guy, my co-writer got in the mail.
He received it shortly after he incorporated his new company Snyk and had filed for a trademark in its name. This letter, which looks quite professional, asked him to pay an invoice for the registration. It uses urgent wording and gives a tight deadline. So when he saw all this, he went ahead and called his lawyers and said, "Oh, I've received my invoice, I'm gonna go ahead and pay it." And, you know, very fortunately, the lawyer said, hold on, wait; they'd actually already paid for the filing, and this was, in fact, a phishing scam.
And he subsequently received a lot of letters like this, a lot of people have received similar ones. Oh, there's fun things happening here with the microphone, guys.
I'm just gonna stick that on and hope it works. Well, there we go, it's the end of the day. And so this is a really good example of first, how a computer-savvy user can be fooled by a simple phishing scam.
But it also shows that phishing happens outside of your products, you have to be kind of conscious and aware of that.
Fun.
So the big question is, what can we, as designers and developers do about it? So phishing is extremely hard to counter, and I don't think that there's any single solution to fix it right now.
In case you've heard of anything, please let me know. But this is the example that I've brought today. So unfortunately, the only interesting system we found for fighting phishing is this.
It's called a site authentication image.
And the idea was to show you a personalised image that you become habituated to when you sign in.
Unfortunately, studies showed that many users will provide their credentials even if they've been warned, and even when their personal indicators aren't there. So they're, basically, again, bad at noticing the absence of warning signs.
Hold on.
Oh, I like it when my audio works.
(sad trombone music) OK, so I thought long and hard about leaving this example out, since we don't have a nice solution to share. But I decided to leave it in because I'm hoping that someone in this audience will find an elegant way of addressing the problem.
Or maybe tell me something that I'm not aware of. You're all responsible now.
OK, so it's about time for me to wrap things up. The first thing I want to leave you with, though, is know your audience.
It's for problems like these, and especially ones like phishing, that we all need to do better when it comes to articulating the reasons why people behave the way they do.
Summing it up: memory is a limited resource, so be more flexible.
Attention has a focus, so make sure your security information is timely and meaningful. And lastly, making decisions has a cost, so offer your opinion.
Thank you.
(applause) - Thank you so much, Rachel.
We might have time, I think we have time for a question or two, an observation just before we break and have a drink, and I think it really is an important and challenging issue, and I, as Rachel observes, there probably are no absolute solutions, but I guess we could all do better. So have you got any thoughts or questions for Rachel? There are quite a few I can see, so I'll try to compare and contrast.
Kevin, that makes your life easy, Mel.
- Rachel, did they, um, did the case of the Financial Times phishing with the long domain name being hidden on all the browsers, did that lead directly to the way mobile browsers have changed how they display long domains, or is it just an aggregate of cases like that? - I would guess, though I don't, oh, is my mic on? Yes, good.
I would say, and I don't know I don't have direct experience with that piece, but I would say that it's likely the aggregate experience because I think it's something we're still concerned about.
And in particular it's a problem because a lot of users don't understand URLs, which is another place where user experience design can help, so. - And URLs were never designed for humans in the first place, which is interesting, and, you know, some mobile browsers are moving towards hiding them completely; we just get the domain, not mentioning any names. I guess those sorts of decisions, which are also made from a user experience perspective, trying to hide technical details from users, can also erase important information. So, I mean, do these kinds of things get debated internally with the Chrome team? You know, when someone raises a feature, are the security folks going, well, well? - Yeah, and the security team in Chrome, it's all right, I'm not gonna preach too much about Chrome, but the security team at Chrome is extremely dedicated and extremely badass.
We're being videotaped, I'm gonna have to go with Dominic. Dominic is my favourite security engineer, he's really, really dedicated.
You know the incognito icon with the glasses and the hat? He has that hat.
(laughter) Who does everything over TLS on their website? Yeah, you really gotta turn it on now, people. It's not that hard anymore, all right.
- Well, that's the thing, I don't blame them because the user experience for getting yourself into TLS is really poor.
- Oh, it was; it's got a lot easier, and a lot cheaper. But the cheap part-- - Yeah, I think you can do it for free now. - Lots are free, but the user experience, which has been horrendous, is an awful lot better now.
Anybody who's got any sort of commercial budget should just be on TLS, I'm terribly sorry, I think that is a bit of a given. Because what's gonna happen, of course, is that another part of Google, unrelated in any way to the Chrome team, i.e. the app (mumbles). You know, the same way they made Mobile-geddon, remember we had the mobile-pocalypse recently, where all of a sudden everyone's responsive, right? Because you can get a better search ranking.
I suspect the same thing's gonna happen with TLS, if it hasn't already, has it already? But basically, all things being equal, TLS will get preference over anything else; Google kinda does encourage certain practices in that way, so I suspect that might be coming. If not, in a way that's not deceitful, try and convince the higher-ups it might happen that way, and that maybe we should be investigating doing TLS, because it's certainly something we can do that will address a lot of these things.
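For anyone taking John up on this, "everything over TLS" usually comes down to two application-level moves once you have a certificate: redirect plain HTTP, and send an HSTS header. A minimal sketch, assuming a Flask app with TLS terminated in front of it (the framework choice is illustrative, not prescribed by the talk):

```python
# Sketch: force HTTPS and tell browsers to remember it (HSTS).
# Assumes TLS is terminated by a proxy or the server in front of this app.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to the HTTPS equivalent, permanently.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # Browsers that see this header refuse plain HTTP for the next year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def home():
    return "hello over TLS"
```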
At least on our own web sites, right, because who's currently connected to the network here? Who jumped on the free network? None of you are afraid of man-in-the-middle attacks? I mean I would trust us in Melbourne without a doubt, right? But it's just a user pattern, we're all just, ah, jump on the free wi-fi (mumbles).
- It would be amazing to do a demo at one of your conferences, like change the website the designers go to. - [John] Troy Hunt did that.
Who was here a couple of years back at Code, where Troy Hunt did that one? Right, he actually set up his Pineapple; it's just trivially easy to do.
You'd be terrified, all right.
We've got a video of that, we'll send it around. Josh, you had a question back there? Hang on, we'll send a mic up, so.
Sorry? No, we'll wait for the mic so we get it for posterity. I know you've got a very loud voice, Josh, but. Josh is always the very best-dressed person at our conferences; up your game, people. - I'm willing to be out-dressed, just give it a go. Uh, how long, in terms of years or generations, do you think it will take for this to just become habit for both users and designers? Like locks for cars and things: we always lock our cars before we leave them, we lock the doors to our houses, etc. - [Rachel] So by "this" do you mean engaging in more secure practices? - [Josh] Yes, yes, that's what I included in "this". - So, I think, this is a Rachel opinion, not a Google opinion.
I think that we would be lucky to see the trend be in people improving rather than going down. The reason I think that is because I've spent a lot of time working on products for emerging markets, and I think about, how do people who've never used a desktop computer before, who may be navigating the web in English which is their second or maybe third language, how do they interact with the URL? And often, the answer is not at all, or not very intuitively.
And I think that now we're seeing, as you mentioned, John, that there are a lot of trends towards sort of hiding the URL, even... wait, that's out, yes? Even, you know, Chrome's providing the custom tabs, and people are kind of tucking everything away, and I think the reasons for that are clear, right? People don't understand the URL, and it's kind of ugly, and why was it even there anyway? But unfortunately the URL is, in most cases, the only way that we can tell we're being phished. So, I think (laughs), like much of my talk, I don't have a great answer, but I think that, you know, as we design our products, we need to find ways to design that learning into the product in a way that is sort of form-follows-function, where we show people how to use it rather than simply telling them.
Because I think that's the only way we're gonna see a positive trend.
- All right, maybe time for one more, just over here, Mel. Oh, OK, I'll give you both, one over there, one over there. It won't... Mel's heading over there. - Can I make a user experience suggestion, or a UI suggestion? So that "proceed anyway" button, can we call it "I feel lucky"? (Rachel laughs) Or maybe "Do you feel lucky, punk? Well, do you?" You know, really play up the fact that maybe they shouldn't be doing it.
- [Rachel] Yeah, I think, I mean, that's definitely about showing a clear opinion. I think in all of our products we can think about the way we want to message that. You know, like maybe it is really strong-- - You know, "I'll proceed anyway," you know, rather than like, really? - Are you serious, come on, man? This is a bank website? I'm not cool with it.
- [John] All right, in the back.
- [Voiceover] Yeah, actually, on that point, - [Rachel] Can you please just wave your arms? I'm sorry, yeah, there you are, OK, good, go. - On the, yeah, the SSL "proceed anyway" figures for non-secure sites: did the data consider local connections? So, for example, you might connect over SSL to a local Nest or local internal websites; those regularly don't have valid certificates, so users are getting programmed to proceed on those anyway. So then when they come out to the real scary world, they're going through anyway.
Has Google considered that at all? - Thank you; you clearly know your stuff, as this is not the first time that I've gotten that question. Um, yeah, it's something that I think is important to consider. A lot of our users, especially, I mean, Chrome, a lot of our users are very technical... bad things are happening, sorry.
So it's something that we've considered.
The information has gone back to the mother ship. And I think it's something that we'd like to address. Though I have no knowledge of how or when that might happen. - [John] And one more, right over here.
Thanks, Mel.
Thank you, just work off those zero calorie donuts. - [Rachel] There are donuts, I didn't see the donuts. - [John] No, she got her own.
What do they call these things? - [Rachel] Someone was telling me about donuts in Melbourne. - [John] Donut Time.
- [Rachel] Donut Time.
- [John] It's like Suntory Time, it's Donut Time. - [Rachel] I think we're going to Donut Time after dinner time.
- [John] Yeah, cause we need more sugar right now. All right, last question for today.
- Um, I was just wondering if there was more information in the data about trends to do with security, like, by age bracket or socioeconomic data; in terms of, like, are millennials more likely to just sort of bypass security warnings, or you know-- - [Rachel] Are old people worse at it (drowned out)? - Yeah, is there anything in the data to actually say that there is a trend there? - I actually don't know, I haven't looked at that data. Um, but I have done a lot of interviews with a lot of people.
I would not say that.
So, from my entirely anecdotal experience interviewing a lot of younger people and a lot of older people and a lot of people also from emerging markets, I would say that young people don't always necessarily know better.
That said, I think, John, you have-- - [John] They actually think they know better. - Yeah, that's really a problem.
Actually, it really is a problem.
And I think, and you have an anecdote which I'm gonna have you share, John, but I think that one of the problems that I would, again, totally anecdotal, I would expect to see is that a lot of older people might not be aware of some of the risks.
Not that any younger people would be much more aware, but I think that we're more plugged in to the technology. This is becoming extremely anecdotal, I'm sorry. But I think it is, to some extent, true.
Oh, I want you to share your relative story. - Uh, about my mother-in-law, yes.
So I have a very educated and intelligent mother-in-law who's a little bit older, you know, as many mothers-in-law tend to be.
And she had a multi-factor attack on her, which started with compromising her Hotmail account. And then some months later, obviously, that information was used to almost get access to her bank account.
And at that point, obviously, you know, they had a pretty good prime suspect in terms of somebody they could really compromise. So she got a phone call, and of course, what have banks trained us to do? Hey, this is X from the bank... anyone here from a bank? Don't do this.
"From Bank Y, I'd just like you to answer these questions to identify yourself."
Right, seriously, they do that. They have trained us to essentially cough up to anyone who rings us claiming to be from that bank.
I had this happen with a major credit card provider only a year ago, so it hasn't gone away.
Now, of course, she had two-factor authentication turned on, so it's safe, right? No. What they did was convince her that the messages being sent to her mobile phone with those codes were there to ensure that she was who she said she was while she was talking to them.
Right, so at this point they've now compromised a very intelligent woman, and certainly a woman who's used technology for over a quarter of a century.
So even then, when we turn 2FA on and so on, I think there are still gonna be weak points.
On top of that, the bank then allowed an incredibly spurious transaction that matched nothing in her profile.
It was an order of magnitude more money than she'd ever sent, to a person she'd never sent money to.
Like, again, I think security doesn't stop at the front door, right? To my mind, internally within our systems, we should verify that the kinds of behaviour someone's purportedly engaging in match what we might expect of them as well. OK, we can get some false positives out of that, but we're talking about a transfer of a very large sum of money that looked nothing like any transfer she'd ever made.
And how many people have travelled overseas, spent a very small amount, and been, you know, stopped by your bank saying, oh, this looks spurious? Right, so.
So I think this is a full-stack problem, right, this is an end-to-end problem.
And don't even start me on kids who use their parents' devices, because I think that's an attack vector there. - [Rachel] Oh, I thought
we were gonna start on, uh, security questions; that's the other thing we shouldn't talk about. - [John] Security questions. - [Rachel] Security questions.
- [John] Yeah, yeah, yeah. - Mother's maiden name.
- [John] Yeah, mother's maiden, (mumbles) I've learned a really good hack for that. Especially since, say, Apple still requires you to give an answer to a security question: use your password generator to generate a password and put it in there, and remember it in your password manager. Because if you're required to... like, that is the weakest link. Your mother's maiden name should effectively be of similar value to a 28-character password generated by, you know, some crypto-algorithm that Gauss would have trouble understanding.
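A tiny sketch of the hack John describes, if you want it in code form: generate a random string and let your password manager remember it as the "answer."

```python
# Treat the security question as a second password: generate a random
# answer and store it in your password manager alongside the real one.
import secrets

security_answer = secrets.token_urlsafe(21)  # 21 random bytes -> 28 characters
print("Mother's maiden name:", security_answer)
```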
You know, the weak links are everywhere in this. And to be quite honest, it keeps coming back to, I think, a design problem, and I think ultimately it comes down to: we just don't try hard enough. Right, so you can see I'm reasonably passionate about it, and not entirely because of that experience of a very close family member, but I think, you know, it's time for us to do better on this.
So hopefully Rachel's shared some thoughts around that, and hopefully, more importantly, we've started that conversation. I think it doesn't belong to any one place. This is the problem I see with performance and a lot of other things.
We sort of just sheet it home to one part of the organisation, rather than seeing it as something that cuts right across the organisation, to be understood at senior decision-making levels. I mean, why has no one in the bank turned around and said, this has gotta stop now: we are not gonna ring people up and challenge them with questions to give back to us.
Someone just has to own that and make it happen. So, um, anyway, enough ranting from me.
- [Rachel] Thanks for having me, thanks for listening, guys. - Thanks, once again to Rachel, for a fantastic, stimulating presentation. (applause)