Design for Security
(upbeat whimsical music) - Hi, okay, let's see.
Hopefully not too many stories about that and more stories about security.
Woo, yeah! (laughs) Kia ora everyone, I'm Serena, and I am a product designer. But specifically I care about the intersection between information security and design.
See, I grew up on the early web back when websites looked like this.
And I hung out with a lot of people who did a lot of this. Allegedly, allegedly.
So I grew up with a pretty clear understanding that security is important, but as I started working in the world of design, I realised that security was really nowhere to be seen. And in fact I was really happy this morning, in Gretchen's keynote, when she mentioned that security has to be usable and we have to start thinking about it.
I was like "woo!" in the back.
Now I don't know why us designers don't want to hang out with the security crowd. I mean, they seem perfectly friendly, and normal, and nice.
Just very approachable people.
(laughs) Why is he wearing a gas mask? I have so many questions.
Anyway, regardless of how security people are portrayed in Hollywood, or stock images, or at your local Harvey Norman, I think everyone here can agree on one thing. And that's that the internet owns our lives now. All of our lives.
From our friends to our family to our most inane thoughts, our work, our hobbies, even our sleep is on some app or some website.
It's on someone's computer, somewhere.
And just like how cars come with locks and blenders come with lids, we need to think about how we protect our users. Cool, that sounds good, right? Just one problem: no one cares about security.
Sure, we all care about it in theory, but at any given time, that's not the number one thing we're thinking about. It's kind of like accessibility and performance in a way. We don't really think about it much, but it sits at the foundation of good design. And if we don't get our foundations right, then everything else collapses.
I guess what I'm trying to say is, good security is a part of good design.
The two are inseparable.
Now, this goes against the common narrative. There's the usual pervasive assumption that security and usability are mutually exclusive. That to make something more secure inherently makes it harder to use, and to make something easy to use inherently makes it less secure.
But this couldn't be further from the truth.
And in fact, the opposite is true.
If a system is secure, then it allows the user to complete their tasks without exposing themselves to undue risk.
And that necessarily requires usability.
Our two fields are inherently interlinked, even though sometimes it feels like we're working against each other.
So maybe it's okay that no one cares about security, because they shouldn't have to, we should.
It's our job to build accessible, performant, secure experiences for everyone whether we're security experts, or developers, product managers, designers, it doesn't matter, because it's our job to care.
Sometimes I feel like in our industry, we get so caught up in making experiences disruptive and delightful that we forget that sometimes all people wanna do is send their friends a fricking message without being spied on by the Mossad or something. It's our job to care.
And I'm not saying we all have to become security experts overnight, but as our world becomes more and more digitised, we need to start thinking about the ways that technology can be exploited.
We need to start thinking like hackers.
Learning about security makes us better designers, developers, and product people. So let's do this.
Today we're gonna go through four security considerations from a design point of view.
And hopefully I'll be able to show you that designers and developers and product people, we have a lot more influence over security outcomes than we think.
One of the main ways that we can influence security is through paths of least resistance.
So in security we are used to putting up a lot of walls. Like, "Oh, sorry, did you wanna do a thing? Have you considered not doing that thing? Oh, you need to make a connection for your job? Have you considered getting another job?" The internet, oh, that was a huge mistake.
We just need to shut it all down.
If only.
Most of our security intuitions are from decades ago. We still think hard-to-use things are more secure. But they're not more secure, they're just hard to use.
Like, how many of you have ever worked in an office where you have to change your password every six months or something? Oh yeah, you all know what happens, right? You write your password on a sticky note and you stick it to your computer.
Bad or naive security, which is, let's be honest, most of the security we see right now, is just putting up walls in front of people where you don't want them to be.
But as it turns out, if we put obstacle courses inside our products, inside our apps, or even our organisations, people just get really good at obstacle courses. Remember these screens? What happens when you come across one? "Ah crap, connection not private. Okay, okay, but I really wanna look at memes, so. That's okay, 'cause I know how to internet. I am a grown adult, and I know all I have to do is click here and proceed anyway."
It's natural for humans to be economical with our physical and mental resources.
Security isn't what we're thinking about all the time. We're thinking about our task at hand.
So, if we want to design secure systems then our primary consideration should be to align our goals with our users' goals. When these goals become misaligned, people will start subverting your security measures, both unintentionally and even intentionally. So if naive security is walls everywhere, then good security, design security, is smartly placed doors.
We can build doors, not walls.
When we focus on safe passage, we can direct the user in such a way so that the path of least resistance matches the path of most security.
And this is what people mean when they say "secure by default".
That's just the trivial path of least resistance; what happens when I do nothing.
And this can be as easy as defaulting to the safest option in a choice, or if you want a physical analogy, it's like those blenders with motors that don't turn on unless the lid is on.
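To make that concrete, here's a minimal sketch of what "secure by default" can look like in code, assuming a hypothetical document-sharing feature. None of these names come from a real API; the point is that doing nothing gets you the most locked-down configuration, and the risky combinations have to be asked for explicitly.

```ts
// A minimal sketch of "secure by default" for a hypothetical sharing feature.

interface ShareOptions {
  visibility: "private" | "team" | "public";
  allowReshare: boolean;
  expiresInDays: number | null; // null = never expires
}

// Doing nothing gets you the most locked-down configuration.
const SAFE_DEFAULTS: ShareOptions = {
  visibility: "private",
  allowReshare: false,
  expiresInDays: 7,
};

function shareDocument(docId: string, overrides: Partial<ShareOptions> = {}): ShareOptions {
  const options: ShareOptions = { ...SAFE_DEFAULTS, ...overrides };

  // Like the blender that won't run without its lid: the risky combination
  // simply doesn't work unless the safeguard is deliberately engaged.
  if (options.visibility === "public" && options.expiresInDays === null) {
    throw new Error(`Public links to ${docId} must have an expiry date.`);
  }
  return options;
}

// The path of least resistance is the secure path:
shareDocument("doc-123");                                             // private, expiring
shareDocument("doc-123", { visibility: "public", expiresInDays: 1 }); // explicit opt-in
```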
And these paths through your app, they can be crafted and designed.
Instead of presenting security as some super special power user feature, you can normalise it and make it a natural part of the process.
For example, do you need someone's phone number for their 2FA? Instead of burying it four levels deep into your settings, you can make it a natural part of the sign-on process. Look at your journey maps.
What are the paths that people take through your product right now? What's the easiest path? What's the most secure? Can you make them the same thing? Can you align your goals with your end users' goals and build doors, not walls?
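As a sketch of what that alignment might look like, here's a hypothetical sign-up flow where 2FA enrolment is just another step on the main path, not a power-user setting four levels deep. The step names and the flow's shape are made up for illustration.

```ts
// A sketch of a sign-up flow where 2FA enrolment is an ordinary step
// on the main path. Step names are hypothetical.

type StepId = "account" | "verify-email" | "enrol-2fa" | "done";

// 2FA sits between account creation and completion, so the easiest
// path through sign-up is also the most secure one.
const nextStep: Record<StepId, StepId | null> = {
  account: "verify-email",
  "verify-email": "enrol-2fa",
  "enrol-2fa": "done",
  done: null,
};

function* signUpSteps(): Generator<StepId> {
  let step: StepId | null = "account";
  while (step !== null) {
    yield step;
    step = nextStep[step];
  }
}

for (const step of signUpSteps()) {
  console.log(step); // account, verify-email, enrol-2fa, done
}
```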
Okay, let's say we wanna do that.
Well how do we do that? If we want to know our end users' goals then we need to know their intent.
Which brings us to the next one.
What is everyone's intent? Having a clear view of what everyone's goals are sets the foundations for how we approach problems later on. This sounds like common sense, but forgetting about intent is exactly what causes this dichotomy, this tension between usability and security, in the first place. This tension happens when we cannot accurately determine user intent.
Take this screen.
Are you trying to access something dangerous over the internet? Or are you on some kind of local or enterprise network connection with a self-signed certificate? Who knows? We forget about intent all the time because it's really hard, and so instead of thinking about what everyone's goals are, what everyone's desired outcomes are, we tend to fall back on our patterns. Like, "Oh hey, I'm a designer, I'm responsible for usability, and so I need everything to be super easy, and security is making things hard, so whatever." But, and to all the designers in the room, I want you to really let this sink in.
It is not our job to make everything easy.
That's not our job.
Okay, so what is our job? Well, our job is to make legitimate actions, the ones that legitimate users want to take, at that time, in that place, easy.
Everything else we can lock down.
So if we want to tighten security without sacrificing usability, it just requires that we get more specific about the user intent.
This is easier said than done of course, but even taking a rough first step can drastically improve the usability and security of your systems.
What do you know about their time and location? Is it usual to be transferring money at 4:00 AM? Are they showing up in Russia after they've just logged in from Australia five minutes ago? What are their roles and routines? What are the usual things you'd expect someone of their role to do? And what is unusual? And what is the bare minimum amount of personal data you can use to infer these? For example, when I came over here from New Zealand, I had to let my bank know that I was gonna be travelling, because they have a behavioural model that says someone's probably not going to make a payment in a country that they're not in.
That's all you need, really simple.
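For a rough idea of the kind of behavioural model that bank example implies, here's a toy "impossible travel" check. The names and the 900 km/h threshold are illustrative assumptions, not a real fraud model: it just flags an event when the travel speed implied by the last two events is faster than a plane.

```ts
// A toy "impossible travel" check, a rough stand-in for a behavioural model.

interface GeoEvent {
  timestampMs: number;
  lat: number; // degrees
  lon: number; // degrees
}

// Great-circle distance in kilometres (haversine formula).
function distanceKm(a: GeoEvent, b: GeoEvent): number {
  const R = 6371; // Earth's mean radius in km
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Flag the event if getting from the last location to this one would require
// travelling faster than a passenger jet. The right response is step-up
// verification (a door), not a silent block (a wall).
function isImpossibleTravel(last: GeoEvent, current: GeoEvent, maxKmh = 900): boolean {
  const hours = (current.timestampMs - last.timestampMs) / 3_600_000;
  if (hours <= 0) return true;
  return distanceKm(last, current) / hours > maxKmh;
}

// Logged in from Sydney, paying from "Moscow" five minutes later: flagged.
const sydney = { timestampMs: 0, lat: -33.87, lon: 151.21 };
const moscow = { timestampMs: 5 * 60_000, lat: 55.76, lon: 37.62 };
console.log(isImpossibleTravel(sydney, moscow)); // true
```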
So let's say we have a better idea of user intent now. How do we communicate to that intent? There's no shortage of talks about how important communication is.
But in this talk I want you to start thinking about communication differently. In fact I want you to start thinking about miscommunication as a human security vulnerability.
This is the stuff you can't solve with software updates. So here's a question: what are you unintentionally miscommunicating? In May last year, Chrome announced that they were going to phase out the green secure padlocks from their URL bar. Why do you reckon that is? Well can anyone tell me what the green secure lock means? You can yell it out if you want.
Yeah, SSL, exactly.
It means that your connection is encrypted and that the domain is who they say they are. Woo, yeah, we're at a tech conference, we know our stuff.
But to your average everyday person, what d'you reckon they think this means? They think everything is safe and secure 'cause it says "secure", right? But that's not necessarily true, and so this is a miscommunication and therefore a human security vulnerability. So let's say I got bored one night and decided to do some crime.
I can go to any domain registrar and register a legit-sounding web address.
I can then go and get a free SSL certificate, thank you very much, and with my super hacking skills in HTML and CSS I could set up a pretty convincing phishing website that people think is secure.
(audience members laugh) Don't do crime, people.
No crimes were committed in the making of this talk. (Serena laughs) I initially did this with ANZ and I thought, hmm, maybe I shouldn't.
Don't wanna be arrested in Australia.
I didn't actually do this, but, this is what phishers do all the time, and it works really really well.
So as it turns out, that green secure lock is a human security vulnerability and that's why it's going away.
The point stands.
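If you want to see for yourself exactly what that padlock does and doesn't prove, here's a small sketch using Node's built-in tls module (the domain is just a placeholder). A valid certificate only tells you that the connection is encrypted and that the server controls that exact domain name; it says nothing about whether the domain deserves your trust.

```ts
// A small Node sketch of what the padlock actually proves.
import * as tls from "node:tls";

// Swap in any domain, including a legit-sounding phishing one with a
// free certificate: these checks would pass just the same.
const host = "example.com";

const socket = tls.connect({ host, port: 443, servername: host }, () => {
  const cert = socket.getPeerCertificate();
  console.log("Encrypted:", socket.encrypted);      // true: traffic is private
  console.log("Issued to:", cert.subject.CN);       // the domain, nothing more
  console.log("Chain trusted:", socket.authorized); // CA-signed, still not "safe"
  socket.end();
});
```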
Do your users know what you're trying to communicate? What is their mental model of what's going on, compared to yours? Which brings us to our final consideration, mental models. So as designers, we have this idea of how our product should work.
And then we work with developers and our product managers and we make the thing, and then that app or product or website gets consumed by the user, and then they develop a mental model of what's going on, cool.
When it comes to security stuff, we tend to get trapped in the technical details of what's going on in the system here.
What are the zero-days? What are the exploits? What are the bits of code that are vulnerable? But this by itself is not what makes the system secure. What makes a system secure is this: it's when user expectations match both the system image and our design intentions.
Think about it.
A man-in-the-middle attack, when someone's listening in on your communication, that's not inherently insecure.
Telephone is just a string of man-in-the-middle attacks, and that's not a security incident, it's just a pointless children's game that tells you which one of your friends is a fricking liar. (audience members laugh) Brian.
(Serena laughs) Man-in-the-middle is bad in most of our cases, because users expect their data to go to an expected party.
And the person in the middle is unexpected. So here's a definition for you.
A system is secure when the user's expectations match our design intentions and the system itself. Therefore, if we want to build secure products, we need to understand our end user expectations and we can do this in one of two ways.
We can seek to understand their model.
This can be as simple as going along to your customer sessions, if your workplace runs them, or observing everyday people, non-tech people, using tech. Ask them about their security expectations: what do they think good security means, and how do we bridge that gap? And you can always try to infer their intent through contextual clues, like the time, the date, and the place.
The second way to do this is to influence their model. And the thing to remember in this case is that whenever we make something, we're teaching people, and whenever people use the stuff we make, they learn. The path of least resistance in our apps often becomes the default way to do something, and these patterns become entrenched in your user base.
So here's another question: how are we already influencing our users' models? Who here uses Apple products? Yeah, do you remember when iTunes would randomly pop up a password sign-in thing for no reason whatsoever, at the most random times? It was annoying, and fine, I guess, but consider this: every time they pop up a little dialogue asking you to sign in, they're training us as their users to blindly enter our passwords whenever we see that dialogue.
So here are some screenshots: the one on the left is from iOS, the one on the right is a phishing attack.
Pixel for pixel, they are exactly the same.
So what are we teaching? Are we teaching people to ignore warnings? Are we teaching people that security is an obstruction? It's all about our users' mental models.
"Hey, I have a thing, is it secure?" Well it's meaningless to ask that without first finding who it's secure for and in what context is it secure.
So what are your users' mental models? Seek to understand their model and speak to that, rather than the detailed ins and outs of what you already know.
Alright, we're at the home stretch.
If you forget everything from this talk, here are some things that I hope you do not forget. The first thing is that cross-pollination between security people and product managers and developers and designers is really rare.
And it's a huge missed opportunity.
And we should all be friends and we should all talk to each other more.
Our jobs are about outcomes, not just what we're supposed to do in our roles. Our job is not to make everything easy.
And at the end of the day we want to align the user goals with our goals and we can do that by aiming to know their intent, by crafting a path of least resistance, by understanding their mental model, and by communicating accurately to that model. Before I run off the stage I have one final anecdote to share.
In one of the old office buildings I used to work in, there was this light switch next to the door, and it had a Post-it note over it that just said, "No!" Can you guess what the first thing I did was? (giggles) Build doors, not walls.
Thanks.
(audience applauding) (upbeat whimsical music)