Designing Secure Experiences

When a user opens Facebook, they want to post a picture. When they log into their bank, they want to see their balance. For our users, security is not front of mind. If it gets in their way, they're likely to look for a shortcut or skip it entirely. And yet, we consistently push security decisions to users, from passwords to security warnings, usually resulting in an experience that's neither usable nor secure.

In this talk (written in partnership with Guy Podjarny), Rachel will show examples that aspire to solve the problem, share best practices, and discuss how to provide a secure experience that doesn’t alienate users.


There is tension on the web between security and usability. Everyone agrees security is more important now than ever, and we are regularly exposed to instances where poor security has led to disastrous results. Yet we consistently leave responsibility for security decisions to the user, letting them choose weak passwords, store them unsafely and expose them to third parties.


At the same time, users simply want to access content on the web, whether it's posting a picture to social media or checking a bank balance, without endless obstacles. They are aware of the need for security, but they don't want it to get in their way.


With a bit of insight, empathy and technical knowhow, it should be possible to provide a secure experience that doesn’t alienate users.


We think of security attacks as complex and sophisticated, but in reality breaches are often the result of tricking users into supplying their passwords, or of guessing insecure ones.


Passwords are hard. People forget them, write them down, make them easy to crack. While the requirements for any single account (a unique username and a unique, memorable password that meets policy requirements) don't seem that hard, the sheer number of accounts requiring passwords complicates things.


In his web comic xkcd, Randall Munroe observed that we have trained people to use passwords that are hard for humans to remember but easy for computers to guess. He's right, and we need to address that.
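Munroe's point can be made concrete with a rough entropy calculation. This is a sketch, not a prescription: the 94-character pool and 2048-word list are assumptions (printable ASCII, and a diceware-style wordlist, respectively), and it models only truly random choices – human-chosen "complex" passwords tend to follow patterns and carry far fewer bits than their character count suggests.

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for `length` symbols drawn uniformly at random
    from a pool of `pool_size` possibilities."""
    return length * math.log2(pool_size)

# An 8-character password drawn from ~94 printable ASCII characters.
# Only this strong if genuinely random, which human-chosen ones rarely are.
complex_password = entropy_bits(94, 8)   # ~52 bits

# A passphrase of four words picked at random from a 2048-word list:
# comparable strength, and far easier for a human to remember.
passphrase = entropy_bits(2048, 4)       # 44 bits

print(f"random 8-char password: {complex_password:.0f} bits")
print(f"four random words:      {passphrase:.0f} bits")
```

The passphrase trades a few bits of theoretical strength for memorability, which is exactly the trade-off Munroe argues we should be making.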

Password managers are good – although not flawless – but we can’t control whether our users have them and use them. Biometric authentication is good, but not always appropriate and it’s not perfect, either. Tokens are an option but can be complicated to use and anything that complicates things for the user is less likely to succeed.


Be flexible. An example of a different approach to passwords is the one used by Medium. When you log in, you enter your email address. Medium then sends you an email containing a link that takes you back to the site, logged in. They are relying on the security of your email account to identify you. For that reason, it's advisable to protect that email account with two-factor authentication – don't be TOO flexible.
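The core of such a "magic link" flow can be sketched in a few lines. This is a minimal illustration, not Medium's actual implementation: the names (`request_login`, `redeem`, `TOKEN_TTL`) and the in-memory store are assumptions, and a real system would persist tokens server-side and send the link by email.

```python
import secrets
import time

TOKEN_TTL = 15 * 60   # hypothetical policy: links expire after 15 minutes
_pending = {}         # token -> (email, issued_at); use a real store in production

def request_login(email: str) -> str:
    """Issue a single-use token and return the link to email to the user."""
    token = secrets.token_urlsafe(32)   # cryptographically unguessable
    _pending[token] = (email, time.time())
    return f"https://example.com/login?token={token}"

def redeem(token: str):
    """Exchange the emailed token for the account's email address.
    Valid once and only within TOKEN_TTL; returns None otherwise."""
    entry = _pending.pop(token, None)   # pop makes the token single-use
    if entry is None:
        return None
    email, issued = entry
    if time.time() - issued > TOKEN_TTL:
        return None
    return email  # the caller would now create a session for this account
```

Note the two properties doing the security work: the token is unguessable (`secrets.token_urlsafe`), and it is consumed on first use, so a leaked or replayed link is useless.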


Be timely and meaningful. We need to present clear warnings to users about security risks. An icon in the address bar warning about an insecure connection will not convey a sense of risk, even if the user notices it. Their attention is focused on what they want to do, so your warning needs to redirect that attention at the right time and with the right information.


Offer an opinion. Chrome's original interstitial gave users a verbose, dense warning about security risks, immediately followed by a link saying "proceed anyway". 63% of people proceeded anyway. Google redesigned it to emphasise a big, blue button saying "Back to safety", with a much less prominent "Advanced" link that had to be expanded before the user could continue. The rate of people continuing fell to 38% – still too many, but an indication that the wording and presentation of warnings make a difference to user behaviour.


Know your audience. The Monkey Business Illusion illustrates how people can miss things that are right in front of them, unless they are told specifically what to watch out for. The reality is that users are bad at noticing things, even when it puts them at risk. Make it easy for them.

While we might focus on security policy for our own products, we need to remember that what we demand of our users – or let them do – is part of their overall online ecosystem, and how well we fit into that ecosystem will affect how successful their security practices are.


Security policies and practices have to take into account the limitations of human memory, attention span, cognitive load and the context of learned behaviour.


We need to be aware that even if we convince a user to set a secure password, it is entirely possible they’ll use that password for other products and services, seriously undermining its effectiveness.


It's good to use a message like "This connection is not secure", but perhaps you should add "so don't use it to transmit information you want to keep secure". The implication might seem obvious to you, with your insight, but it is far less so to a user.