(upbeat tempo) - Thank you, John and team, for having me here. I feel really lucky and fortunate to be here. I'm looking forward to the rest of the day and also to tomorrow.
So, in the last few months, as I was putting this talk together, I realised that the story I really wanted to tell is actually quite a personal one: why I myself identify far more often, and more truthfully, as a researcher rather than a designer, despite the fact that I've probably spent half my career so far on the design side of things.
And why and how I see research as a vital part of the design toolset, one we rarely call upon and, you know, use to its full power and potential to make positive change. When I talk about research, I don't mean just looking at customer needs; I also mean the power of building evidence, of using it to make good decisions, especially ethical ones.
So, this isn't going to be a talk about how you do research necessarily, but really about the why, the context of the research, and crucially, the when. So, a little bit about me, just so that you have a context of where I come from, my bio.
So, I came from a computer science background, and at the time, in the early '90s, we were still trained to talk in terms of code efficiency.
We were taught to write code with a minimum number of computing cycles because it was just cheaper.
We would look at algorithms in terms of how many, well, in terms of complexities, that's the proper term that we were using, how many function calls, how many nested loops, details like that, whether it was more cost-effective to save things to memory or not.
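The kind of cost accounting described above can be made concrete with a small sketch. This is my own illustrative example, not anything from the talk: two ways of counting duplicate pairs in a list, one with nested loops (roughly n² comparisons) and one that trades a little memory for far fewer cycles (roughly n).

```python
def count_duplicates_quadratic(items):
    """O(n^2): one comparison per pair -- expensive in computing cycles."""
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):  # nested loop: ~n*n steps
            if items[i] == items[j]:
                count += 1
    return count


def count_duplicates_linear(items):
    """O(n): trades memory (a dictionary) for fewer computing cycles."""
    seen = {}
    count = 0
    for item in items:
        # each prior occurrence of this item forms one duplicate pair
        count += seen.get(item, 0)
        seen[item] = seen.get(item, 0) + 1
    return count
```

Both return the same answer; the difference is exactly the memory-versus-cycles trade-off the talk describes, the sort of thing we used to weigh up explicitly.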
And everything had a cost associated with it, and the goal was to make robust systems that not only didn't fall over but also didn't cost the earth: systems that maximised performance and speed of execution while minimising the amount of electricity consumed. So, hello, Bitcoin. (laughs) (laughter) We didn't have the computing power or the limitless storage that we have now, so those things were important, and we live in a very different world today. We have an excess of computing power, and we will have more. We have a lot of data, and we will want more data. So, when it comes to having a strategic impact, it's not always about having more.
It's about having choice and making the choice, making informed decisions, charting a clear path to success, but more importantly, when success doesn't come in the way that we think it's going to, as very much leading to what Sarah's been talking about this morning, whether you have a map for navigating the failure. Failure stories, I've got a couple.
I want to take you back to 2008, which is 10 years ago, and to give you an idea of where we're in time, I was trying to figure this out.
So, Twitter was 2006. Facebook certainly was opened to the public in 2006, so it was only two years after that.
iPhone, the first one, was announced in January just a year before, in 2007, and only released sort of like June time, summertime, later in that year.
So, if you know about the Lean Startup methodology, that book wasn't actually published yet, it was being worked on, and it was actually published three years later, in September 2011.
And I'm sure you know about Ethan Marcotte's article on responsive web design, and that seminal article, that was published May 25th, 2010. So, that was how long 10 years ago was.
So, I'll show you some visuals from that point; they'll be a bit dated and you can laugh at them, it's fine.
At the time, I was living and working in a city called Montreal.
This is a snowy version of it; in the summer it's really hot.
And my life took a different turn because I published a blog post about loving books, paper books, as well as, you know, my first experience trying to read a digital book on the iPhone. That was one of my original photos, back in the days when I actually had time to write blog posts. And to the right was a cosy little Japanese restaurant that apparently is physically still there, though spiritually it has changed into another place, and where my then co-founder and I met for lunch and hatched some plans.
Hugh McGuire is a very brilliant man, and he founded, amongst other things, LibriVox, the crowdsourced audiobooks project. The problem he came to me with, the one he wanted solving, is one that anyone who has written a book will be familiar with.
There's this whole process of writing and editing: when you're done, you know, you do that a few times to get it right, you get someone to help you proofread it, you format it, and you send it on to wherever it's being published.
The platform that we wanted to build at the time, again in 2008, 10 years ago, was meant to tackle the problem in the middle. It felt like a really well-defined problem: there were lots of moving parts, lots of people collaborating on a piece of text before we sent it off, and it's a difficult problem to solve. Do you have any idea what it looked like? Timelines, in 2008, that was a thing.
The feature set was very basic at that point. It was very much about how you could upload chapters, since we were primarily targeting novelists and self-published authors at that point, and also things like, oh, you could post a comment, if you wanted, on a specific bit of text, you could annotate paragraphs.
There are things that you come to expect these days about what you can do, and there was this wild little thing that we decided on a whim to build.
It might have been my fault after a couple of beers. Proofreading is a really painful task, for the most part. Proofreading is that bit where you correct spelling mistakes, you correct punctuation, kind of really dull, and I thought what would be really interesting at the time was to sort of like split text up into singular sentences, show a little bit of context and crowdsource the dang thing.
So, this was bite-sized edits.
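The core mechanic just described, splitting a text into single sentences with a bit of surrounding context so strangers can each proofread one "bite", can be sketched in a few lines. This is my own hypothetical reconstruction, not the actual Bite-Size Edits code; the function name and the naive sentence splitter are assumptions.

```python
import re


def bite_sized_edits(text, context=1):
    """Split text into single sentences, each with neighbouring sentences
    as context, so each 'bite' can be proofread independently."""
    # Naive split on ., ! or ? followed by whitespace; real prose
    # (abbreviations, quotes, ellipses) would need a proper tokenizer.
    sentences = [s.strip()
                 for s in re.split(r'(?<=[.!?])\s+', text.strip())
                 if s.strip()]
    bites = []
    for i, sentence in enumerate(sentences):
        bites.append({
            "context_before": sentences[max(0, i - context):i],
            "sentence": sentence,
            "context_after": sentences[i + 1:i + 1 + context],
        })
    return bites
```

Each bite could then be handed to a different volunteer, which is also why reading the classics this way felt so strange: the sentences arrive out of order.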
So, like all people who cared about user experience, we did some research.
We didn't have a lot of money, I mean, we did win the amount of money that we pitched for.
We found somebody with the contacts in the literary world, especially people who were interested in self-publishing, gave her a guideline of what I was looking for. She was a former journalist, so I trusted her to run interviews.
And so, it was quite informal, and to some degree, we snowballed it because we just didn't have that much money, all the money went to the actual development of the platform.
One of the things we found, interestingly enough, considering how lowbrow in terms of rigour this research was, was that people in traditional publishing totally didn't get what we were doing, absolutely no idea. But people who loved text and who loved tech, and amateur writers, did.
There was something compelling about it. We were testing the platform on Project Gutenberg texts, and there was something really weird about reading the classics in random order.
But instead of thinking, at that time, that we ought to find out more, we just kept churning along, and I think, at the time, I didn't, we didn't, understand what the context meant and how it meant everything.
So, you do the standard things. We compared ourselves with other platforms with similar features, but our main competitors were not those other platforms. Our main competitors were Google Docs and Microsoft Word. Literally about three weeks ago, when I was at a kind of meetup with writers in London, sci-fi writers, I was asking, you know, I guess I'm a researcher, so I'm just gonna ask a random person: so, tell me, what's your writing process, what do you write in? And it's like, oh, I write my draft in Microsoft Word. That is still the case.
And so, our greatest barriers were entrenched habits that the writers in the publishing industry have, and still have.
Just recently, I went through a really painful process as a development editor for a book.
The author wrote it in Markdown.
We imported it into Google Docs, where we commented and worked on it several times, and then I had to manually translate the draft into Microsoft Word templates. Currently, as an author myself, I am struggling with this as well: Dropbox, Microsoft Word, ah.
So, like what Sarah said, to some degree this is not necessarily a massive problem in terms of where it sits among human problems, but we really didn't fully understand the human circumstances.
And this is a question, I know there are a few startups here 'cause I met some of you yesterday, if you are making something that doesn't exist yet, how would people know if they want it? How many of you, I'm sorry, I know there are some startups here, is that a question that you feel you have? Yeah.
Yeah, I know there are, yeah, so.
So, for the longest time, this was what we kept thinking, but then I realised this was the wrong way around to look at the world.
With this kind of question comes a degree of arrogance: the belief that we're working on something new and novel.
And so, I think that was really the wrong way to form the question, and to some degree, in that process, we'd done nearly everything that one considers best practice.
But we were a solution looking for a problem, and ever since that project 10 years ago, I feel like I've been atoning for my sins.
Here's an existential question: is it bad design if nobody wants to use it? Oh.
And when I use the word "design", I just want to remind us that I mean capital D, Design, not just how it looks, but how it functions, how it feels, and how it makes us feel.
So, fast-forward 10 years. Unless you're Google, or Facebook, or Airbnb, or some other design-led organisation, research doesn't have the same influence, and it's quite common to see that where there's design maturity in an organisation, you'll also get research maturity.
So, 10 years later, the company I was working for, that we were working for, wanted and needed a new and different revenue stream. We already had a subscription model, an advertising model, as well as other more minor streams of revenue. I assumed somebody did the maths.
They wanted us to launch a premium content service in record time.
I think our timeline was something like eight months. That was barely time to develop it, let alone do any research on it. But no one actually asked the hard question: if we decided to put a price tag on something, what would that price tag be, and would people see enough value to pay it? Because if the price tag's too high, you're locking away something that no one wants to pay for. And considering some of your revenue is ads, you know, what's the value of that, and what would people find valuable enough that they'd be willing to pay for it? If we locked content away, it was just going to take away some of the ad revenue. So, it was one of those circumstances: how much is this worth, that we should go ahead with it, and who would have made that call? Now, I've thrown quite a few business model words at you in a short amount of time, and I'm gonna say this more than once in this talk: as a UX professional, a design professional, if you want to have strategic impact, you must, you must understand and use the language of business, especially in the context of different organisations, companies and clients.
In our case, the ship had left the harbour, the chicken had flown the coop, pick your metaphor. We did manage to squeeze in some research on the usability of the (mumbles) app and concepts, and, more crucially, we also did a round of research on brand perception, because I wasn't going to let this go.
This could hurt us.
I wanted to know how much it would hurt. And so, we did the round of research on brand trust, trying to understand: what was the thing that people came to us for, why did they trust us, why did they believe what we say, and how did they perceive the value of what we bring them? What was our value proposition anyway? How have we fulfilled that promise, and what did our customers believe we were promising them? Were we fulfilling that well at all? And it was one of the rare occasions, even though it was an impossible project, where we collaborated well across all the research disciplines.
The data science analytics teams collaborated with design research and market research. It was such a rare thing.
But whatever work we were able to do, whatever insights we were able to come up with, it only served to mitigate the potential disaster. The problem is, and it's nearly always the root cause: so many times, when people like you and me get involved in a project, the thing we're making has already been decided for us.
And it's usually someone's top-down idea, or there are too many competing voices at a senior level, and someone won a political battle and now has to deliver in record time. The problem is not necessarily about whether we might have been building things right, or building the right thing.
The cause of the problem tends to be the hunger for somebody to be a genius. (laughter)
Yeah, I don't know, I'm hearing laughter, so I'm guessing that you recognise this to some degree. The genius syndrome, I sometimes call it, and so, I want to clarify.
Sometimes it's a person.
Alright, sometimes it might be one lone person, but sometimes it's also a person within a particular context.
It could be a role, or it could even be a team. It could be an entire culture within an organisation, but we need to recognise it for what it is. And without early research to truly understand the potential context of use, any idea seems like a brilliant idea until you execute and launch it, by which time it's too late to tell whether the idea was a bad one or the execution wasn't quite right.
And when you don't have the reflex to do the research the moment the idea pops into existence, you are setting yourself up for high risk. Remember what I said earlier in this talk: the when is crucial when you bring evidence to the table. The challenge is, by the time someone is convinced the idea will work, especially someone with power, someone with something to prove, our usual habit is to use research, or use design, to validate the idea rather than to question it. And it's not our usual habit to think about who our end-users really are, because we're only targeting the people we're trying to make money off, especially given the examples that Sarah gave this morning. We want to be very careful about choosing the people who see and test the stuff we build: people who use us, and also people who don't want to use us.
I have so many examples of things like this, built in just 20 years of working on the Web. Things I have made, I'm guilty.
Things my teams have made, we're guilty.
Stories, you know, by other people like us. And this is one of the things that I've realised most recently.
What we have done is given ourselves permission to launch things with limited evidence that they were even going to work, or that they would work only in a very narrowly defined context, or, as we've seen in some examples from Sarah this morning, things that don't work at all and make no impact, despite what passed for evidence, probably because it was done down the hall, or because we decided to just go out into the street and ask a random stranger whether this was cool or not cool. For someone like me, who started out my career counting the number of times a loop runs, this just seems like so much waste.
Think about all the effort that goes into building something, all the emotional investment that you make, all the time or the money, like now.
How many hours does it take to build a thing that you take down in three hours? Building the wrong thing is incredibly wasteful.
So, for many long years, we had terrible processes. We still have them, some of them.
We had, and still have, waterfall processes where each step compounds the risk: if a design spec wasn't correct or quite right at the beginning, it would likely have consequences later on, at a much greater cost.
And so, we decided that somehow, if we leaned back on, oh, sorry, no pun intended, (laughter) Lean and Agile processes, we would, in theory, in theory, be able to course-correct.
And so, we are still building things with minimal evidence, and it's not just that we're launching things that are imperfect: we're now building them faster.
We're obsessed with building things faster. "Build faster, fail faster", "Move fast and break things": those kinds of phrases are what we have as mantras in the industry. And in my professional life, just in the last month, two clients, separately, on different things, came to me and said, you know, we need to get this out the door because we expect to make revenue on it, before the product is ready.
And this is what our Lean and Agile culture has been translated to, and "Build faster, fail faster", make some revenue as early as possible.
Are we guilty of having convinced people that we could do this? I wonder about that sometimes.
So now we're seduced by the idea that you can start making money with something that is less than finished. In one of these stories, someone got in touch with me recently with, again, a product they had to launch, and they'd built something on the flimsiest of market research evidence: they did one focus group.
They have a compelling story but they're nowhere near being able to deliver the promise.
The thing is, I wonder if they actually stopped and thought about it in the first place, because they were about to enter a very fierce, competitive space with the thing they were building, and there might have been something different, a different opportunity that they missed, another way for them to fulfil the promise to their customers and clients.
They could probably actually make sure they have a sustainable business model rather than just building this thing.
Making something as good as you can make it is somehow not quite so important now.
Because people are actually more interested in being seen building something, releasing products with imperfections is now our dominant mantra.
And there are consequences; that's what we've been talking about all day so far. There is something else here too: we are seduced by the novelty of building our own. Everyone wants to be a hero tech entrepreneur, and I think the problem is that everyone, secretly, inside, wants to be Steve Jobs.
How did we get here? Whenever I think about this and try to cheer myself up in some way, I'm reminded of a play by the British playwright Tom Stoppard.
So, I'm assuming that all of you know Shakespeare, yes? And that all of you know Hamlet? Maybe not, like, word for word, but "To be, or not to be? That is the question," blah-blah.
So, if you remember your literature class from school, in the story of Hamlet there are two characters who show up at the beginning; they basically appear in a couple of scenes and then get killed off, betrayed by Hamlet. He had them hanged.
And so, Tom Stoppard wrote a play showing us what might actually have happened to these two characters.
In the film version of the play, which I'm gonna show you, not quite the last scene but nearly, Guildenstern has a line that stuck with me. The characters' names are Rosencrantz and Guildenstern. Blah, three syllables, it's really hard.
So, I'm gonna quickly show you this scene now, and this is the point where they're about to be hanged. (bell tolling) (birds chirping) - That's how I feel. (laughs) (loud laughter) "There must have been a moment at the beginning where we could have said no. But somehow, we missed it. Well, we'll know better next time." I'm gonna go further before I come back, because, given the spirit of what's happening in the world today, I feel like this is what happens when we don't pay attention to consequences. There are two things I want to bring up now, or maybe three. I can't remember, I lost count.
Things have gone very wrong lately with the technology that we have built, and there are some things that niggle at me. Again, maybe it's because I'm, like I said, I'm atoning for my sins.
If we had taken the time to understand the cultural context that we function in, then maybe, just maybe we might have been able to avoid some of the terrible stuff that's happened more recently.
This is an interview with Ezra Klein.
If you're a podcast addict, you might've come across him before.
He's one of the founders of Vox, a sort of centre-left media group, Vox Media, in the States. I don't even know, I think they lean further left than that.
And this conversation is with Jaron Lanier, a pioneer of virtual reality and someone who's kind of much like a philosopher in computing.
- I really appreciate that particular perspective. It helps me think about things differently, and it also helps me think that, in some cases, we need to start thinking about our craft and how we address this issue of using business models. You may also have heard of Tristan Harris.
He's one of the strongest proponents of ethics in Silicon Valley right now, and he discusses how we're under siege from persuasive design patterns.
I'm not gonna play you anything here but I have a quote, a very brief one, from his interview with Wired magazine.
So, he says, "Advertisements themselves are not the problem. The problem is the advertising model. The unbounded desire for more of your time. More of your time means more money for me if I'm Facebook or YouTube or Twitter. That is a perverse relationship." Now, I rarely ever do this, but maybe because my other half is French, we often have these thesis-antithesis-type arguments: for anything you put forward, you try to find something opposite to what you believe in.
And I think that, as a researcher, that's potentially also something that I live, the kinds of contradictions that we have.
This is too good to pass up. It's very rare that I quote from the same source twice in one talk, but a couple of weeks ago, Ezra Klein, the guy we heard earlier interviewing Jaron Lanier, also talked to Mark Zuckerberg, and for once I put my prejudices on hold, and I just have to concede that he might actually have a point.
So, I'm gonna play this little clip here so you can give it a fair hearing.
- I thought that was an interesting perspective just to add to the mix.
But the things that Sarah spoke about this morning, even with the best intentions at heart, without taking into account how people might abuse that path and system in what you're designing without looking at these stress cases, and the context, the very, very complex context of use, we're gearing our asses for disasters.
But, you know, here's the thing: we have always done this, nearly always, tried stuff out on people before the technology is ready. History's gonna provide some context.
Now, I know that I'm throwing a lot of media at you, but I've taken you back to 2008, 1980s, and now we're gonna take you back to the Edwardian period, sort of like the 1900s in Britain, if you like. Cast your mind back to the early days of electricity, in the early 1900s.
So, you can imagine, with what we've got now, you know, we've got like three-pronged plugs here in weird angles so that you can't get them around the wrong way.
In the early 1900s, you only had naked, bare cables. One touch and you could die.
They hadn't actually figured out how to cover the wires at that point in time, and they were trying all kinds of materials. Originally they tried it with paper wrapped around the wire, and lead; paper, a fantastic fire accelerant. (laughter)
They wrapped it in cloth, which burns up really quickly. They wrapped it in wood, which also burns quickly, basically anything they thought might stop the electricity from getting through.
And earthing, that thing where you divert the current into the earth in order to make sure that, you know, it doesn't go into your body, that idea didn't exist yet.
I don't think we had the science for it.
And so, if you have wires running around your house, or if you have a small child, it's potentially tragic.
So, I'm going to show you just a couple of minutes of a BBC documentary presented by Dr. Suzannah Lipscomb, and this is a particularly interesting example for me, as somebody who used to build stuff and who still builds stuff.
You know, you think you're trying to do good for society while making a bit of money on the side, giving people access to a clean, safer source of energy, because, at the time, the other source of energy was gas, and that was, you know, dirty, muggy, yuck. And this fascinates me because it's a classic case of your users using things differently from how you expected them to.
- We used to launch things without testing them. Well, let's not do that.
"There must have been a moment at the beginning where we could have said no. But somehow, we missed it. Well, we'll know better next time." And Peter Morville has been writing about this; I know he's working, or has been working, on a book on planning, the idea that, you know, "If engineers fail to plan, bridges collapse and people die," and, "We are now learning the hard way that the consequences of bad software are no less dire." And, for me, I go one step further.
How do you know what you need to plan for? We live in a pretty complex world in terms of how people behave with the things we build, so I would go a little further and say that in order to plan, you need evidence of how people are using things. It means going a little slower than you'd want. And the question is: how can we, as designers, researchers, UX professionals, get into that conversation early enough to stop the fires, the train wrecks, from happening? In one of the stories I told you earlier, I had three levels of management who had no idea of the types of evidence that user research can bring to the table.
But, the thing is, the timing of the research is key. It's like a muscle reflex, right? You need to have the reflex, right up at the top tier of management, to say: we need to find out more.
But how do we do that, how do we kill the genius? Part of me thinks this is by changing the culture of evidence gathering, and the type of evidence that we gather, I will go into that in far more detail towards the end, so that we can make the right decisions to focus on in the beginning.
And in that story that I told you a little earlier, you know, where we worked together, and even though it was really about mitigating disaster, I did manage, surprisingly, to get the attention of the core leadership, and they could have potentially seen what else we might have done.
And sometimes, we might have to do a thing wrong before we do a thing right, even though we just want to try and mitigate the damage, even if it's the wrong thing.
So, I know that most of you will probably be familiar with the Double Diamond, and this particular version, there are so many versions of it, is the one formalised by the Design Council in the UK in 2005. I like to talk about understanding the problem space, designing the right thing, before the solution space, designing the thing right.
And, so often, we start in the second part of the double-diamond and assume we know enough about all these things leading up to that point.
And most stories we can tell in our industry, we always build first rather than discover first. And so, when you adopt things like Lean and Agile as a process, you want to be mindful of the culture of where these frameworks come from.
History is important.
Agile comes from the engineering culture.
Engineers are wonderful people, I was one.
I hope I was wonderful.
But they are coming from the solution side of the diamond, and the idea would be: build the thing first and try it, right? But any cognitive psychologist will tell you that by the time you've built something, however small and lightweight it is, you're already going to be attached to the idea, and imagine how ideas grow, like a tree.
Unless you have set out, very, very early on, what different options or paths you could have taken, then if you only took one, you wouldn't really know where to go back to to try something different. And so, even the most rational of us can be seduced by the ownership of an idea, and so we need to get better at positioning evidence gathering at the source.

Nathan Waterhouse is the co-founder of OpenIDEO, and he's from IDEO, and this is something he said that struck me in one of the talks he gave at a conference this year: "Don't try to be persuasive, let your users do the talking." So, the evidence is not coming from you, as an expert, but from what the users might say, and that's why user evidence is far more powerful.

My former colleague Greg Winston and I had an interesting time establishing the research discipline at MailChimp. Now, you might've heard of Aarron Walter, who was the Director of UX there; now he's the VP of Design Education at InVisionApp, and he lived and breathed research.
You know, he would pick up the phone and talk to clients, and talk to customers directly himself.
So, my job, and Greg's job, was made a lot easier. Our methodology wasn't always strong, but we had this notion of accumulating wisdom about our customers and about our users. And I find it interesting because, for all we talk about data, and for all we talk about being rigorous and empirical, it's quite an emotional approach to research. You're saying: okay, I'm trying to know my customer as well as I would know a friend; I'd like to get to know what they like, what they don't like, even what they would and wouldn't want served for dinner. That kind of level of empathy is what we've always aimed for.
But research has always played a strong part there, primarily, I would say, because Aarron was championing it, but also because Greg and I, very early on, established that we wanted to be where the action was. We wanted to make sure that whatever we found would be actionable and have, you know, the right kind of consequences.
And so, we would constantly check in with our C-level stakeholders: what's on your mind this month, what are you thinking of? What do you think you'd want to do in three months, what's on your mind right now? That's what we would do, year in, year out, trying to figure out where to angle the research, so that the right question drives the research and not the other way around.
This is Stanley Wood's quote.
He's now a Design Director at Spotify, and, interestingly, there's an interview he did with Aarron recently. "This was an important lesson for me. I learned to sell the problem before the solution to activate change. I'd always sold change by selling solutions and then finding myself stuck debating and defending why it was so much better than what existed... The curse of being a designer is you often jump into problem-solving mode. I now try my best to always ensure there's a demand for the problem before supplying any solutions." So, design research can provide a different type of evidence to traditional market research and data science: evidence rich in context and nuance, the kind of stuff that has more in common with anthropology and social science than what we have traditionally done in labs. Anyway, here's an interesting thing.
One of the clients, and this is a semi-success story, a then client, came to us on a project where they said: we built this thing; can you please test it to see if people like it and would use it? So, again, the solution was predefined, but we did something quite different.
So, let me give you a little bit of context. Imagine something like Netflix, where you sign up, you put in your credit card information, and then, a month later, it starts charging you, that kinda thing. This particular product already had a subscription model, but they were interested in short-circuiting it. The idea isn't new, but the way they were doing it is, so sadly I can't give too much away, but just to give you an idea.
So, they wanted to give people a taste of what they would be paying for far earlier in the process, and then see what happened if they didn't pay. You see, it was not an easy value proposition to understand.
We came up with a very direct way of testing the boundaries. So, my co-researcher and I basically lugged cameras, recorders, even a portable printer, around London. We tested the words on people to see if they understood what they were getting.
And between sessions, we redesigned the thing; we were basically testing the test all the time, seeing what people responded to and continually refining our assumptions as we went. And it was interesting, because the first set of assumptions died in the first hour, so we were making new ones and trying to test them really, really quickly, over the course of two weeks.
We tested with people in the street, but we also recruited people who didn't know the brand well, so that you get a range of behaviour, not just the assumption that everyone's gonna love what you're gonna make.
And having found which things didn't work, we'd test them with both sets of people, so that there was some consistency between the two groups.
I'm simplifying them a little bit.
There are actually a lot of nuances in these groups. One of the side effects of design research is that you're often the bearer of bad news.
So, instead of just doing that, what we actually did was say: okay, we know that's not going to work, but this is what might. And so, by reframing how we approached the research, we were able to bring the questions way forward and make our client think about the space they should be playing in.
There is something that gave me a lot of hope. People of all ages, well, not all ages, admittedly we only spoke to over-18s, would only allow themselves to be scammed one time. Anyone who didn't know the brand was really sceptical of being offered something for free.
And, interestingly, as Suka Beck said just now, most people don't always want something for free, and we need to find out where those boundaries are. The majority of people we spoke to, and showed a prototype to, were constantly checking for assurances that we weren't taking their credit card information anywhere in the process.
That gives me hope, you know? And this is why we need to continue to do primary research, and understand what value and trust and brand means to people.
We should constantly check in with each other as human beings.
We are adaptable.
What sucks today, we'll find a way around it, we'll get sceptical, and that might be the healthy thing to do.
Our behaviours constantly change because our barometers change.
There are things that we do differently.
Forget throwaway usability tests.
Instead, redesign them so that you gather information that has longevity.
Every piece of information that you get from your research is potentially a Trojan horse that you can use to funnel the questions back to the beginning, where they matter.
And the second thing is something that I've said in all my talks so far.
Translate your findings into business insights for the people who need to listen at that level. Gather the evidence.
When you build something, does it erode the brand, or does it give users more trust in you? Gather the evidence.
Does it fit into the shape of the promise that you have made to the customer, as a business, and does it fit the shape of what they believe the promise is? Gather the evidence.
What are the perceived notions of value and cost? It's these types of evidence that, if you can embed them in every piece of research that you do, will give you credibility and allow you to talk to the people who are making the decisions.
So, don't be afraid to talk about money, and be able to articulate these business models. If you're in government or a charity, and you don't want to talk about profit, use the vocabulary of sustainability.
Those are equally valid things, but we need to be able to talk about these and feel no shame.
Advertising: does it need to be this shape? Can it be different? Subscriptions: there are other forms of payment model, other ways of keeping the money coming in and keeping the lights on.
Can we talk about them differently? How do people react to them? What do they believe, relative to the offer and the promise you're making them? Those are things you're only going to find out if you talk to a range of people.
Other things are important too, and far more fundamental to how we run our processes.
Scaling up research in your team.
This is an article by Chris Avore.
It's probably easiest to look up; it's called Achieve More Research, More Frequently, and he's got some really good tips on how to involve more people in your team. Plan to be more iterative, encourage cross-team participation; these are things we know, but he's given some really good case studies around them. Operationalise.
Unless you have done research at scale, it's difficult to understand how much work we do in operations.
All the recruiting, all the scheduling, the setting of the lab, the tech.
And we need to be much more efficient at this if we want to be impactful with the things we find, and we probably also need to be far more rigorous, to give us the level of evidence we need to convince stakeholders. So, Steve Portigal, who wrote the book Interviewing Users, and many things besides, considers efficiency a Trojan horse: "By building infrastructure, we lower barriers "to entry and risk, "so people can learn from experiments and be persuaded "by examples of success." ResearchOps is a thing.
So, there is a new Slack community that just started a month ago, and there will be some global conversations that we're going to facilitate, so you'll have a good framework of what Research Operations could mean.
So, there'll be a starting point, basically, of what we think we want to do as a discipline, to advance research further in organisations. Just two more points.
We know about continuous learning, and I know that many in-house teams do it, but we still have a lot of throwaway research. Research curation is now a thing, and I know there are some teams trying to figure out how we do this better, how we keep insights better, and I've actually stopped using the word "research" as often.
I use the words, "gathering evidence." I use words like, "improving what we know about our customers," all the time. Once you're thinking that way, you've embrace the complexity of the landscape, and now you're better able to communicate the outcome, the longer term outcome, of what you're trying to achieve as user experience professionals.
Users are complex; you can't rely on personas. We felt that, at three million users, personas just didn't make sense anymore.
You can't rely on traditional demographics. These days, I sometimes doubt that we can even rely on purely behavioural characteristics, or purely on jobs to be done.
I think it's a mix of those things.
Oh, sorry. (laughs) (loud laughter) So, when I say "target," I mean two things: your audience, as well as the people you need to speak to. There may be a genius in your life that you need to speak to.
Who calls the shots? Who are the people you need to convince? You're gonna forget me by Monday, because it's only Thursday morning, so when you go back to your world of work on Monday morning, make a point of having a think: who is it that makes the decisions, and how do you need to convince them? Find ways to speak to them in their own language, and be mindful of the evidence that you can bring to make the case for complex users and the lives that we live.
Thank you very much.
(loud applause) (upbeat tempo)