Putting Research in its Place

(playful music) - Thank you very much for showing up.

I hope you all enjoyed lunch.

I thought it was delicious.

We've had our design week at Atlassian this week from Monday through to Thursday.

The theme of the week was, "Open by Design," and so a lot of people, when they hopped up to do their talk, shared stuff.

And some people...

There was a little recurring theme of people sharing their next tattoos.

So, I was like, "I can be in on this," because actually my next tattoo fits very well with the theme of what I wanna talk about.

This is it.

I was really very close to...

Don't steal it by the way.

I don't wanna come back here and have everybody with the same tattoo.

No.

I was really very tempted to try to get it done in time for this week, but I had a big problem, which was this typography.

I cannot have that on my skin, but I also can't decide what I do want to have, right.

How do you choose something that's gonna be like a typeface that you're gonna be stuck with for the rest of your life, that you're gonna love in 20 years' time? I have much more of a problem with that than I do with the actual content.

If anybody has expertise in tattoo typography, please come and see me afterwards because I really could do with a consultation. Hopefully next year I'll have it for real.

It's gonna go just there.

A very functional tattoo.

My other one is this one.

It's a little swan, and it is supposed to remind me to try to make my job look like it's easy, which apparently I am very crap at doing.

I make my job look hard.

There's a good reason for that, but it is what it is.

Anyway.

The reason that I want to get that tattoo is because I firmly believe that perspective is the user researcher's superpower.

Perspective helps us to understand how we see what we see, and why we see what we see, and really importantly helps us to understand what's in the frame and what might be outside of the frame as well. I think those are really important things to be thinking about when we're thinking about how to do research and how much to trust research as well.

To understand perspective, I think we need to talk a little about truth. To tell you my story of how I came to understand truth, we need to go back in time to when the internet looked like this.

Does anybody else, except for John, remember when the internet looked like this? Yay, my people! So, you would remember, right, when you had a very long e-mail address that was mostly made up of numbers, and you didn't know very many other people who had an e-mail address, so when you got an e-mail from a real person, it was super exciting, but it was also pre-spam, which was cool too.

We mainly used the internet...

When I first started using the internet at university (you can date me right now), we mainly used it to check whether or not the book that we wanted to take out from the library had been taken out by somebody else, because we were lazy, and the library was a few blocks away, and there wasn't very much information on the internet at that time.

Believe it or not.

But there were super cool things about being around and studying New Media back in the olden days, as my kids call it.

We used to make things in HyperCard, which was awesome.

I still miss HyperCard.

My professor of New Media, I remember, one day early in our New Media 101 class, saying, "Yes, there is the internet. I am not sure how much potential it holds, so we will be focusing on CD-ROMs." In our theory subjects though, we were continually slammed over the head with Postmodernism.

I don't know with the...

I went to UTS, the University of Technology Sydney, and they were deep in the grip of an obsession with Postmodernism that I think they probably still have.

Every single subject had this postmodern slant on it, which was really very confusing to me at first and then very influential over time.

Postmodernism is your second philosophy lesson for the day, so I hope you're into this.

The postmodernists, if you don't know, believed that God died.

I thought that was quite interesting that they believed that he was alive in the first place, but anyway.

They believed that God died, and with God dying we lost our source of objective truth, so apparently in the olden days God would tell us what the truth was, and we'd go, "Thank you, God. We'll now carry on with that truth in mind." When God died, theoretically I suppose, it meant then that we had to construct our own truths.

We all had to make up our own truth.

They were very conscious of the fact that where we came from in our life history, where we were born, what kind of privilege we had, had a big impact on what truth looked like for us.

People's context really contributed to what their reality was.

Of course, there were still some facts, this was pre-Trump.

We would agree quite happily that red could fit this definition.

But at the same time, we were really conscious that my version of red and your version of red are not necessarily the same.

I am not talking about color-blindness here at all. I'm just saying that I have no proof at all that when I see red, and when you see red, we are seeing the same thing.

Most of the time, unless you take philosophy too far, which I think this guy is probably doing, this is fine, doesn't really impact us, day-to-day, that all of our experiences of red could be slightly different.

That's completely fine.

But sometimes the perspective that we have on the world can have a really profound impact.

I'll go even further back in time to a great advertisement from the 1980s that The Guardian newspaper did.

So, look at this.

- [Narrator] An event seen from one point of view gives one impression.

Seen from another point of view, it gives quite a different impression.

But it's only when you get the whole picture, you can fully understand what's going on.

- That's an important message, isn't it? Skinheads can be kind.

Another kind of visual metaphor that I use to try and say the same thing but in a maybe slightly more humorous way is this one.

Alright.

What kind of scary sea creature do you think this is? Aw, look.

It's just a duck! How sweet.

I know.

Try and forget this now.

I dare you.

This is what I talk about when I talk about research all the time.

It is really important that we make sure that we know whether the thing that we're researching is the shadow or the duck, because it's actually incredibly easy to accidentally think that you're focusing on the object when you're not; actually, you're just focusing on the effect rather than the thing that's causing it.

Nietzsche has a much more sophisticated way of expressing this.

He doesn't require ducks, and he says, "There are no facts, there are only interpretations." I think he would absolutely be loving American politics right now if he were around.

Because I think that this is a really important message for us right now if we think about politics, if we think about social media, if we think about the media in general.

Being able to understand that there are multiple truths in play, that other people have a view of the world that is very true to them, and understanding why they might see the world that way, and how we feel about that is actually a really important thing if we wanna be able to get on with each other for a start.

But it's not cognitively simple to be able to do that.

There's a real process that we need to go through, that our brains need to go through, in order to get us to the point that we can hold these two things in isolation from each other and have a personal judgement around them as well. I quite like this framework that William Perry came up with decades ago.

He came up with this studying Harvard Undergraduates.

I quite like using it on my kids, but it's a four-stage process.

As we go through it really quickly, I want you to have a think about where do you think you are on this scheme.

It's no judgement at all if you're not all the way to the end.

If you have got kids, play along, where do you think your kids are, we can compare it later.

The other thing that's quite interesting is where do you think your company is.

Where is your organisation on this as well? How mature are they in terms of being able to have a position on truth? How sophisticated is that? We start at the beginning with Dualism.

This is kind of familiar.

God said, "This is the answer," and we said, "Yes." Teacher said, "This is the answer," and we said, "Yes." Parent said, "This is the answer," and we said, "Yes." Sometimes I wonder whether the kids today completely skip this.

They have to learn media literacy as they come out of the womb almost these days because there are so many different truths around. But this is the starting point.

I certainly remember being this person.

We move into Multiplicity.

Here we start to realise that actually people would give us different answers to the same problem, and sometimes there are no answers to the same problem, and that we can have an opinion as to which one of those two answers we wanna accept as being true.

Then it gets a little bit more complex again, and then we start to think, "Okay, so these two people have got two different reasons to give me an answer to this question. Why? Why are their answers different? What about them is getting them to think that this might be the answer? What about them is trying to get me to think that it might be the answer as well?" Both of those contexts are equally important.

Understanding that it's up to us to actually scrutinise these sources of potential truth is an important step as well.

Then we get to the fourth step, hopefully.

There we go.

Commitment.

It's a terrible name.

I think it makes a lot of people scared about actually getting to the fourth step, but anyway.

In the Commitment phase the thing that's really interesting here is that we then start to take a personal responsibility for what truth is.

We do that by applying values.

We go, "What are my values? What do I think is important? How do I use those values to try to privilege some truths over others?" The other thing that I really like as well is this last little statement.

Recognition that acquisition of knowledge is an ongoing activity.

So, truth doesn't stay still.

The information that's available to us changes. We change as well.

We have to keep going back and re-interrogating what we think is true as well. I don't know where on this scale you feel, your kids feel, your companies feel.

I think there's a fair chance that most of us need to always be working at getting as far up the scale as we possibly can be, but let's for now just accept that there can be different kinds of truths, and some of them maybe are more meaningful, more important, more valid than others.

What on earth does this have to do with you? Why should you care about this? How does this help us get better at user research or how does this help us understand our users better or design better products? So what.

It's because of problem statements.

It's because of how we frame the problem.

This is the beginning of the design process, right.

Trying to understand what the problem is.

Being really conscious of why we choose to frame the problem one way and not another way is very important.

It's important, and we hear all the time, "Don't fall in love with a solution, fall in love with a problem." I think we need to take it that step further and go, "How do we make sure we're falling in love with the right problem?" Because it's nowhere near as simple as sometimes it seems.

How do we make sure that we do have the full picture and that we are focusing on the duck and not on the shadow? Let me tell you a story about a form.

Who recognises this form? Anyone? A Medicare enrolment form.

How many of you have had a child? I had the privilege of working with a team who were working on this form.

In fact, two of them are here.

Hooray! This was the team that had service designers, and developers, and interaction designers, and lawyers, and technical architects, and security people, it was a wonderful multidisciplinary team who had come in.

The problem was that this was not a digital service.

Well, it's a PDF, and you can download it from the website, so according to some criteria for digital services in government, it could possibly qualify, but this particular department is raising the bar on what a digital service is, and a downloadable PDF does not cut it.

At the point that this team came in, if you wanted to enrol in Medicare, you had to either get a paper copy of this or download and print it, fill it in, get all of your information, take it into the Centrelink office, and hope that you brought all the right information. In an alarming number of cases people haven't got all the information, so they go back home, get the thing they haven't got, and go back to Centrelink again, and everyone loves going to Centrelink offices, don't they? Wait in the queue, see the person, they send it off, you go back home and wait for about six weeks, I think it was, and then the card would come, and then you would be able to access health care at a reasonable price.

Lovely.

Team comes in, and the solution is obvious from day one.

Digitise the form.

Yup.

Brilliant.

Except for one wrinkle, which is that we were working into the Digital Service Standard.

My slides just keep going showing you all of my things.

We were working into the Digital Service Standard, and that meant that we were interested in some design principles, and our number one design principle was this one.

Starting with needs.

It's got this asterisk that says, "User needs and not government needs." I had to add this slide in earlier because I need to defend this term.

I would use that referencing this great article that Russell Davies wrote some time ago called, "Consumers, users, people, mammals," where he says, "If instead of saying users you say mammals, does that make you feel better? Does it make the whole thing work better? No." And he makes this other lovely statement, which is, "Really, if this is the level of discussion you're having, you probably have got bigger problems." But also he says that it's actually a good word because it defines our responsibility, which is actually to make sure that people can use the thing.

I don't know what it's like in the part of the world that you're in, but in a lot of places actually just reaching that bar of usability is pretty hard, right? To reach universal usability is still a high bar for many of us, so I think, let's not worry about whether we call them mammals or humans or whatever. Let's move on.

Turns out that people don't wake up in the morning and go, "You know what, I'm a bit bored today, I think I'm gonna go and enrol for Medicare." If you do a little bit of research, you'll find out it tends to happen as part of one of two narratives.

This is the first one, which is that people are coming to Australia, so they go through this whole process of filling in forms, paying loads of money, providing an absolute metric tonne of information to prove their identity, often having to have health checks and all that kind of biz, and all of that happens before the point where we could give them another form and say, "And now, hooray, fill in this form and give more identity information, so you can get a Medicare card." In that context, it doesn't seem like the ultimate best solution. The other scenario that we heard all the time was people having babies.

The baby obviously doesn't enrol, the parents enrol.

They're pretty busy doing other stuff usually at the time of the birth of a child and before that as well.

They've had a lot of interactions with government going, "What is parental leave? Is there anything I can apply for? What are my rights? My employer is trying to sack me while I'm on maternity leave, what's going on?" Really similar in some ways to coming to the country, to moving to Australia, because there's lots of forms, and lots of information that you have to provide, and lots of stress, but completely different parts of government that you're interacting with.

The team looked at both of these scenarios, and they said, "I think we can do better than just making a digital form." And they did.

They discovered that actually the hospital collects all the information that you need when you're having a baby, and there's only two things really that you need to add to that.

One is consent, and the other is the name of the baby.

Once you've got those two things, then the hospital can automatically send the information off to DHS, and DHS can just process the thing, and these poor parents who are not getting any sleep, who have already got plenty of forms and stuff to fill in, have one less thing to worry about.

They don't have to try and work out how on earth you get a newborn out of the house and to a Centrelink office.

They can just get on with their daily life. It turns out it's faster and easier for DHS to process as well, so it's kind of win-win.

We knew that we had good product-market fit when the research was seeing parents deciding to name their baby before they left the hospital, so that they could participate in this programme. There's a whole other talk about the ethics of that.

Anyway.

What this team was doing was privileging the journey, the experience, the context of the user over where they were in government.

Where their bosses were placed, they were out thinking about immigration, and thinking about maternity benefits, and all that kind of biz.

The team thought that was completely irrelevant because they had decided to privilege this story (they might have been slightly forced to do it by some of us), and they quickly realised that there were much bigger, more interesting, more appropriate solutions that they could find if they just looked sideways a little bit from where they were.

The thing that's really tricky about research is that you always get answers to your questions. If you ask people questions, if you send out a survey, if you do a usability test, you'll always get answers, but they won't necessarily be the answers that you really need.

It's not that people are lying, it's just that if you ask them about the shadow, they'll tell you about the shadow.

Why would they know that the duck is relevant to you at all? I'll use a little Atlassian example because that seems relevant.

This is a question that you may hear asked frequently around the place where I work.

What's the thing you do most often in Jira? That seems like a fairly relevant question to us, doesn't it? Hands up people who love Jira? Oh, bless you! You don't have to say that.

I know the truth.

We can ask this question if we want to, but it's a very Jira-centric question.

It really does assume that our product is at the centre of your universe, which surprisingly to some people it's not, imagine that.

The other thing as well is that you could say that you could answer this question using analytics, and that's partly true, as long as you literally want to know what people are using most often and you don't wanna know what their perception is of what they're doing most often.

That could be your duck.

Perception could be the duck.

Another way that you could ask it would be more like this.

What are you doing most often when you come to use Jira? This puts the emphasis onto the user and what the user's doing.

What task, what job, they are trying to get done.

That creates all of this space for them to tell us about where our product is not meeting needs that it could be meeting or where it's creating issues that we are completely blindsided on at the moment, we just don't have any visibility of that at all. It's a much better way to start thinking about what the real user need is and to be user-centered rather than product-centered.

This is an important point.

We need to try to be conscious of the inherent biases in all of the research that we do.

It's a complete fallacy that you can do research without bias.

There's always gonna be a bias.

You just have to have as much consciousness as you can of what that bias is and which truth you are looking to privilege. Have a really good sense of what your duck is. This is gonna help us to try to make sure that we're falling in love with the right problems.

That we're not just focusing on not falling in love with the solution, but we're also not just thinking that the answer to what the problem is is simplistic as well.

This is all kind of context that has helped me think about where in an organisation you might put a user-research team because in my experience where you are in the organisation determines what questions you get to ask.

If you're embedded in a product team or if you're in an agency that's responding to briefs from a product team, it's much more difficult for you to push the boundaries of that outside of what the product team are interested in.

I reckon there are six different ways that you can potentially deal with research in an organisation.

This is the first time I've put this up there, so it's probably wrong, and I look forward to you telling me what I've missed.

It's important to acknowledge the fact that it's an entirely valid approach that many organisations take to do absolutely no research at all.

The non-existence of research is one state. The next thing that I've seen a bunch of is this sporadically outsourced work.

Everything's going along fine, and somebody goes, "I've got some money," or, "I've got an interesting question," or, "We did a study like this in another company that I was in some time ago, we should get some researchers in to do some research for this thing." Then the researchers go off into the wilderness, gather lots of goodness, sometimes you get to go and sit in the dark, and watch a focus group or a usability test, and have a glass of wine, and then a little bit later the team come back, they do a big amazing PowerPoint presentation that is so inspiring, and then we all go back to work, and we carry on like before.

I used to do that.

I was a consultant for a long time, so I was party to this kind of work for many years.

It was very fun for me until I kept getting the same brief over, and over, and over again from the same company.

Then we move into the next stage.

I call this ad hoc or non-specialized.

This is where research is happening in an organisation, but not in a particularly organised way, and usually by someone whose primary job is not to do the research. We see more and more of this with the rise and rise of the UX designer.

There's a lot of debate around whether or not a specialised researcher is even necessary at all.

Clearly I have a view on that.

But this is a very common thing that we see, that there might be product managers, or designers, or who knows, all kinds of people in the organisation going off and doing a bit of research themselves. After a while, what tends to happen is that somebody in the organisation goes, "I'm the researcher. We've now got enough of this going on that someone needs to be in charge of it, and it's me, and I'm going to centralise myself away from all of you crazy people to show that I am a specialist, and you can come to me and tell me what research you want done."

Then we end up with this kind of internal consultancy where we have specialist researchers in the organisation who are taking briefs from people across the organisation, somehow working out how to prioritise them, and doing as much of it as they possibly can themselves. I've seen a lot of this, and these are probably the most stretched and stressed researchers that I ever see in an organisation because they tend to divide their weeks into about eight different projects, and they're trying to do everything at the same time, and it's just impossible.

Meanwhile, all the product teams are just so unimpressed because they have waited a long time for this research, and then someone comes along who knows nothing about their product, and comes back with the research, and tells them stuff that they already know, and by the time they get to the research, they've moved on and made all their decisions anyway, so it's quite unsatisfying, this model.

Something that we've seen a lot more recently is specialist researchers actually embedded in product teams. This has a lot of merit because these are the cross-functional teams we all know work better together; researchers really get to know the product space and the team very well, they can have good communication, good access to the team, they understand the domain, they understand the users, they don't get caught out by those tricky questions anymore because they know the tricky questions, and they know the answers to the tricky questions, and they tend to be able to deliver the thing that researchers and teams love the most, which is actionable insights.

Everyone loves actionable insights.

We know that they're doing that because we can look at the product they make and go, "The research told us this, so we've done that." That's great.

Another one that I see a little bit of now is this notion of a centralised strategic research team, and this is a little bit like what we would've seen back...

This is not necessarily a new thing in organisations.

You would see research that's been done by management consultants, or research that's being done by marketing research, this much more high-level research that aligns with corporate strategy, but often is very distant from where the actual product line is.

This is another kind of opportunity as well. It's interesting to think about this in the context of maturity, and I don't know how many UX maturity models you've looked at. Natalie Hanson looked at loads of them and then, unsatisfied, came up with her own, which is this one, which I actually like a lot.

There's a lot in there, but it really does talk about...

We all know at the beginning when design just kind of happens by whoever happens to be doing it without really thinking about it at all.

They may go, "Is it pretty? Can we make it shinier? Can we polish it? Make it pop?" Then we go, "But does it actually work?" Then eventually we start doing what the Medicare team did in going, "But is this actually the problem we should be solving? What's the end-to-end experience like when we start thinking about service design?" If you look at those different approaches to research, you can map them onto this maturity curve a little bit, where you've got bits and pieces or none happening over here, and then we start to actually get some stuff happening, and towards the end it starts to get a lot more deliberate.

The reality is there are really only two models to aspire to.

One is Embedded Researchers, and the other is a Centralised Strategic Research Team, but I wanna clarify that this one has to be closely aligned with products in order for it to be successful.

The question then for me is which one is the right one? The reason that this is really interesting for me is that if you had asked me five years ago, I would have said, 100%, "Embedded Research all the way, anything else is a complete waste of time." Yet, seven or eight months after starting at Atlassian, that's the opposite of what we're doing right now. This has been a real...

Quite a confrontational process for me to go, "Why am I doing this? Is this right? Why does this make sense?" Of course I made a two-by-two, because that's what we do, and I started to think about what the characteristics of a company like Atlassian are compared to the government work that we were doing. Why do I need such a different approach at Atlassian to what I needed when I was working in government? Is it me or is it them? Again, I'm showing you my workings here, so I really welcome you to comment and argue with me about this later.

This is what I think it's got to do with.

First of all, it depends on whether you're working in an environment where you're doing mostly transactional work, so the kind of projects where your customers go through a fairly linear process. A linear process is never really a linear process, because you've got your happy path and then there's all of the deviations, but you get a bit of a sense that it goes kind of like that.

At the opposite end, you've got much more systemic work: you're building a system that people can do things with, and that looks more like Jira.

The other part of the two-by-two is maybe about the general trajectory of the organisation. Either you're really focused on trying to optimise that part, on trying to make sure that the greatest number of people get from one end of the funnel to the other end of the funnel and receive the service or buy the thing and maximise the revenue or whatever it might be. Or you're more in this space of going, "We're doing this thing right now, but actually what if we did these things? What else could we do? Who else could be interested in this? What else could you do with this thing that we're making?" My gut feel is that if you're in this area, then Embedded makes a huge amount of sense, and if you're in this area, then Centralised makes a huge amount of sense. I haven't thought through the other quadrants yet, so we'd have to come back, but let's do a couple of applied examples.

So, Government Digital Service.

This was where we went all in on Embedded in a huge way, and it was massively successful.

It did so well.

Nobody was questioning the value of research there at all. When I first started there, we were just like, "Screw it, either we can have a researcher on our team or we don't have research at all," and everyone was like, "Oh god, really? A researcher?" By the time we were done there, it was, "We're splitting up our project, we're gonna need four researchers. Oh, too much!"

But if you look at the work that we were doing, it was very transactional.

It was very driven by this digital transformation agenda. There were 25 services that we were transforming, and there were things like applying for a power of attorney or renewing your passport, so quite transactional.

Complex, hard, had to work for all of the population, but the shape of it was linear and quite transactional, looked a bit like that.

That is very much the model that we used at GDS. It's a very Embedded model.

My next job after that was at the Digital Transformation Office, as it was at the time; then they turned it into an Agency, which is just stuff about government that I'll never understand.

But again, similar kind of thing here.

To begin with, the work that we were doing was very sort of transactional.

It was very inspired by that transformation approach, and we were doing things like getting an appointment for a citizenship test or getting a licence to import goods, so again, not easy things by any stretch of the imagination, but quite linear and very much focused on trying to make sure that the greatest number of people could get through it with the fewest number of problems and mistakes. In the early stages, all of our researchers were embedded in teams like that. But then we started looking at different stuff. We started thinking about the limitations of this transactional approach, and the opportunity of following a user through the complexity of their actual experience, of a life event for example, and how that could reveal more opportunities, similar to the Medicare example that we gave earlier.

This was an example of some work that we did with an agency called Paper Giant, looking at the death of a loved one.

It was a very, very interesting study.

But this kind of work is way more like this. It meant that we had to add more to the way that we were set up, and we started to have a bit of a situation where we had a chunk of researchers who were very much still embedded in projects, and then we had a few who were pulled back to look at these super services, these really centralised, complicated systems. My next job after that is the one that I'm in right now, which is Atlassian.

I gotta tell you, I had no idea what I was getting myself into. Like any good researcher, I did a study.

I went around, and I interviewed a lot of people, and I did lots of observations, and I did an affinity sort where I put all my notes, my post-it notes, into the sort, and this is my desk area, and my study, and out of this I got a lot of good direction as to where the opportunities were, where the problems were, but the two big things that came out of that for me were these two.

First of all, we got a lot of demand and not a huge number of people, and we really needed to make sure that we chose to do the things that would get us a lot of bang for our buck. How could we get the most out of the small team that we had? And then the other thing that we had was a fairly small team who had been embedded in products, spread all over the organisation, who hardly knew each other, many of them being researchers for the very first time at Atlassian, so they didn't have a huge amount of corporate experience, and so accelerating professional development was a real priority as well.

Just to give you some context, Atlassian has more products than I have researchers, so that wasn't really an option.

That's just the beginning of what we do.

We used to do all of this stuff.

As Trace and Sarah mentioned earlier, we are very, very interested in understanding team work better, but we're also really interested in understanding openness, and what openness means, and how that helps to improve teams and how they work as well.

We have this huge number of big problem sets to deal with and a relatively petite research team.

I just wanna let you know I have clearance from Brand to use multicolor backgrounds on my maples. Thank you.

What on earth to do? I thought that I would share what our model looks like at the moment and see if this is interesting to you.

This kind of describes how research happens at Atlassian now.

We start with these User Centric Focus Areas, because that was the first big, most brave thing that we did in terms of restructuring the way that it works. It really harks back to this thing.

The questions that you get to ask depend on where in the organisation you live. We actually pulled all of the researchers out of product teams, and we centralised, but we centralised in a very strategic way. We centralised in a way that really allowed, or compelled, the researchers to follow end-to-end experiences, the full contextual experience of the users as they use our products or do things where they could have used our products but currently don't.

We really set out to make sure that we were being user-centered and not Jira-centered or even Atlassian-centered.

This allowed us to make sure that we got properly obsessed with the right order of problems.

That we were really understanding the context of our users, so that we could help inform the product decisions that our teams were making, to help ensure that they were really shipping customer value. This is an example of a focus area.

We have a pair of researchers who are working on this.

Their job is to answer this question.

How do teams make decisions and prioritise work? Now, this maps directly to our corporate strategy around understanding teamwork and really helping to support teams, but you can probably also map it back down to the products that we have as well, so this is what Jira is supposed to be helping people to do.

People use Trello to do this all the time as well. It doesn't take a rocket scientist to see how the links happen.

We've tried to do this with a number of different focus areas that we've given our teams, where you can really see how each maps to corporate strategy, so we can talk up at that level, but we can also provide a lot of insight back down to our product teams as well.

It's a really clear statement.

That we are gonna privilege the users over our products.

Be user-centered rather than product-centered. The other thing that we've done, which I thought we had completely made up, is this idea of Continuous Research.

We made this little picture and said, "We think we're gonna work in these six weeks, and in these six weeks we're gonna have particular questions that we're gonna really dig into. After we do an interview or a contextual study, we're gonna do little postcards where we just write it up and share it back with the organisation with some photos. Here is who we met. This is what was interesting." Then we could gradually build this body of knowledge where we don't wait until we're 100% sure about something before we publish it.

We share what we're learning as we go along. Then I discovered that bloody Tomer Sharon had already been working on this, and he put it on Medium.

I thought, "That's brilliant," because that to me speaks to the fact that there's a bit of a zeitgeist happening out there at the moment, and this is very much about letting the users shape what we're asking in research and what's interesting to research, getting very much away from this report-driven research function and much more into building a big body of knowledge, of being prepared for the question that all research teams dread, which is, "What do we know about blah?" In Tomer's research model, he calls the unit a nugget.

You have a nugget of insight, which I think is kind of disgusting.

So, we've gone with hunches.

We call ours a hunch.

I quite like this because it's very humble in a way for us to go, "We've been out there, we've seen a bunch of stuff, we think this is a thing," alright.

Some of them are profoundly obvious, right. One of our huge insights, which as soon as I say it to you, you're just gonna go, "Duh," was guess what, teams prioritise backwards from a date.

Who knew? Of course, it's like all good research, as soon as you say it, you're like, "Of course," right, but it was something that had never occurred to lots of product teams before, or to us even.

We're building up these little hunches, and we can combine hunches in really interesting ways, and we can pull them apart, and we can re-interrogate them if we go, "We don't actually think that's true anymore." We have a big hunch around e-mail and how important e-mail is, and technology companies hate the fact that we're going, "E-mail is really important." They're like, "No. We're gonna fix e-mail. It's gonna go away." No, it's not.

Hopefully over time we'll be able to come back and go, "Okay, so the e-mail thing is gone now." We can take that hunch out and then understand how all of the different relationships change because e-mail is gone.

As if that's ever gonna happen.

Our Insight Lifecycle looks a bit like this, right. We conduct research.

Usually we're gonna go down the hunch path. Occasionally we'll be asked to do a particular report. Over time we have a level of certainty about this to the point that we're actually gonna call it an insight, and it all contributes into a big body of knowledge. Another interesting thing about the research team at Atlassian is that we have Research & Insights, and that Insights is the clue that we have taken on what used to be called the voice of the customer, so NPS, that's us, which is a mixed blessing.

But what it means is that we really wanna make sure that we're building out this strong qual and quant capability, and making sure that's used in a very balanced way, and making sure that all of the qual and quant work that we are doing is also very carefully tied back into analytics, which hopefully is something that I don't even need to mention because you all know that that's true and necessary, but it's worth saying.

The last thing that needs to be mentioned, of course, is that if we've pulled our researchers out of product teams, what happens to those poor product teams? We've taken a philosophy, which is that the more people at Atlassian who are talking to customers, well, the better, so it's this kind of exposure hours idea.

The more people in our organisation who are really talking to end-users and potential end-users, the better that is for everybody.

That means for us that designers do an awful lot of product testing and usability testing, and product managers do a lot of customer interviews, and we still encourage everybody in the team to be as close to customers, and users, and potential customers and users as possible.

Drawing that clear distinction between customers and users matters, because customers already tend to like us a fair bit; they're already paying us money and inflicting our products on the end-users, who sometimes have a more polarised view.

When we told the teams that we were gonna do what we did, I heard this pretty much every day for months, which is, "Yeah, I get it. I understand why you're doing what you're doing. But bloody hell. How am I gonna get anything done now? Now I have to do all the research as well. This is horrible. You've made my life very difficult." I showed this slide at design week, and about a dozen people came up to me and said, "You could've just put my name on there, you know," which I thought was really cute.

That means that we have a whole big tranche of work that we need to do in order to support these people as well, which is incredibly important.

So, on the one hand, we have to do a really good job of making sure that we are offering advice, and support, and guidance to these teams, and there's a whole lot of time and effort that we need to be allocating to providing these support infrastructures for our teams. A very, very important part of that is sorting out recruitment.

I don't know how many of you do research, and so therefore how many of you share this pain, but this is the single biggest blocker to the distributed model working.

This is a big area that we're gonna really dive into a lot over the course of this year.

Then the other thing that we need to do a really good job of, as the people who have responsibility for the largest-scale feedback mechanisms like NPS, is to make sure that teams actually get really good value out of those as well. So, that's a huge opportunity.

When I reflect on this, the thing that I think about the most because I use it to try to keep me sane day to day is that doing this kind of work is not a popularity contest, right.

You don't make decisions like these, and you don't go out and do this kind of research because you're gonna be flavour of the month this month, it's a real investment in getting an organisation to eat its vegetables for the future.

It means that there are a lot of difficult conversations, in terms of taking people's resources away sometimes and saying no a lot.

There are also really difficult conversations that you get into when you start talking to different people about different things, but they are incredibly necessary.

But there's a strong possibility that this could be the tattoo after my perspective tattoo.

That's it from me.

Thank you! (applause) (playful music)