Potato, Potato: Building Connection Between Product and Design

(intense upbeat music) - Hi, my name's Jane Davis and I'm Head of UX Research and Content Design at a company called Zapier.
And I'm giving this talk because you are all slowly killing me.
That's not because you're addicted to Gantt charts or design sprints, although you definitely are addicted to those things.
And that's fine, it's important to have hobbies. It's actually because I've been watching product teams over the past eight years, full of smart talented people with good ideas, spend weeks or even months, failing to make progress on solving the problem they're trying to solve.
I know we can do better than that.
So today I'm going to talk to you about how. Over the last several years of my career, I've spent a lot of time helping product and design get on the same page.
And while I think I'm pretty hot stuff, after several years of doing this I've come to the conclusion that none of it requires my particular magical touch. Anyone can do this.
And that made me ask, why am I a researcher playing this role in the first place, and how can I teach other people to do it, so that I can do the parts of my job that do require my particular magical touch. So that's what I'm going to do in this talk, walk you through how to identify and resolve those misalignments between product and design, and give you a process for avoiding them.
So first off, why is this talk being given by a researcher? It's a great question and it's one that I've frequently asked myself when working with product and design teams.
What is it about my background as a researcher specifically, that's enabled me to align these functions in situations where they've struggled?
So after reflecting on it for some time and rejecting some of my initial hypotheses as being needlessly obstructive, I took a step back and started treating it like I would treat any research project. Because after all that is one of the things I do better than other people.
Pattern recognition.
So what patterns was I seeing over and over again? First, I noticed that teams weren't explicitly stating what decisions they were trying to make.
The agenda was usually something vague like, decide how to proceed on experiment Y.
Second I noticed teams having the same conversation over and over again, they would meet, reach what they thought was an agreement, and then start all over again the next time they were together.
And third, I noticed that people were frequently using a specific solution to stand in for the general idea they were trying to convey. Kind of like saying we need to clean out the fridge, when the problem is actually that the kitchen smells bad. That might seem like a pointless distinction, but it actually becomes extremely important when you're trying to talk about the same problem space. So if you think about it, we need to clean out the fridge is one possible solution to the problem of the kitchen smelling bad. It might be the most logical solution.
The other solutions might be things like, let's buy a new house or let's throw away the fridge, but they are technically also solutions, otherwise known as tools.
So, in all of these situations, the teams were talking about things that were actually tools for solving a problem, rather than the problem itself. And that final observation, was what really unlocked this for me.
When you actually start to dig, the first two patterns I'd been noticing, were higher order functions of the third.
At the end of the day, nearly every struggle between product and design that I was working on, boiled down to this. The people involved were talking about tools, not goals. This can be hard to understand in the abstract, so I'll give an example.
A few months ago, I was rummaging around in our home workshop to find a drill so that I could put up new house numbers.
Now, it was pretty clear that the drill is a tool. Where people tend to get tripped up is that the house numbers are also a tool: putting up new house numbers isn't an end goal, house numbers are a tool.
The goal here was to enable people to find our house when they needed to, whether that's because they're delivering a package or dropping off some books they borrowed, or at least pre-pandemic coming over for dinner. But because of the way I initially stated it and framed this discussion, where I was looking for something that was very literally a tool, it was easy to assume that putting up the house numbers was my end goal. As described, this doesn't seem like it could possibly create the kind of strife and deadlock that product teams can find themselves in.
After all, isn't the important thing that I got the house numbers up?
So to see why that's not the case, let's look at how this could translate to a product team's work.
So I have a background in growth, so a lot of the things that I give as examples are around driving activation and conversion and things like that, today's gonna be no different.
So let's imagine that you and your team are trying to get more users to activate during their first 30 days. You look at the analytics and you see that people who watch your four-minute intro video all the way through activate at twice the rate of other users.
So what do you do? You create a flow that's designed to get more people watching the video and tries to get them to stick with it, and then you sit back and you wait for your dazzling numbers.
Except your numbers don't improve.
In fact, your activation rate starts tanking, and the team starts arguing about what changes you need to make to the experience.
You spend weeks going back and forth on how to get people to watch the video all the way through.
And none of the things you try are working. The problem is, the video isn't what's driving the activation rate.
If you dug into it, you'd discover that users who watch the video, and then activate all have the same use case, which happens to be the example use case the video walks them through.
But watching the video has gotten so conflated with activation in your team's mind that they aren't able to have the right conversation about it.
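The confound in this hypothetical can be sketched with a toy segmentation. All of the numbers and field names below are made up for illustration; the point is only that an overall correlation between watching and activating can vanish once you split watchers by use case:

```python
# Hypothetical, invented data: did the user watch the intro video, does their
# use case match the one the video demonstrates, and did they activate?
def activation_rate(users):
    """Share of users in this segment who activated in their first 30 days."""
    return sum(u["activated"] for u in users) / len(users)

cohort = (
      [{"watched": True,  "matches_video_use_case": True,  "activated": True}]  * 60
    + [{"watched": True,  "matches_video_use_case": True,  "activated": False}] * 20
    + [{"watched": True,  "matches_video_use_case": False, "activated": True}]  * 10
    + [{"watched": True,  "matches_video_use_case": False, "activated": False}] * 70
    + [{"watched": False, "matches_video_use_case": False, "activated": True}]  * 40
    + [{"watched": False, "matches_video_use_case": False, "activated": False}] * 160
)

watchers     = [u for u in cohort if u["watched"]]
non_watchers = [u for u in cohort if not u["watched"]]
# Watchers activate at roughly twice the rate of non-watchers...
print(activation_rate(watchers), activation_rate(non_watchers))

# ...but splitting watchers by use case shows the video only "works" for
# the subset whose job-to-be-done matches the example it walks through.
matching = [u for u in watchers if u["matches_video_use_case"]]
other    = [u for u in watchers if not u["matches_video_use_case"]]
print(activation_rate(matching), activation_rate(other))
```

In this invented cohort the use case is doing all the work; the video is just where one segment happens to learn it, which is exactly the tool-versus-goal conflation the team is stuck on.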
They need to be talking about what causes users to activate in the first place, how to drive it.
And instead they're bickering about things like auto-play, which, no, and whether it's okay to prevent users from closing the modal.
It's not; users should always be able to close the modal. And because they're not having the right conversation in the first place, it will be impossible to identify a winning way forward, because all of this is just people arguing about their opinions.
No one's talking about the right outcome, the activation itself.
So to go back to our house numbers analogy, your team is stuck arguing about what colour the drill should be, instead of how to enable the delivery person to drop off your damn pizza.
The key to doing great work on big problems as a team is to have those right conversations in the first place. To do that, you need to establish a goal that's actually a goal, not just a higher order tool. Now, if we were doing this in person, this is where I would make an ill-advised attempt at getting you, the audience, to participate in a token way. And then I'd proceed to say whatever I was already planning to say, regardless of how the audience participation segment went. Since we can't do that, instead we'll just have 10 seconds of awkward silence while you imagine we're doing it.
And, then we'll all appreciate that there are some distinct advantages to a remote conference.
Great, so let's get back to it.
Given that higher order tools, like the house numbers in our analogy or the video in our example, aren't always obvious as tools, how can we identify them so that we can have the right conversations and avoid getting deadlocked? First, we have to anchor on the outcome we're trying to drive, the real outcome.
So the first thing I do in any situation where teams are struggling, is ask them what measurable outcome they're trying to effect.
If they can't tell me that, then we've got a very clear reason for the struggle, which is that, there's no actual goal.
Unless you are trying to drive a measurable outcome, you don't have a shared goal.
But if you do have that metric or that measurable thing that you're trying to achieve, then we've got a starting point for the rest of the discussion.
That outcome, or that metric, that's our shared goal, and everything needs to come from that.
So unless we start by anchoring on that common thing, that we can measure, that we're all trying to achieve, we're not going to make any progress.
So the first thing I always do with a product team that's struggling to get along between product and design, is ask them what metrics they're trying to change with a given project.
And if they don't have an answer, then we know where we need to start, defining a metric, otherwise known as a shared outcome, otherwise known as a goal.
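Defining the metric concretely can be as simple as agreeing on an event and a window. The sketch below is a minimal, hypothetical version: the event name, the 30-day window, and the data shape are all illustrative assumptions, not Zapier's actual activation definition:

```python
from datetime import datetime, timedelta

# Illustrative assumptions: what counts as "activating", and by when.
ACTIVATION_EVENT = "completed_key_action"
WINDOW = timedelta(days=30)

def activated(signup, events):
    """True if the user fired the activation event within 30 days of signup."""
    return any(
        name == ACTIVATION_EVENT and signup <= ts <= signup + WINDOW
        for name, ts in events
    )

def activation_rate(users):
    """users: list of (signup_datetime, [(event_name, event_datetime), ...])."""
    return sum(activated(signup, events) for signup, events in users) / len(users)

users = [
    (datetime(2024, 1, 1), [("completed_key_action", datetime(2024, 1, 10))]),
    (datetime(2024, 1, 1), [("completed_key_action", datetime(2024, 3, 1))]),  # too late
    (datetime(2024, 1, 1), [("opened_app", datetime(2024, 1, 2))]),            # never
]
print(activation_rate(users))
```

Once the team agrees on a definition like this, "activated within 30 days" stops being a vibe and becomes the shared, measurable outcome the rest of the conversation hangs off.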
But metrics aren't enough.
So you need to translate your outcome, or your goal into your user's goal.
So you've got to establish that shared metric, but I'd be really disturbed if a user ever came to me and said, "Yeah, I'm trying to activate," while they were giving feedback on an onboarding experience. One, it would tell me that I've got a ringer in my research panel, but two, that's not a user goal. Users don't care about activating; users care about achieving their own ends. So at the end of the day, on the product side, we're using activation as a proxy for the user being able to do what they're trying to do with our product. Activation is a really handy way to talk about that, it's a useful shorthand, but it can result in us being disconnected from what users are trying to do, and from what their goals are.
So to start making progress in the discussion about ways you might drive your metric, I recommend going back to that metric and restating it in your user's words.
Your users don't wanna activate, they want to successfully use your product to do something specific.
Activation is how we measure the organization's work. But reframing it from the user perspective, can get you out of a deadlock by reminding you of what you're trying to do. Enable your users to engage in a specific behaviour, or to accomplish something specific.
So you've gotta translate your outcomes and your metrics into what your users are actually trying to do. And third, once you've re-rooted yourselves in this shared metric and what it looks like for your users, you've gotta start asking why a given idea might or might not help them achieve that goal.
This one is so important that I added an annoying keynote animation so that you will really, really remember it. So this step looks like asking why instead of what. In our activation example, the question isn't, what are users who are more likely to activate doing? Because we just get the answer: watching the video. The question we should be asking is, why does that help a user successfully start using our product? So every piece of information we're digging for here is about how a tool like our video contributes to our ultimate goal of increasing activation, which is our proxy, of course, for user success. So here's what this process might look like in our video example.
So one, we ask, why are users who watch this video more likely to activate? The answer to that could turn out to be, they got information from this, that they couldn't find anywhere else.
Okay, but that information is also just a tool. So why does that information make them more likely to activate? Well, we could ask them, and it would turn out that it tells them how to do something specific with our product.
Okay, but knowing how to do something specific with our product still isn't actually a goal. They want to do this specific thing.
So why does doing something specific with our product make them more likely to activate?
Because that was their job to be done.
That's why they signed up for our product in the first place.
And so in our hypothetical here, at the end of this process, you've uncovered the real reason the video is driving activation.
Because for a subset of your users, it's showing them how to do exactly what they came to your product to do.
And you've also got a reasonable hypothesis, for why driving more people to the video, isn't driving more activation.
Because those people don't have that specific use case. So the video isn't relevant to them.
So now you know what might actually drive activation, helping users find and get started with their specific use case.
You've established your goal, and now you and your team can start talking about the different tools you might use to achieve it. So that was a lot.
Let's go back and recap on our handy recap slide. So first, start from the ultimate outcome, as defined by your metrics.
You have to start your discussion by agreeing on the goal you're trying to get to as a team. Next, define what that outcome looks like from your user's perspective.
So how does your goal translate into user behaviour? And third, go beyond what users do, focus on why they do it, and how it helps them achieve their goal.
So there you have it.
The secret magical way you can stop driving your friendly neighbourhood researcher around the bend. Now go forth and set goals, not tools.
(upbeat music)
Potato, Potato: Building Connection Between Product and Design
Jane Davis: Head of UX Research and Content Design – Zapier
Keywords: tools vs goals, metrics, process, reframing, misalignment between product and design.
TL;DR: Jane has applied her proficiency in pattern recognition to identify a key misalignment between product and design – teams focusing on tools rather than goals, and often conflating the two. This can be solved through using a process of clearly delineating goals by first defining the metric you are trying to effect, then identifying the behaviours that drive it and reframing the discussion around what the user is trying to do. She walks through this process with a number of examples identifying where and how misalignments have potential to happen and how to reframe your conversations to resolve and avoid these.
Jane is giving this talk coz we’re all slowly killing her! [gif of Madeline Kahn from Blazing Saddles: Goddamit I’m exhausted]
Why? Not because you’re addicted to Gantt charts or design sprints, although you definitely are (and that’s fine, it’s ok to have hobbies ;-), but because for years Jane has watched product teams full of smart talented people spending weeks and even months failing to solve the problem they’re working on.
We can do better! Jane’s here to discuss how. She has spent a good chunk of the past ten years helping to align product and design. She’s pretty hot stuff! But it’s not her particular magical touch that’s the key – anybody can do this.
Ergo: Why is she, a researcher, playing this role in the first place? And how can she teach others to do it so she can focus on parts of her job that do require her particular magic.
Goal of this presentation: Walk us through the process of how to identify and resolve the misalignments between product and design, and give us a process for avoiding them.
Again, why is this talk being given by a researcher? What specifically about being a researcher has enabled her to align these functions?
After rejecting some initial hypotheses…
- maybe I AM magic?
- Everyone = bad at everything??
- Product:Design – natural enemies? (like bears: sharks)
She took a step back and treated it like she would treat any other research problem, because that is one thing she does specialize in: Pattern Recognition.
What patterns repeat? (In design/product misalignment)
- Vague, unspecified agendas – teams weren’t explicitly stating what decisions they were trying to make
- Having the same conversation over and over, reaching an assumed agreement but starting over again the next time they met
- Using specific solutions to stand in for the general idea or larger goal they were trying to convey
This is like saying We need to clean out the fridge when actually the whole kitchen smells bad. This distinction matters.
Cleaning out the fridge is ONE solution to the problem (= the kitchen smells). It's the most logical solution, as opposed to Let's buy a new house or Let's throw out the fridge, which may not be as logical but are nonetheless solutions, otherwise known as TOOLS.
Teams often discuss tools for solving the problem, rather than the actual problem. So the first two problems (vague agendas and repetitive conversations) are actually higher order functions of the third problem (using solutions to stand in for goals). This is the key. In almost all cases, people were discussing tools not goals.
Let’s move from the abstract to a concrete example: Jane was looking for a drill at home to put up the numbers on the front of her house. The drill = the tool. But the house numbers are also a tool. Putting up house numbers is not the end goal – house numbers are a tool. The goal is to enable people to find the house. But because of the way she framed the goal originally: I was looking for a drill to put up house numbers, it’s easy to assume that the goal was to put up the house numbers. This does not at first seem like a big problem.
Isn’t the important thing that she got the house numbers up? No! Let’s translate this to a product team’s work. We’ll use a growth example:
Let’s imagine your team are trying to get more users to activate within their first 30 days. Analytics show that those who watch the 4 min intro video all the way through activate at twice the rate of other users. So you create a flow designed to get more people to watch the video and to stick with it, then wait for the numbers to correlate. Except they don’t. In fact, activation goes down. Team starts arguing about what changes need to be made to the experience. Weeks are spent trying to figure out how to get folks to watch the video, but nothing works. Why?
Because the video is not what is driving the activation rate. If you dig deeper, those who activate are all people with the same use case, which is the same use case the video walks them through.
Watching the video (tool) has gotten conflated with activation (goal). So we wind up arguing about a specific tool instead of the goal that tool is enabling.
In the team’s mind, activation has been conflated with watching the video, rendering them unable to have the right conversation about it. Instead, they need to be talking about what causes users to activate in the first place, and how to drive that. But because they can’t have the right conversation they will simply waste time arguing over opinions rather than the right outcome which is the activation itself.
Applying this to the house numbers situation, the team is arguing over what colour the drill should be, instead of how to enable the delivery person to drop off your damn pizza!
The key to doing great work as a team is to have the right conversations. You need to be talking about something that’s actually a goal, not just a higher-order tool.
Distinguishing between tools and goals: If we were meeting in person, this is the part where Jane would get us all to participate in some token exercise then say whatever she had planned to say regardless of how that exercise went. But since we are virtual, let’s take ten seconds of awkward silence and imagine we’re doing it and then we can appreciate the advantages of a remote conference.
[Jane stares at the camera for 10 seconds]
Back to the talk! Given that higher order tools (like the house numbers or the video) aren’t always obvious as tools, how do we identify them?
By anchoring on the outcome we are trying to drive. (The real outcome!)
- Metrics
- Metrics
- Metrics
- Have you considered metrics?
- Let’s try metrics!
What clear, measurable outcome are you trying to effect? If you can’t answer this, you will struggle. No shared measurable outcome = no shared goal. Once you have a shared measurable outcome, you have your anchoring starting point.
Defining a metric = a shared outcome = a goal.
Metrics alone are not enough. You need to translate your outcome/goal into your user's goal. Three step process: Define your metric; identify the behaviours that drive it; reframe the discussion around what the user is trying to do. This reframing is important – no user is going to say I'm trying to activate! while giving feedback on an on-boarding experience.
On the product side, we’re using activation as a proxy for allowing the user to do what they are trying to do with our product. ‘Activation’ can be a great shorthand, but can disconnect from what users are trying to do.
Users don’t want to activate, they want to use your product to do something specific. Activation is how we measure the organization’s work, but reframing it from the user perspective can get you out of a deadlock by reminding you what you are trying to do, namely: to get your users to engage in a specific behaviour or accomplish something specific.
Translate your metrics into what your users are actually trying to do.
Once you’ve re-routed yourself in this shared metric, now ask why a given idea might or might not help them achieve that goal. Look at the reasons underlying your users’ behaviour, not the behaviour itself. [Jane has added an annoying keynote animation so that you will remember this point!]
This step looks like asking why instead of what. The question wasn't: What are users who are more likely to activate doing? (Because the answer to that – watching the video – just points at a tool.) The question is: Why does watching the video help a user successfully start using our product? This process is about figuring out how each tool contributes to our ultimate goal (activation, which is a proxy for user success).
Walking through the process:
- Why are users who watch this video more likely to activate? Possible answer: They got information here that they needed. Yes, but that information is also just a tool, so…
- Why does this info make activation more likely? Possible answer: It tells them how to do something specific with the product. But knowing how is not the goal – they want to DO the specific thing.
- Why does doing something specific with the product make them more likely to activate? Possible answer: Because it's exactly what they came here to do.
Now you know what drives activation, and you can talk about different viable tools.
Handy recap slide!
- Start the discussion by agreeing on the goal you’re trying to get to, as defined by your metrics.
- Define what that goal looks like from your users’ perspective.
- Go beyond what users do – focus on why they do it and how it helps them achieve their goal.
There you have the secret magical way you can stop driving your friendly neighbourhood researcher around the bend! Go forth, and set goals, not tools!