(audience applause) - Our next guest is Lucinda Burtt, who is Head of Product Design at Fairfax Media. And she's gonna talk a little bit about how data can be an inspiration point as well as a feedback point. I think that's a really big message that we designers, as much as developers, should take away: we shouldn't be scared of data in the first place, and data can be used to validate our work as much as to start it.
So she will probably enlighten us a bit about how that works at Fairfax and how that works in her practise. Lucinda Burtt.
(audience applause) - Have you ever played the game Two Truths, One Lie? I think for some people, that works better for them than others.
If you've never played it before, it goes a little bit like this.
You say three things about yourself; two of them are true; one of them is false; and you jumble them up a little bit so that people have to guess which is the false one. So, for instance, before working in design, I was a radio DJ.
I speak fluent Spanish.
Last year, I joined the mile-high club.
I'm gonna let you decide which one of those is false, but I will reveal one of them which is true. I do speak Spanish, and during university, I went and studied in Mexico, and I had a moment when I was walking through Bosque de Chapultepec, which is a very large park in the middle of Mexico City, and I saw a woman, 'cause it was a very sunny day, holding a sunshade, or a parasol, and I turned to my friend, and I said: "Ah, parasol, parasol!" And what had happened is that I suddenly understood the meaning of that word because I could now speak Spanish. So, in Spanish, the word para means stop, the word sol means sun, so when we join them together, we get a very logical explanation for this object, a parasol.
Of course, in English, we just learn this term, and we don't have a real sense of what it actually means. We're a little bit divorced from it.
Spanish also works quite cutely when we're thinking about that other weather object we use.
Aguas, being water, if we join them together, we get paraguas, which is umbrella.
And the point I wanna make here is that when we come to the word data, we often bring with it a lot of assumptions and we bring with it our own feelings about what that word means to us.
We think about big data.
We think about being data-driven.
We think only about empirical data and numbers. We don't think about the full spectrum of data. We also have arguments about whether it's "day-ta" or "dah-ta". I'm gonna try and say "day-ta", but hands up, who thinks it's actually "dah-ta"? Yeah, OK, so, maybe 50% of people.
I'm sorry.
(chuckling) But when we get back to the original definition of the word, data is all about factual information.
It's about information that we can use as a basis for decision-making.
I think it does make a lot of sense to involve this in our product-design process. As product designers, we need to be able to validate our work and advocate for it when we're speaking to people in our organisations, in our businesses.
And we know that businesses love data because it feels concrete, because it's based on fact. But what I think can happen is that when we believe too much in our data or when we start taking that data-driven approach, we can get into that kind of world where data starts to tell us two truths and one lie, and we're not really sure which one is which. But today I really wanna talk about how I think data can be used in the design process. I think it definitely makes sense to have data when we're thinking about design thinking.
When we're empathising with our users, when we're empathising with a problem, we're going to be collecting data.
It's probably gonna be qualitative data points. As we begin to define, we're gonna spend time thinking about the metrics that we have, and we're gonna wanna actually use data to help us measure our success.
And then, as we move through those other stages, ideating and prototyping, we are definitely going to wanna get that sense of how is data letting us know how successful we're being at what it is that we're doing.
So I'm gonna talk about three things which I think really help when we're thinking about how we use data in the design process.
All of these things can be taken for granted, but we also need to look at what the data is actually saying behind the scenes, so I will put some qualifications in there. But the first one.
Data can definitely help us understand how a user actually uses the product.
And I really like this quote by Mark Twain that "Supposing is good, but finding out is better." It's great to use our assumptions, it's great to use our design gut, but we do actually want to get some sense of what is that concrete fact behind what it is that we're doing. When we're talking about collecting data, it's not just about numbers.
We've also got other ways we can go about collecting it.
In addition to things like A/B testing and our usability benchmarking, our eye tracking, we can also start collecting qualitative data when we're doing field studies, diary studies, maybe when we're going through a participatory design process.
It means that we've got this way of collecting both direct and indirect feedback, and we can get it through seeing natural use of the products but also when we bring people in and do more like a lab-based test.
We can also decontextualize our product, which can be useful for some things.
We also get insight into people's attitudes and behaviours, and this is a really big one, I think, especially working at a media company.
Because when we bring people in and we say to people: "Look, talk to us about what it is that you read on the Sydney Morning Herald." After breaking news, after business, after sport, people always say, number one: "My favourite section is world news." And perhaps with the exception of the recent election, which has just captivated all of us, we know, through looking at our data, that people do not read world news, generally. I'm sorry.
You like your entertainment stories.
You like your life and style.
It just isn't there.
So it's very interesting, people have this attitude of: "Yes, I'm a world citizen. I love world news", but their behaviour is telling us otherwise. So, why would we be collecting this type of data, and what can it actually tell us? When we're in that qualitative space, we're thinking about the why, and we're also getting really deep insight so we can get a bit of understanding of how we fix this particular problem.
We get to understand user logic because we have a very direct way of actually collecting this data. When we're in the quantitative space, we're asking questions about how many and how much, but we can also begin to test things.
So, through doing, say, a beta process, we will go: "Will people actually use this? Will they actually use it the way we expect it to be used?" We get to understand user patterns, but it's also indirect. We don't necessarily know why they're doing what they're doing.
Attitudinal, we already spoke about this.
Attitudinal's what users believe, or what they say they believe; and behavioural, what users actually do.
Getting into sort of the second piece.
Data is based on what you've already got.
And by that, I mean, whenever we're choosing to measure something, we can only measure what we have in front of us. We can't necessarily measure something which is a possibility.
It means that we have these sorts of difficulties when we're thinking about the way we're collecting data. That graphic is sort of showing people's perceptions of star ratings.
I think our experience of Uber is starting to change the way we think about star ratings 'cause we know that we're being encouraged to think that if you have a reasonably decent ride, that's worth five stars.
Previously, we would have thought: "Yeah, good is three stars. Excellent is four stars. Five, well, man, you're gonna have to work for that." So it is actually changing our perception.
When we're thinking about when we wanna get good data, we need a clear metric and we need a defined success point. What is our success? Is our success three stars? Is our success five? We also need to get sufficient users. We need to get reasonable traffic, or speak to a reasonable number of people, about the specific feature that we're testing; there's a rough sketch below of what that can mean in numbers. I think it's also important that we acknowledge systematic bias.
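To give a rough feel for what "sufficient users" can mean in practice, here is a minimal sketch of a standard two-proportion sample-size calculation; the baseline click-through rate and the minimum uplift below are invented for illustration, not Fairfax figures.

```python
from math import ceil, sqrt
from statistics import NormalDist

def users_per_variant(baseline_rate, minimum_uplift, alpha=0.05, power=0.8):
    """Rough users needed per variant to detect a change in a conversion-style rate,
    using the standard two-proportion sample-size formula."""
    p1, p2 = baseline_rate, baseline_rate + minimum_uplift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical: a 3% baseline click-through rate, and we care about a 0.5-point change.
print(users_per_variant(baseline_rate=0.03, minimum_uplift=0.005))  # roughly 20,000 per variant
```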
When we're talking about data, we are focusing on the micro, especially if we're in an optimization process. If we're thinking about the conversion rate of a particular button, we're getting very specific about that. We may not be considering anything else that's on that screen at that particular point. And by improving the engagement with that particular button, we may actually very well affect something else on that particular screen, so we need to be conscious of that.
I think it's also important that we stay in this world of being data-informed rather than being data-driven, with some thought to that short-term versus long-term tradeoff. So if we do end up increasing the effectiveness of that particular button, how is it affecting that overall experience? Let's talk a little bit about A/B testing 'cause I think that's one of the ways that design teams definitely use data in their process.
In my product-design team, in addition to UX designers, I have a data analyst as well as user researchers. When is it actually appropriate to do split testing or multivariate testing? Definitely when you're going down the optimization path, when you wanna get a tactical improvement of a specific feature.
That makes a lot of sense.
I think it can also help for validation as well. If you really wanna get a sense of the wider impact of your design decision, is that thing that you're about to do actually going to have a long-lasting benefit? I also think it can be really useful as a stress test as well.
When we're coming up with different designs, we don't just come out with one idea, we come up with various ideas.
We can do a quick sanity check by just putting it out onto the site and seeing whether or not we get the uplift we expect.
What to A/B test first? I'm sorry to put up an image of a pie because I know we're all sort of sitting there, thinking: "Ah, man, when's lunch coming along?" But it's gonna help you remember the PIE framework. PIE stands for Potential, Importance, and Ease. We use this to assess what we should actually A/B test, 'cause we could A/B test an infinite number of things.
Potential: how much improvement can we actually make? Importance: how valuable is the traffic at that particular point, or how valuable is the thing that we're trying to optimise? Ease: how complex is the test to implement?
There are some things that cannot really be tested when we're thinking about A/B testing, so whenever we have product stakeholders come and say: "What is the most effective logo?", I would kind of argue that click-through is not really a great way of actually testing that. I think it makes much more sense to bring people in, do a bit of a usability lab, and get a sense of whether people are really responding to that on more of a personal level.
When we assess it, it looks a little bit like this. We definitely spend more time thinking about the potential and the ease of that particular test, and then we kind of work with our product stakeholders on the importance piece.
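Purely as an illustration of how a PIE-style prioritisation could be written down, here is a minimal sketch; the candidate tests, the 1-10 scores, and the equal weighting of the three factors are all invented for this example rather than anything from Fairfax's actual backlog.

```python
# Minimal sketch of ranking A/B test candidates with the PIE framework.
# Candidates and scores below are invented for illustration.
candidates = [
    # (name, potential, importance, ease), each scored 1-10
    ("Index-page write-offs on mobile", 8, 9, 7),
    ("Article-page share buttons",      5, 6, 9),
    ("Masthead logo treatment",         2, 8, 3),  # click-through is a poor test here anyway
]

def pie_score(potential, importance, ease):
    # Simple average; a team could choose to weight the three factors differently.
    return (potential + importance + ease) / 3

for name, p, i, e in sorted(candidates, key=lambda c: pie_score(*c[1:]), reverse=True):
    print(f"{pie_score(p, i, e):.1f}  {name}")
```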
An example of when we really used A/B testing to kind of shape our product experience was when we were doing the relaunch of the Sydney Morning Herald earlier this year. What we had previously done is we had an m-site and we also had a desktop experience, and we decided to move to a responsive, adaptive site. So it became screen-agnostic regardless of the device that you were on.
One of the things we heard when we were speaking to people is they said: "Absolutely, 100%, I want precisely the same experience regardless of the device that I'm on", so, "I wanna get the full amount of information whether I'm on my mobile, whether I'm on my tablet, whether I'm on my desktop." So we thought: "OK, great, that makes it really simple for us." We chose to do that.
We chose to include the write-offs, which is that extra piece of information that appears underneath the headline.
The write-offs give you a bit more of an intro into the article that you're gonna be reading. So we decided to implement that on mobile, where it previously hadn't been.
What we saw when we went to beta was that we were getting really low engagement in this breaking-news area, which is very unusual, especially for a new site.
It wasn't affected on desktop, but it was hugely affected on mobile.
So we had to sorta hypothesise that we were causing decision paralysis because we now had too much text at that particular point, so we decided to do a pretty straightforward A/B test.
We continued on with the control, and then we brought in a new design, and we actually tested this on our partner news site, WA Today, 'cause we were then able to get a sufficient number of users to expose it to. The new design, really simple.
We took away that write-off, and lo and behold, we got a huge increase, or we saw that uplift back to this usual kind of engagement that we expected. And then what we saw, 'cause we kept the test running, is we saw that we also got an uplift on our tablet, and we also saw no negative impact on our desktop. That actually prompted the newsroom to say: "Well, look, it would actually be really useful for us to standardise our workflow. We would prefer to have just one piece of content that we push out, and we will work to make our headlines more meaningful so that they can stand by themselves without that write-off." So that's what we ended up doing, and we sorta showed through data that it was OK to do that.
So we kind of got both an optimization in terms of our users, so they were able to select the stories and able to easily parse that information. But we also got an improvement, thinking about our workflow, which was great. Something that's interesting, though, is that we are looking at the long-term consequence of this particular decision. 'Cause we did get some feedback, and that was feedback just coming in through emails, saying: "I'm kinda feeling that you guys are going a bit tabloidy because I'm not getting the full information now, only seeing the headline." That's a really interesting one, so we're still trying to think through, OK, what is that kind of long-term thing, if we're thinking about our customer engagement. So we're still waiting to decide what is really the outcome of that particular decision from that point of view. So that kinda shows how we did get what we wanted, we did optimise this page, but we may also have a knock-on effect.
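For anyone curious about the arithmetic behind calling a result like that a genuine uplift, here is a minimal sketch of a two-proportion comparison between a control and a variant; the click and page-view counts below are made up for illustration and are not Fairfax's numbers.

```python
from math import sqrt
from statistics import NormalDist

def compare_variants(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates of control (A) and variant (B).
    Returns the absolute uplift and a two-sided p-value from a two-proportion z-test."""
    rate_a, rate_b = clicks_a / views_a, clicks_b / views_b
    # Pooled rate under the null hypothesis that the two variants perform the same.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_b - rate_a, p_value

# Hypothetical counts, purely for illustration.
uplift, p = compare_variants(clicks_a=1800, views_a=60000, clicks_b=2100, views_b=60000)
print(f"uplift: {uplift:.2%}, p-value: {p:.4f}")
```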
It gets into my last point about metrics really defining what you value.
This is a tricky one because they can do that but they can also lead to that shortsightedness. I think Andrew Chen summed this up really nicely when he said that: "Metrics are merely a reflection of the product strategy that you have in place." So if you're talking about: "OK, right now, we want engagement, we want click-throughs", that's really different if you were applying a metric such as your NPS, your Net Promoter Score, which is talking about people's feelings or attitude towards that particular product. How likely would they be to refer that on to a friend? That's a really different thing.
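Since NPS came up, here is a minimal sketch of how the score itself is computed; the survey responses are invented for illustration.

```python
def net_promoter_score(responses):
    """Responses are 0-10 answers to "How likely are you to recommend this to a friend?"
    NPS = % promoters (9-10) minus % detractors (0-6); passives (7-8) count only in the total."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Hypothetical survey answers, purely for illustration.
print(net_promoter_score([10, 9, 8, 7, 7, 6, 10, 9, 3, 8]))  # 20.0
```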
So it really does depend what that product strategy is. And then it moves us into this place where we have some data-driven pitfalls.
We can go down the path of micro-optimization, where one metric is pursued to the detriment of all others. If we have a really undefined hypothesis, if we have a really unclear goal of what it is that we're trying to do, we're also not gonna get the kind of result that we want.
There can also be cases, and I talked about this before, where some metrics are just inapplicable, such as for testing a logo design.
So not every product problem truly is an optimization problem.
We cannot necessarily optimise everything that we do. We also get another really interesting thing, and Andrew Chen also talks about this, about achieving the local maximum.
That's basically when we have iterated and we have already hit that optimal improvement for what it is that we're doing.
We've kind of maxed out the design potential. At that point, it would actually be better to explore new opportunities, to sorta go back through that design-thinking process. That's something to keep in mind.
So, it's important for us to start thinking about practising safe data-informed design.
Focus on key decisions.
Focus on things that are testable, things that are tactical.
Define your hypothesis.
Use a hypothesis statement that actually gets those metrics in there, so that you're being forced to kinda say: "OK, this is how we're gonna measure it", and then you're not going to measure something which is actually immeasurable. Choose the metrics wisely.
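As one possible way of forcing the metric into the hypothesis, here is a minimal sketch of a structured hypothesis statement; the field names and the example values are my own illustration, loosely echoing the write-off test described earlier, not a Fairfax template.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str             # what we will do
    expected_effect: str    # what we believe will happen, and why
    metric: str             # the single metric we will judge it on
    success_threshold: str  # the point at which we call it a win

# Illustrative example only, loosely based on the write-off test described above.
write_off_test = Hypothesis(
    change="Remove the write-off under headlines on mobile index pages",
    expected_effect="Less text reduces decision paralysis, so more readers pick a story",
    metric="Click-through rate from the breaking-news block",
    success_threshold="Click-through returns at least to the pre-relaunch baseline",
)
print(write_off_test)
```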
So, take that balance of the micro and the macro. And also, if we're thinking about A/B testing, you've gotta start small, and you gotta be strategic, and you've gotta compare like for like.
If you've got something that you've previously done and you're bringing something completely different, you've gotta be conscious of that.
You're actually in that MVT (multivariate testing) space.
You're definitely not A/B testing at that point, and you've gotta think about what that control is so that you've only got a small incremental change at that particular point.
It's also important that we speak up and acknowledge what we can't test.
In conclusion, I'd like us to get to this place where we're being data-informed.
We are going out.
We are speaking to people.
We are collecting data, whether it's through numbers or through engaging with people, but we're not driven by it.
We still use our minds and we still use some of the intuition that we have as designers. Thank you.
(audience applause) - Thank you, Lucinda.
- Thank you, Tim.
- Thank you for a very rigorous approach to data and design and development and product development. Maybe we'll just sort of slide out here.
- Do we wanna take a seat? - Oh, we can even take a seat.
Shall we take a seat? - Thank you.
- I think your approach also showed how much of an impact you have at the organisation.
You mentioned that people outside of your team would then say: "Why don't we do this all the time?" Is that your perception now as well? - Yeah, absolutely.
I would definitely say the business loves us when we can actually bring something back in data. And the tricky thing is for us to really go: "OK, yep, there are some things, especially in that optimization space." It completely makes sense for us to use data and to use A/B testing as a method.
But when we're in a space where we're thinking about a completely new feature, it actually doesn't quite make sense for us to take that approach, and we can end up in that rabbit hole, or we could end up like the 41 shades of blue that Google went down, which I didn't talk about.
That is a place where I think data can be used to the detriment of product improvement.
- Yeah.
Last year's conference had a guest, Alisa Lundberg from Twitter, and she talked very eloquently about how informed data experts were part of the product design team. Are you stocking up on data-informed people? - Yeah, I'm really trying to hang on to the (mumbling) that I've got.
I actually think it really makes sense as part of the product-design stack.
But obviously, we also have internal teams that work on data and business intelligence, and they are going: "Oh, we'd really love to have that person here." But I think it makes much more sense, at that point, when you're talking about product optimization, for us to still have it in the product-design mix.
- Yeah.
I think there's a great opportunity for all of you who are data-interested to skill up, and maybe use internal or external training for that. I wanna thank you for a very informed talk. Now, it was so comprehensive.
Is there any chance that people can get those slides? - Absolutely, I'd be more than happy to share. I'd also like to point out that even though I do a lot of travel, I have never actually joined the mile-high club. (audience laughing) - Thank you very much.
- Thank you very much too.
- Thanks, Lucinda.
Cheers. (audience applause)