(upbeat music) - Let us get going, the hamburger menu.
So the hamburger is a ubiquitous design pattern, regardless of how you view it.
But it's been around for a long time and everybody uses it and just as a matter of interest, does anybody know when it first appeared on your screens? Anyone have a guess? No? 1981, a guy called Norman Cox at Xerox produced it. He was looking for something that signified three things in a list and that pretty much signifies three things in a list.
Right, so growth design. What's growth design? Surely, if we're building something that's pleasurable to use and that achieves our customers' goals as quickly as possible, they can get in, get out, do what they're doing as fast as possible, then our systems will grow organically anyway. And yes, that's true, that's the case. But what I'm going to be talking about a little bit is using metrics to assist us in growth design: basically, how we can use qualitative and quantitative data to inform the way that we design. And that's really key. It's not growth hacking, it's not just relying on the numbers alone, it's using data as an arrow in your quiver as a designer. So what are we gonna look at? We're gonna look at the sort of metrics that you can use to complement your design, how you can start using data within your design process now, the process of designing experiments in a growth environment, which is slightly different to normal design, what to expect from running experiments, and also I'll give you a couple of tips as to how you can do successful growth design. So when you're designing experiments and you're using data, you need to have a hypothesis. This is really important.
So the basic hypothesis structure is this.
We believe that by implementing something new, some capability, some functionality, this will result in a specific, measurable outcome, because we have some rationale that says it will. That measurable outcome would usually be a numeric value, a percentage increase, something like that, and that's your success metric. You need something that's statistically significant, which is something I'll touch on a bit later on.
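To give a feel for what "statistically significant" means for a conversion-style success metric, here's a minimal sketch of a two-proportion z-test using only the standard library. The function name and all the numbers are invented for illustration; in practice, the analytics packages mentioned later do this calculation for you.

```python
# Sketch: two-proportion z-test for an A/B conversion experiment.
# All names and numbers are illustrative, not from any real product.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-tailed p-value for the difference
    between two conversion rates (conversions / cohort size)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 520/5000 sign-ups in cohort B vs 450/5000 in cohort A:
z, p = z_test(450, 5000, 520, 5000)
print(round(z, 2), round(p, 3))  # p < 0.05, so significant at the 95% level
```

If the p-value comes back above your threshold (commonly 0.05), the honest call is "inconclusive", which, as we'll see later, is the most common result.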
So what sorts of data can we use? A lot of you probably use qualitative data already, and again I'll go into that a little bit more in a minute. And quantitative data, so things like A/B testing, which, if you have a marketing team, they might already run using email campaigns and things like that. So they're not unfamiliar concepts, but I'll go into them a bit more.
So qualitative and quantitative. I thought I'd try a little bit of one of Mandy's emojis. I really shouldn't've.
I think it's fair to say, as Mandy said yesterday, some emojis were just not meant to go together. So qualitative testing, a lot of you probably already do this.
User testing is qualitative testing.
You might build a prototype of something you're thinking of working on.
Take it around the office, ask a few people. If you ask 20 people what they feel about it, and 18 say they like it and two say they don't, then you've got percentage numbers around that. So it's a really, really good way of highlighting problems or issues in your design early on.
Quantitative testing, in this context we're looking at A/B testing.
So you take a big bunch of your users, split them into two cohorts, serve up your new shiny stuff to your B cohort, and keep your current product as your A cohort. If, in the normal design process, you're just serving up your new shiny stuff to everyone, as we all do, then effectively you have no control group to compare against.
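A cohort split like this is usually done deterministically, so a given user always lands in the same cohort on every visit. A minimal sketch, assuming you hash the user id together with an experiment name (both names here are invented):

```python
# Sketch: deterministic 50/50 bucketing of users into A/B cohorts.
# The user id and experiment name are illustrative placeholders.
import hashlib

def assign_cohort(user_id: str, experiment: str) -> str:
    """Stably bucket a user into cohort 'A' (control) or 'B' (variant)."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100       # stable number in 0..99
    return "B" if bucket < 50 else "A"   # 50/50 split

print(assign_cohort("user-42", "new-onboarding"))
```

Including the experiment name in the hash means the same user can fall into different cohorts for different experiments, which stops one experiment's cohorts from lining up with another's.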
That's a really really bad bad joke about statistics that I really shouldn't have included but... So, Mikey's five F's of Design.
We have Foundation, Fast, Feature, Facts and Future. And I'll go into each of these.
And this is, as I say, it's purely drawn on my experience over the years.
Just different modes, mindsets of design that you might find yourself in.
They can work together, they're not sequential, it's just my five F's.
So Foundation. This is where you would be building the literal building blocks for your products.
So you might be putting together a pattern library. You wanna make sure that you're considering the accessibility aspects of your products. This is the perfect time to do that.
If you're working with developers, then this is probably a very good time to get them to include an accessibility step in their workflow, so that there's a definition of done: at the point where, okay, we've finished building this now, let's turn off the mouse, just use the keyboard, maybe turn on a screen reader.
Once you're happy with that, then you can say that this is ready to put into production. You wanna be working with your UX team or your marketing team to make sure that there's a specific tone and voice for your company, so that when you're building experiments, or you're just writing features, the language that you use in those is consistent and on brand.
I missed one.
You might be putting together a design system and also analytics, you can see this is a really good point to put analytics into your product.
So analytics in Foundation.
You wanna choose the right service that's for your product, for your team and indeed for your budget.
And that's a fairly important thing.
So a lot of companies will choose Google Analytics because it's free and out of the box Google Analytics can provide your product with really really useful information about time on site, how many people are coming in, how long they're staying on your product.
But out of the box, Google Analytics doesn't really provide you with information about how people are using your product. It doesn't give you click events on specific buttons or anything like that.
You can configure it to do that, but it requires effort on your part.
So just make sure that you choose something that suits your product team.
And at this point you wanna make sure that you're defining everything you wanna capture. So if you're really interested in the onboarding process into your product then you make sure that all those click events are tracked.
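One lightweight way to "define everything you wanna capture" is to agree on a list of event names up front and validate every tracking call against it. This is only a sketch: the event names are hypothetical, and the `captured` list is a stand-in for your real analytics provider's SDK.

```python
# Sketch: an agreed event schema for an onboarding funnel.
# Event names are invented; 'captured' stands in for a real analytics client.
ONBOARDING_EVENTS = {
    "signup_viewed",
    "signup_completed",
    "project_create_viewed",
    "project_created",
}

captured = []  # in practice, this would be a call to your provider's SDK

def track(event: str, **properties):
    """Record an event, refusing anything outside the agreed schema."""
    if event not in ONBOARDING_EVENTS:
        raise ValueError(f"untracked event: {event}")
    captured.append({"event": event, **properties})

track("project_created", template="scrum")
```

Failing loudly on unknown event names catches typos early, so you don't discover three weeks into an experiment that half your funnel was never being recorded.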
So I put together a bit of a big old list of tools, that URL, I'll put it up on my final slide as well and I'll stick it on Twitter afterwards as well. It's basically just a big list of products that provide analytics and that you can use to do A/B testing.
I don't endorse any one over the other, it's just literally saving you the bother of trawling through Google.
So the next step is Fast.
So this might be when you're building a minimum viable product, or you're in a startup, or you're building a product that complements an existing suite of things.
It's usually really really fast design.
You're probably using a lot of existing patterns out there. You're just getting stuff out as quickly as possible. And you're going with your gut as far as design goes. Which is great, but there aren't that many opportunities at this point to use analytics.
You can, but you don't really have enough of a customer audience to get really really good feedback.
You can't really run A/B testing that well because you just don't have enough numbers to get the cohorts to run statistically significant A/B testing.
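To see why small audiences can't support A/B tests, here's a rough sketch of the standard sample-size formula for comparing two proportions (at a 5% significance level and 80% power). The function name and the example numbers are illustrative only.

```python
# Sketch: approximate users needed per cohort to detect a conversion lift.
# Uses the standard two-proportion sample-size formula with
# z_alpha = 1.96 (alpha = 0.05, two-tailed) and z_beta = 0.84 (power = 0.8).
from math import ceil

def sample_size_per_cohort(baseline: float, lift: float,
                           z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Users needed in EACH cohort to detect baseline -> baseline + lift."""
    p = baseline + lift / 2                            # average proportion
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / lift ** 2
    return ceil(n)

# Detecting a 2-point lift on a 10% conversion rate takes thousands of
# users per cohort, which an early-stage product simply doesn't have:
print(sample_size_per_cohort(0.10, 0.02))
```

The smaller the lift you want to detect, the faster the required cohort size grows, which is exactly why early-stage qualitative feedback matters so much more at this point.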
Any feedback that you do get at this point is really really valuable though, because your early adopters are the people that are gonna be referring your product in the future which is something that I'll touch on as well. And they're gonna be your advocates in the future. So really really look after them.
And again, just make sure that you're tracking absolutely everything that you should be.
So Feature F is probably the point where a lot of people are.
It's when you're working on products that are established perhaps, you've got more users. You may start having personas for those users that you can design against.
You might be in a bit of a team now.
So you might have a product manager, a design team and a technical side, maybe working in a triad. And all of this tends to slow things down, so the cadence of design can drop a little bit at this point.
Probably because you're always trying to push out something perfect.
So you're trying to design things to 100%, which is fine, but it just makes things slower. So you can use data in this phase a lot more. You can definitely do a lot more user testing. If you can get people that use your product in to test with, fantastic.
If not there's a lot of tools that allow you to do remote user testing.
Validation through user testing early on is really really useful.
Get some great feedback on how your design thinking is going.
There are products like Hotjar, or something similar, which allow you to record people using your product without them knowing. Ethically, I don't know if that's good or bad, but I really like it because, as far as qualitative user testing goes, it's completely unbiased. When you're testing people in a lab, there may be a little bit of pressure on them to perform, but with something like Hotjar you can just literally go and watch a recording of their mouse moving around, and you get a really good feel for what they're doing, in a completely unbiased way. And you've still got the water cooler.
So you can still shop your ideas around the office and get people to give feedback.
So my fourth F, facts, which I'm gonna spend a little bit of time on and give you an example.
Facts is when we get into the nitty gritty of actually using the data and using statistics. So this is the General Bivariate Normal - Density (Matrix Notation).
This is an equation used in statistical analysis and you don't need to worry about any of that which is awesome, 'cause I'm not really very good on the whole maths thing.
Any of those analytical packages that I talked about earlier they'll do most of the hard work for you so if you're using them and you're running tests and you just wanna find out how people are using your products they can give you the numbers.
So you don't have to do a lot of hard mathematical work. If you are running your own analytical service then make sure you've got a good scientist. They're amazing people and they can do incredible things with numbers that just blows my mind.
And at this point you wanna be defining some funnels. A funnel is basically a way of showing how people drop off as they use your product. There's a lot of different funnels out there. Your marketing team might use the pirate metrics, AARRR. (he laughs) Pirate metrics describe how people come into your product; it's basically your onboarding funnel most of the time. So you're looking at acquisition, when people sign up, and you lose a couple of people here.
Then we move into activation where they might give you their email address which is great.
And start configuring things a bit perhaps. Then you lose a couple of people there.
And then this is the key one because you want people to come back.
So retention is usually where you find that you may have some problems.
Referrals, as I said earlier, that's a really good point: if you're at the referral point and people start talking about your product, then you're onto success. And unless you've got stacks of venture capital backing you up, you want revenue. So if you can get to here, then fantastic, but you've probably lost 80% of the people along the way.
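Once the stage counts are being tracked, the drop-off through an AARRR-style funnel is trivial to compute. The stage counts below are invented for illustration; the talk's real figures weren't shared.

```python
# Sketch: step-by-step drop-off through a pirate-metrics (AARRR) funnel.
# All counts are made up for illustration.
def dropoff(funnel):
    """Return (stage, users, fraction-of-previous-stage) for each step."""
    steps, prev = [], funnel[0][1]
    for stage, users in funnel:
        steps.append((stage, users, users / prev))
        prev = users
    return steps

funnel = [
    ("acquisition", 1000),
    ("activation",   620),
    ("retention",    310),
    ("referral",     120),
    ("revenue",       80),
]

for stage, users, frac in dropoff(funnel):
    print(f"{stage:<12} {users:>5}  ({frac:.0%} of previous stage)")
```

The step with the worst fraction-of-previous-stage is your low-hanging fruit: in this made-up data, half of activated users never come back, so retention is where you'd experiment first.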
So I'm gonna use an example here again from Atlassian. So I worked on the growth team for JIRA software and JIRA software, if you're not aware of it, it's the sort of the developer kind of flavour of JIRA, and JIRA's a sort of workflow engine essentially. And one of the key tenets of JIRA software is the project, the concept of the project.
And so if you're coming in, you're working on issues in that workflow you wanna make sure that the project that you're using is correctly configured otherwise things can get a bit confusing.
And so we had a look at our onboarding funnel and we kinda noticed that, so what I've done, I've sort of mapped the funnel against the onboarding flow behind here.
And we looked at the numbers and we saw that there was a significant drop-off around the create-project step of onboarding. And this is just a representation, there's no numbers in there because, well, I don't work there any more.
And so we sort of hypothesised around why this might be the case and we had a look at the screen that people were presented with when they were doing that process of creating a project and they kind of arrived here and there's just this whole heap of options. The cognitive load is quite great.
And again we hypothesised that people didn't really know what they were coming for, you know they might have been told that JIRA's where you do Agile and so they were coming to JIRA because you do Agile, but they didn't know what Scrum was or Kanban. They didn't really understand the processes. So we ran an experiment that just randomised all of these options.
To see if that made any difference.
And sure enough, if you took Scrum away and put it over here basic software numbers went up, Scrum software went down. Essentially the first option that was on the screen was the one that got selected most often.
So we kind of, we were zeroing in on the fact that people didn't really understand the project creation. So we came up with a hypothesis to run an experiment to see if that was the case.
So we basically said that by simplifying the project creation process, we should get an increase in retention, because people are getting the project type that suits them.
And obviously this, as I said earlier, was a numeric value, but I can't put that in.
So I'm just gonna take a little aside here and just talk a little bit about what growth design looks like just so you get a feel for it.
So I was working on the team as a designer with three, sometimes four developers.
And each of those developers wanted to get one experiment out a week.
So I had to be designing four experiments a week so it's really really fast.
So you're planning, you're designing, you're building, you're testing and you're iterating all the time and it's a constant flow.
But you can't design things to 100%, you need to kind of get them to 80%, something like that. You push it out, you test it, if the results come back reasonably good you can then iterate, change things, tweak things, do it again.
You need to make sure that you're constantly having meetings with other people who are doing experiments as well 'cause you wanna make sure that you're not treading on each other's toes. You wanna make sure you're not building something that's been built before.
Unless you intend to.
When you're actually designing the experiment the copy that you use is really really important. If the experiment comes back successful then don't change the copy because otherwise you're gonna nullify your own results. And the same can be said for the basic feel and the experience of the experiment.
So you know, if there are any interactions or anything like that you wanna make sure that you don't change the animations or anything because again you're nullifying the result. You can change them, but run the experiment again. And again, things like microinteractions, as was touched on earlier on or yesterday I think.
If you get a successful little button that wiggles and people click on it and that's fantastic make sure that you put down guidelines about implementing that.
Because you don't want five little buttons jiggling on the same page because that experience isn't so good.
So what did we do to change that project creation from that horrible mess of options? We basically just converted it into a sort of natural language survey which just took a lot of the load away from somebody that was coming in.
So by asking them whether their team was new to Agile methodologies or experienced in them, what sort of work they were doing, were they working on features or were they working on bugs, and whether they relied on a tight schedule or a loose schedule, we could then weight the answers to these three questions and provide them with a suitable project for their team. So in this case, a Scrum project would be the best option for them.
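A weighted recommendation like that can be sketched as a simple scoring function. To be clear, the weights, answer options and template names below are all invented to illustrate the idea; the real experiment's weighting wasn't published.

```python
# Sketch: weight three survey answers to recommend a project template.
# Weights and options are hypothetical, not the real JIRA experiment's values.
def recommend(experienced: bool, work_type: str, fixed_schedule: bool) -> str:
    """Pick the template with the highest weighted score."""
    scores = {"scrum": 0, "kanban": 0, "basic": 0}
    if experienced:
        scores["scrum"] += 2
        scores["kanban"] += 2
    else:
        scores["basic"] += 2          # new teams get the simplest template
    if work_type == "features":
        scores["scrum"] += 1          # planned feature work suits sprints
    elif work_type == "bugs":
        scores["kanban"] += 1         # incoming bug flow suits a board
    if fixed_schedule:
        scores["scrum"] += 1          # sprints suit a fixed cadence
    else:
        scores["kanban"] += 1         # continuous flow suits a loose schedule
    return max(scores, key=scores.get)

print(recommend(True, "features", True))   # an experienced, scheduled team
```

Three plain-language questions replace a screen full of jargon, and the weighting quietly does the mapping from answers to template that users were previously being asked to do themselves.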
They put their project name in and away they go and this was great.
It resulted in a you know, a significant increase in retention so it was a successful experiment.
But, and there's always a but, most experiments that you run will be inconclusive. They won't shift the needle one way or the other. But the thing to learn here is that there are no failures in experimentation. You know, when I hear this sort of fail-fast terminology, it's not the case, 'cause you're always learning.
Every single experiment you run, you learn from, regardless of whether it's successful or inconclusive. So, the five Fs: this is my fifth F, Future.
Now this is more about sort of pushing things forward. So this is where you can start to play around with new patterns and new experiences.
Don't be afraid to push boundaries when you're experimenting and trying things. But deliberately pushing out something new and interesting in the knowledge that it's gonna make the experience worse for your customers is really not cool. That's not a good thing at all.
That's not how experimentation should be.
It should be about making the experience better. Always making the experience better.
Always validate new patterns with qualitative testing first, 'cause it just gives you a really quick feel for them. But having said that, even if the results come back not the way that you intended as a designer, that doesn't necessarily mean it's a bad thing. Sometimes your gut is worth trusting, so push on with it. Again, just make sure that you're not making the experience deliberately worse.
But trust your gut.
Every established pattern had to start somewhere. And that's the thing to remember.
If you push something out and that experiment is successful and it becomes something that you know, your company wants to use elsewhere, by all means roll it back into your pattern library. Make it part of your design system.
So what have we looked at? We've had a look at the metrics you can use: so bake in your analytics, use qualitative testing, use prototypes and define some funnels, 'cause they're a really, really good way to see where you're getting drop-offs and grab some of that low-hanging fruit that you can work on.
When you're designing experiments, just remember that it's really, really fast. You need to have a hypothesis, and use the existing patterns in your design system, 'cause that speeds up development. And also, if you're designing using Sketch or something like that and you're sitting with the developers, use products like Zeplin or InVision, where you can hand over your designs to the developers really quickly and they can get actual pixel values and colour values and stuff.
It just speeds up the whole process.
When you're building them, build experiments to sort of about 80% there, test, iterate, do the final 20% if it's successful and you're gonna push it out as a feature.
That's when you can refine things.
When you're running experiments, just expect most of them to be inconclusive. You know you can't be disappointed about this because most of them won't be what you're expecting. But there are no failures and you're learning from everything all the time.
Really really important to sit with the dev team. Sit with the product team and just have your little group in the corner and just involve everybody in the process. That's really really important.
Spar your ideas with other designers if you can, and always be aware of the bigger picture, because you want to make sure that you're not treading on other people's toes. And maybe you can discover your own hamburger. So this was from a website that McDonald's put up last year that allowed you to create your own hamburger. And the nature of the web in 2016, 2017 meant that everybody abused it horrendously. And this is easily the most politically correct version of a burger that I could come across.
So perhaps if McDonald's had done a little bit more testing they may have thought twice about running that product. Thank you.
(applause) Big old list of tools.