(upbeat music) - So, here's three topics that people love to argue about. People really like to talk about the definitions and what's right and what's wrong and when you should use them and when you shouldn't. I don't really care about all that 'cause it's really boring.
So I think with the 20 minutes that we have I'm gonna try and give you the cliff notes and just state my beliefs about what I think these are and how they fit together.
And then get into actually how we can do some of this inside organisations so that we can better work as teams together. So let's start with design thinking, this means a lot of things to a lot of different people. There's the hexagons, the diamonds, the squiggly line of uncertainty, the infinity loops.
But actually, design thinking is none of those. We should meet Carissa Carter, we should listen to her; I think what she has to say about this is right on. She says that process and tools are just there to help us get started, but actually design is about practising and developing our abilities, and it's those abilities that help us be better designers. She thinks of these as some of the core abilities that designers have, and we can bring those to bear in the way that we determine real problems, meaningful, worthwhile problems to solve; search for solutions, considering many different options of how we might solve it; and then finally, converge on the solution that we want to go forward with.
I think that's a really good way to think about design thinking and how we can bring it into the work that we do.
So then let's think a little bit about what lean is and where it comes from and why it's helpful.
So the summary statement is that lean is about optimising systems of work. You probably know lean comes out of manufacturing, mostly. I really find lean people to be a bit weird. They talk about these things: eliminating waste, theory of constraints, total quality management; they get excited about queues. These are not my people.
All those things matter and they're helpful, particularly in a large organisation context where we're really trying to do things like optimise how value flows through the organisation, empower people, and build quality in: all these ideas that come out of lean. But I think the unfortunate thing is that's where a lot of the conversation stops in the lean community, and that's a shame, because I think this is probably the most powerful and useful thing about lean from a product development point of view.
So we can think about lean as being about scientific thinking: it's about exploring uncertainty, and it's about how we learn by doing and adapt through that process of trying things and seeing what the result is.
So how about agile then? Well, here are some principles of agile. I think it's ultimately about optimising software delivery, and I'm going to take a pretty safe bet that in the room, you either know this stuff or you're really sick of talking about it. So let's move on from here. I think agile fundamentally came out of doing software better, and I think that's a useful frame to apply to how we think about agile and where it fits. There are loads of people that disagree with me on that, a lot of people that say agile is more than just software. Cool, happy to disagree; I think it's more helpful to start to talk about how that fits and how we can do stuff together.
So I find that too often the question people want to ask is: is it lean or agile? But the answer, I think, is "and".
So it's lean, design thinking and agile together that helps us to achieve good outcomes.
And responding to change over following a plan, I think that's the key thing in agile, for product development teams that we should be concerned with.
So, how do we start to bring it all together? I think that design thinking is about how we explore and solve problems.
I think that lean is our framework for testing our beliefs and learning our way to the right outcomes, and agile is how we respond to change in software. We can go deeper into this, and if you want to go down that rabbit hole, there are lots of rabbit holes to go down.
You can start by reading the 70-page book that John mentioned that I published; it's me trying to understand it and make sense of it for others.
But let's not go down that rabbit hole right now; I'd rather talk about how we do it. So we can ask the question: how?
How do we do design thinking, lean and agile together? And I wanted to start by saying that it's about mindsets not processes, so you're not going to see any playbooks.
I don't believe there's a right way, I don't believe there's one playbook or a set of steps or a process that helps organisations that do this in any meaningful way.
But instead I can show you this mud map of how things start to fit together, specifically talking to the intersections. So, lean and design thinking together is where we start to understand where we're at today, and where we want to get to tomorrow.
It's how we pursue success through exploration and experimentation, and how we do validated learning. You can see how some of this stuff comes together: there are core things about design thinking, exploring problems, starting to find solutions. How do we take a direction and set a course? And then how do we use some of the lean principles to help us learn and experiment and make sure we're aligned to the things that matter inside the team that we're working in, or the organisation, or the customer that we're delivering value to?
When I think about agile and lean, I think of this as where strategy meets execution. So lean's our framework for testing our beliefs and exploring and refining our strategy, but it only works if we're able to respond to what we learn in real time, and that's where agile comes in: this ability to respond to change all the time, continuously.
And then finally, design thinking and agile: I like to think of this as a collaboration on realistic solutions.
So software's the medium and it's engineers and designers that are working together to create solutions that deliver value.
And I think that is for me, a pretty high level way of thinking about the different things that are going on inside, particularly in larger organisations, where I've spent, let's be honest, all of my career.
And so it's a little bit different to some of the conversations that come out of lean startup, where you might be one very small team, without the same constraints as some larger organisations. Cool, so there are my beliefs; that's how I think it all fits together.
Now let's talk about what this looks like at ground level. Ways to think about doing stuff.
So I think that continuous learning is the new competitive advantage, and I think it's because we live in an increasingly complex world.
I don't really know if the world was more complex 100 years ago, because I'm not 100, but I think this matters because I know, for myself, most of the ideas that I have are terrible, and most of the time I'm wrong about how they're going to play out in reality. I can't remember who it was yesterday, I think it was Nicole, talking about this: ten years ago we used to hedge our bets and hope for the best. We'd do a lot of stuff in the lab, and we'd hope that it was gonna work, but really, ultimately, we didn't know until it was shipped into market and customers were interacting with it.
And I think some other things have changed which kind of contribute to this.
So we start to think about complexity.
I think that complexity has increased because of the expectations created by the decades of innovation that have happened recently.
I think also because of the possibilities of today's technology, things just feel more complicated, things like machine learning algorithms that are always evolving, and have a mind of their own in a way.
This creates new challenges, but I also think that the cost of creating software has reduced over time, and that's because of things like platform as a service and cloud and patterns and lean and agile ways of working, and all that good stuff. But when you look at these two things together, and you consider where we are now compared with where we were then, I think it makes a good case for moving toward experimenting in real time with working software.
It's not possible to do all the time, but think back 10 or 15 years ago: UX was a huge competitive advantage, and organisations that were exploiting that capability, learning faster and cheaper than they could in a software team delivering working products and features, were outperforming their competitors by a significant margin.
I think the thing that's changed is that now, what we see a lot is that high-performing teams are able to experiment faster, and often cheaper, than doing it in the lab, and I think it's a higher quality of learning because you're doing it in a real situation: there's nothing contrived about it, there are no lab conditions, none of the things that you have to deal with in that way.
And so, I think a lot of the best product teams in the world are exploiting this, they're taking this approach, and I think that is the new competitive advantage in today's market.
So what does it look like? I really like Jeff Patton's Dual Track Development Model, this is an agile thing, and the fundamentals are that there are two cadences happening here.
There is a learning cadence, that's coming out of discovery, and there is a development cadence that is coming out of delivery.
But the thing is that these are not separate teams, working in a waterfall way with phases along the way. I think for a long time this has been, you might have a discovery team and a development team, and they're somewhat connected, but a little bit separated.
I think a great goal here is for this to be just one team, doing discovery in the same backlog as delivery, and developers usually throw things at me when I say that. But it's a good aspiration, because you're all striving toward the same thing: learning what's gonna work and getting the confidence to invest in the next most expensive experiment to learn the next most valuable thing. That's really what product development is about, I think. Here's the lean version of it. This is Mike Rother's Improvement Kata; you've probably seen this before. I think of this as the way that we can practise the movements of scientific thinking.
Mike's basically saying how to do it is: one, set a challenge; two, understand where you're at; three, identify the next target condition, where you want to get to next; and then four, run experiments at what he describes as the threshold of knowledge, to find your way forward.
I think that's at the heart of the rapid experimentation teams are doing: trying to identify the thing you need to learn, and finding a way to go ahead and learn it. My pal Melissa Perri has done an amazing job of reframing this to be more specific to product development, and I think her model, which is built off Mike Rother's Improvement Kata, just brings it into a little bit more of the context of what we're doing.
Understand the direction the company's going in, analyse the current state, set the next goal, and then figure out what stage of the product development lifecycle you're at, and select the right tools to move forward. She's awesome.
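Rother's four steps above can be sketched as a toy loop in code. This is purely illustrative, the kata is a practice, not an algorithm, and all the names here are mine, not from Rother or Perri:

```python
def improvement_kata(challenge, current, next_target, run_experiment):
    """Illustrative loop over the four steps: challenge, current condition,
    next target condition, then experimenting toward it.
    run_experiment(condition, target) returns the new condition after one experiment."""
    condition = current
    while condition != next_target:
        # Step 4: experiment at the threshold of knowledge until the target is reached.
        condition = run_experiment(condition, next_target)
    return condition

# Toy usage: the 'condition' is just a number we nudge toward a target.
result = improvement_kata(
    challenge="reach 10",
    current=7,
    next_target=10,
    run_experiment=lambda c, t: c + 1,  # each experiment moves us one step closer
)
print(result)  # 10
```

The point the loop makes is the one in the talk: you don't plan a straight line from 7 to 10; you run one small experiment at a time and let each result set up the next.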
Okay, what does it look like on a wall?
So, here's discovery and development running from one wall.
And if we start out with our Aspirational Customer Journey, or our goal, or the direction that we're moving in, we can move into our beliefs and assumptions, our hypotheses about how we're going to win. What do we then need to go ahead and learn? Those are our Design Experiments.
On this wall you can see that there's a range of experiments happening at different levels of fidelity. You can see some relatively high-fidelity design in there; those have been through a few iterations at the early stage of the customer journey. Then there's some stuff over on the right here that's way more sketchy, where we're talking about pencil sketches and trying to figure out what we're doing.
The important thing, though, is that as we start to get confidence by testing our beliefs, we can then start to feed that into the stuff that we're actually building out in the development stream.
And for us this was one wall for an entire team. This was the one place that everybody came together, whatever the cadence was, a few times a week, to talk about where we're going, what we've learned so far, what we're going to do next, what we're ready to push into development, and then, importantly, once we've shipped our feature: what are we learning still? What data are we getting back? And how can we use that to keep refining? And so that feeds in a little bit to designing the right experiments to learn the right things. This sounds so simple and obvious, but it is so easy to get wrong as well.
When we're talking about designing experiments, a really simple way to think about it: the first step is to define your beliefs and assumptions, then decide the most important thing that you need to learn to make the next decision, and then design the experiment that can deliver that learning.
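That three-step loop can be written down as a simple record, which some teams find useful as a template. This is a sketch of my own, not something from the talk, and the field values below are illustrative, loosely based on the tax-free-shopping story that follows:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One pass through the loop: belief -> most important learning -> experiment."""
    belief: str          # step 1: the belief or assumption we hold
    learning_goal: str   # step 2: the most important thing to learn for the next decision
    method: str          # step 3: the experiment that can deliver that learning
    result: str = ""     # filled in after the experiment runs

exp = Experiment(
    belief="Existing customers will find the new app on their own",
    learning_goal="Will customers actually discover and install the app?",
    method="Demand test: announce to a small segment and measure installs",
)
print(exp.learning_goal)
```

Writing the belief down explicitly is the part that matters: the story coming up is exactly what happens when a belief like the one above never gets stated, so it never gets tested.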
Here's an example of us getting it super wrong. We were redesigning a product to make tax-free shopping refunds easier for people, and we were really focused on the onboarding experience.
This is pretty elaborate. This is one of our product hypotheses, and we've got metrics in there; we're measuring different things in the prototype, and we're looking for different stuff that we're going to measure in the real product. And as a team, we were super smug. This was probably about four or five years ago, when hypothesis-driven development was a relatively new idea. And we thought we were great.
And we moved on, but the thing is that we had this really big, dumb assumption: that acquisition was sorted, because we had existing customers.
So we thought this was fine and we weren't going to need to worry about it, and that's the reason that we were focused over here, on activation and onboarding.
But we were way wrong, we were super wrong. What actually happened was that we designed a better than average onboarding experience, we'd invested a lot of time and it was good, and we did the big app release, and nobody knew about it, and nobody cared about it, and nobody used it. It was a classic zombie app move.
And it was an expensive and humbling lesson. What we didn't do was think properly about: what is it that we're trying to learn? How is that going to help us make the next decision? And what is the right experiment that's going to help us go forward?
And so after learning that lesson, I think there's a simple way to break down the experiments that you can run. You can do things around validating the problem: things like observation and research, customer interviews, ethnographic work, qualitative types of research.
That's different to evaluating a solution. This is the world that you all probably know pretty well: prototypes, taking your solutions and sketches and concepts to customers.
But the one that probably gets the least amount of attention, at least in my circles, is how we go about experimenting on the demand side. What are some marketing things that we can do? With our tax-free shopping product, we had really high confidence that there was a meaningful problem we were solving, 'cause we'd been out and seen customers struggle with it; we'd seen the pain points, and we'd mapped it, and we understood how to solve it. We did a better than average job of solving that problem, and we were confident in our solution. The bit that we missed was: is there demand? Do customers care? How do we reach them? How do we get them to this product in the first place? It's really easy to ship a great product that nobody cares about, or a great product that nobody knows about, and just thinking through whether you've covered your bases in those areas is important to do.
There are a few ways to do that; in my short book there are probably 20-odd methods across those three groups.
But actually, this is the book you should buy. It's forthcoming: David Bland and Alex Osterwalder are bringing it out just now, and I think there are 50-odd methods in it.
I'm not sure how many people will be familiar with David Bland, but if you're interested in rapid experimentation, or you're doing that in your teams, just go read his stuff. It's really good; he's got loads of experience and really good methods and examples on how to get that stuff done, and do it well.
So another thing to think about while we're in experimentation world is the idea of cost versus confidence. You know these methods, right? We don't need to walk through them, but I think sometimes we put too much faith in certain things that we like doing.
Our team really likes doing prototypes, so we do prototypes.
Or we're really great at doing surveys, so we'll do surveys.
I guess the message I want to leave you with in this is about quality of insights, about quality of learning, different methods have a different fidelity of insight that you can get from them.
And so things that are pretty low confidence, like a quantitative survey, it doesn't mean they don't have a place, because they're also really cheap to run, right? So the important thing here is: how big is the decision that you're trying to make? Is it a decision that you can change easily? Or is it something that really matters, where the stakes are really high? And based on that, pick the thing that's gonna give you confidence to get to the next thing. As an example, I probably wouldn't base an investment decision on the results of an online survey, but I might use an online survey to figure out the next area that I want to explore in terms of customer value and solutions and problems.
And so we can get into: if we're doing experiments, we probably should measure stuff. That sounds really obvious as well, but it's really easy to forget about. You get so caught up doing design sprints: you've done six weeks in a row, there are Post-its everywhere, you've got 100 prototypes, and then you get to the end of the week and present back to stakeholders who are trying to understand whether they want to invest in continuing with this opportunity or not, and the conversation's really anecdotal, and it's usually inconclusive, in my experience. You try something, you're looking for a particular result, you get to the end of the week, and the team's like, "Eh, I don't know." We didn't validate it, and we didn't get any certainty out of this, so not sure; what do we do next? I think some of that comes out of not being disciplined in the way we set boundaries around experiments, and also not being disciplined in the way we think about measurement.
So, when you think about measurement, there are four questions to ask. Can it be measured? Some people say everything can be measured; I think some things are easier to measure than others. Does it inform a decision? If the thing you're trying to measure isn't contributing to a decision you wanna make, why are you measuring it? But we do that a lot, don't we? People talk about vanity metrics and so on, but I think if we just bring it back to measuring the things that help us make decisions, that's a useful guideline for thinking about this. Will you know when you're finished? And is it aligned to the goals that you care about right now? And so here's a typical example. This is a goal that anybody in e-commerce will have seen and worked toward.
Customers can easily find what they are looking for. And a typical way that a lot of teams would think about measurement, maybe you'd think about these things.
So we want there to be customer delight. Great, super vague; I don't know how to measure that. If anyone's figured this out, I'd love to know. I'm not being facetious, I really do wanna know; it's really hard.
Business people and stakeholders always want it, customer delight, but it's really difficult to measure. Net Promoter Score: awesome, you can measure it, that's good, but it's really hard to tie it back to feature development. It's too open to external factors; it's hard to know whether the thing you did had any impact on it, or more to the point, whether you're ready to stop investing in improvement because you've reached the number.
We shipped some features, awesome, did it matter? Did it deliver value to the customer? So here's some examples of slightly better ones. If you start to think about what we talked about before, is it measurable, does it contribute to a decision, will we know when we're done.
Now we've got something more meaningful. We can talk about knowing that the conversion from a results page on a search engine to a product detail page increased by 3% from one month to the next. That tells you a lot; you can now have a conversation that says, "Yes, we're moving it in the right direction. Do we need to keep going with this, or is this enough for us to say we can stop investing the team's time in improving it and focus our attention elsewhere?" You should all know your metrics strategy; that doesn't have to be hard. Just think about what your goals are, what the measurements for those goals are, and how that relates back down to the things that are happening in the team. And then here's a useful template to get started. We can start with goals, and just to get a little leg up here, here are four words we can use to get started on how to think about a goal.
Can it increase, improve, decrease or eliminate? And then, what are we trying to do that to? That's the description of the measure, and that's how we can think about a goal.
And then the measure for that: success is when we've moved from where we're at today to where we want to be tomorrow, by a particular day. And this is what that might look like for a team that's delivering products at the end of the fulfilment cycle, if you like. Finally, I want us to talk about solving next-order problems. What I mean by this is that, and I'm sure this is the case for a lot of other people, it has been for me.
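The goal-and-measure template above can be sketched as two small formatters. The function names and the sample numbers are mine, not from the talk; only the four verbs and the "from today to tomorrow, by a date" shape come from it:

```python
def goal_statement(verb, measure):
    """A goal: one of the four verbs from the template, plus what it applies to."""
    assert verb in ("increase", "improve", "decrease", "eliminate")
    return f"{verb.capitalize()} {measure}"

def success_measure(measure, current, target, by_date):
    """Success is moving a measure from where we are today to where we
    want to be tomorrow, by a particular day."""
    return f"{measure}: from {current} to {target} by {by_date}"

# Illustrative example based on the e-commerce search goal; the figures are made up.
goal = goal_statement("increase", "conversion from search results to product detail page")
measure = success_measure("search-to-product-page conversion", "12%", "15%", "end of Q3")
print(goal)
print(measure)
```

Forcing a goal into this shape answers the four measurement questions almost for free: the measure is named, the target tells you when you're done, and the date bounds the investment.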
Problems have this way of unfolding on you. Once you've solved it at the person or practitioner level, and you feel like you've got mastery over a certain thing and you're feeling pretty comfortable, all of a sudden it becomes about the team: how can you take the whole team with you, and what are the constraints and challenges that are different at a team level from the personal, practitioner level? And you start to figure out some of those things, and maybe get some confidence, and you start to move in the right direction. And then all of a sudden, it becomes about the entire organisation, and there's this huge organism, if you want; maybe there are millions, tens of millions of dollars being invested into a huge digital transformation, and you're trying to figure out: how do we solve these problems that we've never seen before, so that we can create the conditions at the team level to do the right things, so that the people at the individual level can do what they need to do? And the hard thing about this is that there is no truth, there is no one single truth on how to do this. Let me explain with this metaphor.
So take a look at this picture and have a think about what it is, and what it's about, and the meaning of the photograph. It's not that interesting at this point, it's somebody that is sitting outside in the sun, probably not too much going on.
But if we step back and zoom out a little bit, now it's a group of people. Maybe they're friends; it looks like daylight, it looks like summer time. They're meeting at a waterway, maybe they're having some lunch; somebody's been on a push bike.
Starting to get a bit of a sense of what's going on now. Then let's step back a little bit more, and now it's September 11, and the meaning of this photograph has changed entirely, and it's completely because of the distance of the camera from the scene. And so I think this is true inside organisations too: we have different people operating at different altitudes, focused on different elements of trying to deliver value to a customer at the end. And I think one way we can start to overcome that is just to get people together, to get people in the same room, making space for the different perspectives and the different ways of making meaning, and trying to combine all of that together so that you can find a path forward as a group. And that doesn't have to be that complicated either. This photograph is just showing a team of people that we were working with. They're basically annotating a high-level service blueprint: the value that their customers are drawing at different stages of the lifecycle, their interactions and touch points, the business and technology systems underpinning all of that, and the business processes and back-end things that are happening. Then they're looking for opportunities to improve that, opportunities to sort things out across multiple layers of their organisation, whether it be product development, or process improvement, or something else, so that they can do better for their customers.
And so we're back to this idea: we have to acknowledge there's a lot going on here; this isn't an easy challenge to move through. Consultants will probably come, and they'll probably give you a playbook, and they'll probably tell you what to do and how to think, but they're wrong.
You have to figure this stuff out for yourself, because every organisation is different. The constraints that you're operating within and the conditions inside your organisation are different. The people inside your organisation are different, and the levels and the altitudes are different too. So all the way from teams shipping features, to leaders deciding how they're going to respond to some market change, you've got to find ways to take some of the ideas from design thinking, lean and agile, make sense of them for yourself, and figure out what works in the context of your organisation.
I'm not going to leave you hanging with that. There's one pattern that helps with this a little bit, and it ties in somewhat with some things we heard yesterday from Anna Harrison. She was talking about Hum, and I think this model of seed and split is a way to scale Hum, in a sense.
I think a way to do this is to start with an exemplar team, and an exemplar team is usually a cross-functional group of people, from various parts of the organisation, you'll have some designers, some technical people, you might have a lawyer in there depending on what type of organisation you're from, there could be BAs.
And their job in life is to overcome all of the constraints that they're going to bump into, and all of the pushback from different parts of the organisation who don't want to work this way, because they don't like change, or they're not open-minded, or all kinds of other reasons. Some of those challenges tend to be things like how funding is allocated, or the need to manage risk and govern things in a way that makes people feel safe. They're all really good things; they're not wrong. That's just the condition inside the organisation.
But the idea is that the exemplar team gets to come together and work that stuff out: okay, legal counsel needs us to have assurance around this certain thing, so how are we going to give them that? Let's work out a pattern that works for everybody, that lets us work in a different way but meet the needs of the other parts of the org.
So once that team gets some confidence and they start to hum, once they've worked it out and they've got some patterns, you split them up.
And those people who have that experience can then take on some new people and start to teach those patterns, and things evolve. Negan mentioned yesterday the strangler pattern; you can apply this to the organisation too. It takes time, but if you start small with one team, work things out, then move out to the next team, and scale that, and so on and so forth, inside a large organisation, over time you can transform the way people are doing work for the better. And it all began with this one exemplar team: not taking a playbook, but taking a mindsets-and-principles way of thinking and overcoming the challenges in front of them.
So to leave you with a summary.
I think continuous learning is the new competitive advantage.
Here are some ways to do that. One: be stubborn on the vision, but flexible on the details, and create the conditions for teams to learn their way forward.
Have a strong vision, but don't dictate the details, focus on confidence over certainty.
Design the right experiments, to learn the right things.
We can do this by defining our beliefs and assumptions, deciding the most important thing to learn, and designing experiments that can deliver that learning. Number three: measure things that matter.
So can it be measured? Is it going to inform a decision? Do you know when you're gonna be done? And is it aligned to your goals? And finally, try to solve some next-order problems. Zoom out, work together at the intersections. Try some new things, adopt the things that do work, and learn from the things that don't.
Thanks very much for your time.
(audience applause) (upbeat music)