The Net Promoter Score: ‘magic bullet’ or ‘snake oil’?

(lively synthesised music) - Hi everyone.

I'm Daniel.

I've been a Product Manager for over 16 years and I'm not sure if I'm a scientist PM but I do love science.

I respect the scientific method and I truly wanna make decisions based on data over gut feel.

Since I started as a PM, I've seen the Net Promoter Score come up more and more.

It has increasingly become the default instrument for seeing whether any of us were on the right track to achieve that thing we're aiming for.

Creating and maximising value.

It seems almost too good to be true but maybe it is true.

Many very smart, highly paid, talented people all around the world, have adopted NPS as a primary measure for the wellbeing of their businesses, their brands, their products.

So they couldn't all be wrong, could they? Maybe it is the magic bullet that designers, Product Managers and the C-suite can all rally around and focus on to truly understand whether they are designing and building the product, services and brands the way they should be designed and built.

However, a little while back as I looked deep into my soul, and I just felt like calling bullshit on the whole thing. Nevertheless, I did my very best to suppress my own cognitive biases and embarked on a personal journey of discovery to learn more.

I started by looking at the history of the Net Promoter Score, and then investigated how useful a tool it should be for someone like us, Product Managers. This is Fred Reichheld.

And he is not a lightweight.

Essentially, he built his very impressive career on the subject of loyalty, mainly customer loyalty.

He's been working for Bain forever.

And as well as consulting, he's written a number of really successful books and articles about the subject.

His model is fundamentally that fostering loyalty is supremely important.

Because it is one of the most cost-effective ways of promoting growth.

The way I read it is, if you deliver great products and experiences, customers will be more loyal.

And then, on the one hand, they are more likely to act as ambassadors, driving new adoption; and on the other hand, more likely to be retained as ongoing customers. So firstly, increased word of mouth means that businesses can expect higher rates of trial and adoption, which reduces their customer acquisition cost. Fantastic.

On the other side, increased retention leads to increased purchases, which then translates to increased lifetime value of a customer.

So it makes a lot of intuitive sense, right? And you end up with this virtuous cycle, acting as a multiplier for your marketing and your product investment spend.

All of this eventually leading to company growth. So here's the thing.

Our friend Fred wrote an article published in the Harvard Business Review back in 2003 called "The One Number You Need to Grow".

And it's not a bad article.

In a nutshell it says that Reichheld had already established that companies with a heavier focus on fostering and measuring customer loyalty were more likely to be successful.

And the hypothesis he wanted to test was that existing methods of measuring customer loyalty were just over-engineered, over-blown, there were too many of them, and they were not as indicative of company growth as they should be.

He ran his own study and his findings surprised even him. He found that a single question exhibited the highest correlation against actual customer purchase and referral behaviour, which he did his best to measure.

And that question? No prizes for guessing.

"How likely is it that you would recommend "company X to a friend or colleague?" The scale that he used was, zero for "Not at all likely", 10, "Extremely likely", and five is "Neutral".

Okay, great.

He also wanted to see if there was some correlation between responses to that question and the growth of the company.

So he found what he thought was a pretty convincing correlation, between growth and a single number derived by splitting out customers that are giving you either a nine or 10, Promoters; and those who gave you a six or below, Detractors. Sevens or eights are treated as Passives.

The idea is they're happy enough, but not enough to actually go out and advocate for you or be fiercely loyal.

He took the percentage of Promoters, subtracted the percentage of Detractors, and called that calculation the Net Promoter Score. And ta-da! We have a single question and a simple calculation, NPS, that is a great proxy indicator for loyalty. And therefore implies future company growth. NPS went gangbusters.
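That calculation is simple enough to sketch in a few lines. This helper is purely illustrative, not any official implementation:

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey responses:
    % Promoters (9-10) minus % Detractors (0-6).
    Passives (7-8) count in the total but cancel out of the score."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

# 4 promoters, 4 passives, 2 detractors out of 10 made-up responses
print(nps([10, 9, 9, 10, 8, 7, 8, 7, 6, 3]))  # → 20.0
```

Note that the Passives still sit in the denominator, so they dilute both sides of the subtraction without moving the score themselves.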

Fred wrote a book and CEOs, execs and consultants all around the world went NPS crazy.

It was being measured everywhere and it took on a kind of cult status.

So all that seems great.

But is this actually good science? Well.

(audience laughing) Well it feels like science, right? I mean we use words like correlation and calculation and causation.

There's probably some spreadsheet work in there somewhere, so sure, why not? But does it stand up to scrutiny? Well even our friend Fred in that groundbreaking article, said that NPS was, quote, "simply irrelevant in some industries", with him specifically calling out, I kid you not, database software and computer systems as a case in point. It was also not predictive at all when a company was operating in a monopoly or near monopoly conditions.

It's also worth noting that Reichheld looked at past growth when he did his research rather than looking at future growth.

So NPS was actually modelled on predicting the past. And how good is the correlation? Well others have tried to replicate Reichheld's study and came to the conclusion that, yes, there is some correlation between growth and NPS, but it's actually not surprisingly a better indicator of past or concurrent growth than it is of future growth.

And according to one study, it actually explained only about 38% of future growth. It's also worth noting that NPS is quite volatile: a small shift in average sentiment can result in huge changes in NPS, because if enough customers cross those somewhat arbitrary thresholds, the score can swing significantly.
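A tiny made-up example of that volatility: every respondent below moves up by exactly one point, an average sentiment shift of one point on an 11-point scale, yet the score swings by 80 points because so many responses cross the category boundaries.

```python
def nps(scores):
    # % Promoters (9-10) minus % Detractors (0-6); 7-8 are Passives
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Invented profile: 40 sixes, 40 eights, 20 nines
before = [6] * 40 + [8] * 40 + [9] * 20
after = [s + 1 for s in before]  # everyone shifts up one point
print(nps(before), nps(after))   # -20.0 60.0
```

The sixes become sevens (Detractors become Passives) and the eights become nines (Passives become Promoters), and the single score amplifies both crossings at once.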

And lastly, there is a lot of variability in the underlying survey data that we don't see when we look at the NPS, and we distil it down to a single score.

Let's talk about that for a minute.

So here are two completely made-up, 'cause I made them up, but plausible NPS profiles.

And as Product Managers, if we assume that we are all true, devoted NPS believers, what do we do with these profiles? Well Company A tells me that we have a big problem, right? A significant segment of our customers, straight up, hate us.

Another segment loves us.

Who are they? We don't know.

Why haven't the haters left us yet? We also don't know.

How do we make the haters like us more? We really don't know.

One thing we might glean from this is that it's probably worth cutting those haters loose if we can. They're probably saying bad things about us to their friends and colleagues.

They are probably also expensive for us to service and support.

And we probably wouldn't even bother trying to convert them, because even if we on average shift those haters up four points in the scale, they are still going to be detractors according to the NPS. Conversely, if we look at Company B, we say it's probably not too unhealthy, but there is some room for improvement.

There are very few actual haters, but still a good few of what we call NPS detractors. Now if we could just shift a few of those sixes to sevens, and a few of those eights to nines, we could see a significant shift in NPS.

Now, of course, the NPS for these two profiles is identical and it is zero.

So Company A might be, say, a monopoly like a telco, and the haters are regional customers who have a truly execrable experience with their internet connection.

And those on the other side are people who are in love with their new 5G connectivity. Company B has more potential for improvement because winning over a six is far more likely than a zero, even in the medium term.

And those sixes are also far less likely to be assassinating us on social media.
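Profiles like these are easy to reproduce numerically. Here is one invented pair of distributions, wildly different in shape, that both land on an NPS of exactly zero:

```python
def nps(scores):
    # % Promoters (9-10) minus % Detractors (0-6); 7-8 are Passives
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Company A: polarised, a block of haters and a block of lovers
company_a = [0] * 30 + [7] * 40 + [10] * 30
# Company B: mild, mostly sixes through eights with a few fans
company_b = [6] * 20 + [7] * 30 + [8] * 30 + [9] * 20
print(nps(company_a), nps(company_b))  # 0.0 0.0
```

Both scores are identical, but the product decisions you would make for a company full of zeros and tens are nothing like the ones for a company full of sixes and eights.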

Is it being applied scientifically? Sometimes, but mostly, hell, no.

Here are some examples.

So people are still collecting NPS data based on single touchpoints or small product features.

Some call it eNPS, where the E stands for "episode"; not to be confused with the other eNPS, where the E stands for "employee".

Don't get me started.

(audience laughing) So based on what we've already seen, it makes very little sense to try to measure loyalty based on a single interaction.

Now some are using it in an industry where it makes absolutely no sense whatsoever. Here is an example of a local council in New Zealand implementing NPS.

Now I mean good on them for trying to measure the satisfaction of their stakeholders.

They wanna be held accountable, they wanna do a good job.

But are people ever really going to be loyal to their local council? Can they just use another one if they don't like it? And what does growth even mean for a local council? The most insidious though is the fact that many people are actually incentivized to improve NPS because share prices are linked to future growth. So if we can show that our NPS has gone up, we get our bonuses.

This results in bastardised NPS collection. And here are a couple of examples.

Here is an NPS survey I received from a rideshare company who will remain nameless.

(audience laughing) Hands up who can see what's wrong with it? Yeah, okay.

(heavily sighing) It has been color-coded for my convenience. (audience laughing) If I thought they did a reasonable job here and I wanted to give them a solid eight out of 10, they are signalling to me that that's not quite good enough.

And only a nine will actually be considered good. And this is going to inflate their NPS.

Any science that NPS might have held on to here has gone out the window.

But it can get worse.

So here's an NPS survey from a well known telco, whose name I also won't mention, and this one is ingenious.

See if you can spot the flaw.

It's not there yet.

Let's run through the flow.

Was I able to perform my intended task today? Let's say yes.

Number two is a question that checks if the website stops you rotten people wasting valuable staff time.

It's valid, fair enough.

It's something I wanna collect.

Number three is the standard 11-point scale, looks good. And number four is that standard Qual question which, unless there is a very specific problem, will never be filled in.

But okay, that's fine.

Let's start the flow again.

This time let's say no, I wasn't able to perform my task today.

Oh look at that! (audience laughing) Now that is super helpful.

They have noticed that, hey here is a customer in distress. Let's see if we can turn him around.

And there's a great big button for me to click. Great customer service.

Oh and there's also a little link there that I could click to get back to the survey, but it doesn't look too appealing.

So I think you can spot the problem with this one. Now I'm not suggesting that this unnamed telco is up to anything nefarious here.

I'll assume they have the best intentions, right? And they saw this as an opportunity to make the customer experience better in real time. Here is someone who has a problem, let's fix it. The bigger problem, however, is that as a tool for measuring NPS, it subtly filters out those who were not satisfied. Which introduces bias, which renders the whole damn thing invalid. So I'm not accusing anyone of bad intent here, but if NPS is part of the executive incentive structure, then you are asking for something like the two examples I just showed you.

And, yes, in this one, movement in NPS is a part of their executive remuneration plan.

Here is a letter to their shareholders, in which they go into great detail to spell out how much a shift in NPS is going to be linked to their rem. Look, if I was them, I'd also feel very incentivized to move NPS in the right direction.

And I wouldn't be too focused on whether the science was up to scratch.

But what does Fred think of all this? Well he is astonished.

(audience laughing) And he calls bullshit on it too.

So after all this, is NPS a useful measure for Product Managers? Well, it might be better than measuring nothing at all. And in some specific cases, it can be useful. But probably not on its own.

Some compelling qualities of NPS.

Look, a lot of us are already collecting heaps of NPS data, and it's a pretty good measure of customer loyalty. So let's look at some genuinely great features of the Net Promoter Score.

It is cheap to collect and analyse.

And you get a high response rate because it's only one question.

So you get that data back quickly.

It's easy to communicate to stakeholders.

And it is well accepted in the community.

So company boards and market analysts like it. So let's see what we can do to make the most out of it. Now remember, you're gauging loyalty here.

So measure it more at the brand or product level. You wanna look more at the longer-term trend than at daily, weekly or even monthly NPS.

If you can manage to employ a third party to do your NPS research for you, that's great.

And if they can look at a sample that includes your customers as well as other companies' customers, bringing in comparative NPS against your competitors can actually be really helpful.

Don't go confusing it for a predictor of growth or for satisfaction with your product, because it is not that.

And the most useful aspect for Product Managers is always going to be that qualitative question that we pose right after it.

There may be some clues hidden in there, for what is going well, and what problems might be occurring.

So with all these doubts you've planted in my mind, Daniel, what the hell should I do? If your executive suite wants NPS, then give them NPS.

Seriously, this is not the hill to die on, okay? There may be compelling commercial, if not product, reasons to collect this data and communicate it to stakeholders.

So it's actually not a bad measure of customer loyalty. So do it.

But don't let it ruin the flow of your product, and make sure you don't bug your customers with it too much. Please do your best to make it scientific.

Also look at other simple measures that make sense. One I like, though it might be a bit old fashioned, is Customer Satisfaction or CSAT.

Something I'm sure many of you have already used. It's similar in style, but gives Product Managers more actionable insights, because it looks at whether the product is doing its job. So here we might ask customers, "How satisfied are you with X helping you to achieve Y?" This makes it clear that it's about the product, not the brand, and the specific job that the product is supposed to help with.

The Qual question can be phrased a bit more like, "What did you like about X, and where do you think we could do better?" Now this helps with engagement, with the respondent feeling like there's potential for them to actually contribute to the product. The NPS version of this question, "What is the reason you gave this score?", forces users to justify the score they just gave and makes them feel defensive, right? And they don't wanna fill it in.

We can invite customers to actually collaborate with us here.

Then don't boil it down to a single number, if you can help it.

Look at the profile as a whole.

Look at the mean, the median, the mode.

Work out the standard deviation.

Look at measures like total percent satisfied and total percent unsatisfied, and work out which of these actually align most with your goals and with your north star. Read as many of those Qual responses as you possibly can; catalogue them if you like, or put them in a taxonomy. If you can't, run some semantic analysis over them. Build a word cloud, something that gives you some take on why what's happening might be happening. Try to find patterns you can use to understand why what is going on, is going on. And most importantly, follow up your surveys with actual conversations. People hate writing stuff in surveys, I'm sure you do. But they love talking about their problems. And sometimes, their delights.
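A minimal sketch of those distribution measures using Python's standard library, over an invented set of survey responses. The cut-offs of seven-and-above for "satisfied" and four-and-below for "unsatisfied" are my assumptions; pick thresholds that match your own goals:

```python
import statistics

# Invented 0-10 survey responses, purely for illustration
responses = [3, 6, 6, 7, 7, 8, 8, 8, 9, 9, 10, 10]

mean = statistics.mean(responses)
median = statistics.median(responses)
mode = statistics.mode(responses)
stdev = statistics.stdev(responses)
# Arbitrary cut-offs: 7+ counts as satisfied, 4 or below as unsatisfied
pct_satisfied = 100 * sum(1 for s in responses if s >= 7) / len(responses)
pct_unsatisfied = 100 * sum(1 for s in responses if s <= 4) / len(responses)

print(f"mean={mean:.1f} median={median} mode={mode} sd={stdev:.2f}")
print(f"satisfied={pct_satisfied:.0f}% unsatisfied={pct_unsatisfied:.0f}%")
```

The point is that the whole profile, not a single collapsed number, is what tells you whether you are looking at a Company A or a Company B.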

So go out there, listen to your customers to learn how to make your product better.

You'll get more mileage out of that, than the much lauded and often abused NPS.

Thank you.

I'm Daniel Kinal.

If what I've said today has stirred up some emotions in you, you can @ me here, or you can follow me on LinkedIn.

I am very approachable and not creepy at all. (audience laughing) Happy for you to reach out in any way you like. Thank you again very much.

(audience clapping) (lively synthesised music)