
> We Can’t Reduce User Experience To A Single Number

Growth is a single number, and NPS is measuring growth, not UX.

I'm no big lover of NPS, but this analysis is awful! He claims it's bad because he doesn't understand it. That's not very scientific either.

I'm not sure NPS works well at all, but the idea behind it is obvious. It's a growth metric. The goal of NPS is to tell you how many new customers an existing customer will refer to you over the lifetime of their account.

This is just like population statistics. NPS is trying to measure your customer birth rate by asking how many customers are (or intend to be) pregnant. It's not an accident that there are only two thresholds, and it's wrong to conclude that these thresholds indicate a problem with the method.

If a customer refers fewer than one new customer during the entire time they're with you, then you have a replacement rate less than 1, and you're losing customers over time. If you have a replacement rate between 1 and 2 and your customer lifetime is long (say, years), then you aren't growing fast. If your replacement rate is 2 or higher and your customer lifetime is short (say, weeks), then you are growing virally, well over 200% month over month, without the need for marketing.
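The replacement-rate arithmetic above can be sketched in a few lines. This is a toy model of the comment's argument, not anything defined by NPS itself; the 4-weeks-per-month approximation and the churn-after-one-lifetime assumption are simplifications:

```python
# Toy model: each customer refers `replacement_rate` new customers over a
# lifetime of `lifetime_weeks`, then churns. The customer base multiplies
# by the replacement rate once per lifetime.

def monthly_growth_factor(replacement_rate: float, lifetime_weeks: float) -> float:
    """Customer-base multiplier after roughly one month (~4 weeks)."""
    lifetimes_per_month = 4.0 / lifetime_weeks
    return replacement_rate ** lifetimes_per_month

# Replacement rate below 1: the base shrinks no matter the lifetime.
print(monthly_growth_factor(0.8, 4))   # 0.8x per month, shrinking
# Replacement rate 2 with a short (1-week) lifetime: 2^4 = 16x per month.
print(monthly_growth_factor(2.0, 1))   # 16.0x per month, viral growth
```

With a long lifetime the exponent is tiny, which is why a replacement rate between 1 and 2 over years barely moves the needle.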

What the people who designed NPS did, I am sure (meaning I'm speculating, but giving the strongest possible interpretation), is measure some responses and compare them to the number of actual referrals, then draw the lines where the referral rates cross from negative growth to neutral growth, and from neutral growth to positive growth. That's what I would do. And it seems plausible that people who give a score of 6 or less won't end up referring anyone, on average.

Sadly, the article doesn't conclude with any real alternatives for measuring growth. Since NPS is an indirect growth metric, the better answer may be to simply measure your growth directly. That means understanding engagement and activity, not just counting how many accounts exist. Beyond that, counting your active customers is a single number that will reliably tell you growth, and it can't be gamed by your customers -- only by you and your team.



> Growth is a single number, and NPS is measuring growth, not UX.

Connect the dots for me on how NPS measures growth. Where does it tie to growth at all?

> NPS is trying to measure your customer birth rate by asking how many customers are (or intend to be) pregnant.

Horrible analogy, but ok. If there's any equivalent, it's asking people how likely they think they are to ever get pregnant.

> What the people who designed NPS did, I am sure (meaning I'm speculating, but giving the strongest possible interpretation), is measure some responses and compare them to the number of actual referrals, then draw the lines where the referral rates cross from negative growth to neutral growth, and from neutral growth to positive growth.

They didn't do anything like that.

> And it seems plausible that people who give a score of 6 or less won't end up referring anyone, on average.

It does seem plausible. It isn't validated by any science, but it's certainly plausible. (Like the earth is plausibly flat.)

> Since NPS is an indirect growth metric, the better answer may be to simply measure your growth directly.

Agreed!


>> What the people who designed NPS did...

> They didn't do anything like that.

Now I'm really confused by your statements. I just read the link to the original source that you posted on hbr.org. What Reichheld described is exactly what I said above: he correlated survey responses against actual growth rates and drew the lines between negative and positive growth rates. Not only that, he asked the question multiple different ways and found out which question statistically yielded the most accurate answers.

Why are you claiming they didn't do that? Are you saying the article is lying about the data they used to come up with NPS?


I'm not defending NPS. But your first and biggest argument in the article is unscientific and anti-statistical. You're making an emotional case that NPS looks weird because there are thresholds. You said "For some reason, NPS thinks that a 6 should be equal to a 0." and "Make that data set to be all nines: 9, 9, 9, 9, 9, 9, 9, 9, 9, and 9. The average is 9. And miraculously, NPS is 100!" Your reasoning here is faulty. You threw in sarcastic, irrelevant comments about bonuses to make the idea of getting your NPS score wrong feel like it'll do damage.

Instead of investigating the possibly legitimate reasons the NPS people might have for doing this, you put up a straw man about all respondents giving the same score. The likelihood of every respondent in a large survey giving an 8 is very, very close to zero. The likelihood of your NPS score suddenly flipping from 0 to 100 is very, very close to zero.
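For reference, the arithmetic being argued about is easy to state. This sketch uses the standard published definition (promoters score 9-10, detractors 0-6, NPS = % promoters minus % detractors):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# The all-identical data sets from the article:
print(nps([8] * 10))   # 0.0   -- all passives
print(nps([9] * 10))   # 100.0 -- all promoters
# A realistic mixed sample sits in between rather than flipping:
print(nps([9, 9, 8, 8, 7, 10, 6, 9, 8, 5]))  # 20.0
```

The 0-to-100 jump only happens when every single respondent crosses a threshold at once, which is the point about large samples above.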

So you got my analogy and suggested an alternative, but you still don't see how probability of referral (or birth) is an indicator of business (or population) growth? You do seem to get it, so I don't understand what you're missing. I'm not sure how to (or if I need to) explain it better.

Polling a bunch of people how likely they are to refer a friend is like sampling the derivative of the growth function you want to estimate. If everyone responds accurately and tells the truth, and they refer people at the rate they said they would, you can use the data to predict your growth.
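That "sampling the derivative" idea can be made concrete with a hypothetical sketch. The score-to-referrals mapping below is entirely invented for illustration; NPS does not publish such a table:

```python
# Hypothetical calibration: expected lifetime referrals per survey score.
# These numbers are made up for illustration only.
SCORE_TO_EXPECTED_REFERRALS = {
    0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.1, 6: 0.2,
    7: 0.5, 8: 0.8, 9: 1.5, 10: 2.5,
}

def estimated_replacement_rate(scores):
    """Mean expected referrals per customer under the mapping above."""
    return sum(SCORE_TO_EXPECTED_REFERRALS[s] for s in scores) / len(scores)

sample = [9, 9, 8, 7, 10, 6, 3]
rate = estimated_replacement_rate(sample)
# rate > 1 predicts growth; rate < 1 predicts shrinkage
```

Under a mapping like this, scores of 6 and below contribute essentially nothing, which is one way the detractor cutoff could fall out of real calibration data.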

The fact that NPS puts the negative-growth line at 6 out of 10 (60%) says, to me, that they concluded people inflate their self-reported referral probabilities.

There is a mapping between what people report, and what they do. NPS might have the mapping wrong, but there is a mapping. I don't expect the NPS mapping to be very accurate, but if it's wrong I'd like to hear why. You haven't explained why it's wrong because you don't seem to understand why it might be right.

> Horrible analogy, but ok. I'd say, if there's any equivalent, it's asking how many people think they are likely they might get pregnant ever.

I don't understand what you're arguing (or why); you're splitting a very fine hair here, and the difference between what you suggested and what I said is subtle at best. The NPS question asks how likely you are to recommend this service to a friend. Someone with a low probability is likely to recommend it to 0 friends. Someone with a (self-reported) medium probability may refer 1 friend. Someone with a high probability may recommend 5 friends.

Asking a yes/no question about whether one will ever get pregnant would be a worse proxy for population growth than asking how many pregnancies one expects in a lifetime. The NPS question doesn't exactly ask either of those; it can be interpreted either way.

> They didn't do anything like that.

So what did they do? Your post ignores that question and argues it's purely bogus. I don't even know what they did, and I don't buy that NPS is pure fiction with nothing at all to back it. I totally would buy that the NPS scale was based on a small sample, and that it doesn't fit many companies very well.

> Like the earth is plausibly flat.

Not sure I get where the snark here is coming from. There exists an average response to this survey, somewhere between 0 and 10, below which people statistically will not refer anyone. What is that number? Why does 6 seem as plausible to you as the earth being flat?


You're giving the people behind this metric A LOT of credit if you think they picked a precise cutoff based on historical evidence, rather than "hmm, 2 and then another 2 sounds good".


I think it's dangerous to fail to give any benefit of the doubt at all, and to assume they pulled a number completely out of their asses.

I am giving them the benefit of the doubt, I assume it was based on more than a guess.

Now, instead of assuming, I'm going to look it up.

It turns out they described what they did, and it seems that it was based on some actual data:

"So what would be a useful metric for gauging customer loyalty? To find out, I needed to do something rarely undertaken with customer surveys: Match survey responses from individual customers to their actual behavior—repeat purchases and referral patterns—over time. I sought the assistance of Satmetrix, a company that develops software to gather and analyze real-time customer feedback—and on whose board of directors I serve. Teams from Bain also helped with the project.

"We started with the roughly 20 questions on the Loyalty Acid Test, a survey that I designed four years ago with Bain colleagues, which does a pretty good job of establishing the state of relations between a company and its customers. (The complete test can be found at http://www.loyaltyrules.com/loyaltyrules/acid_test_customer.html.) We administered the test to thousands of customers recruited from public lists in six industries: financial services, cable and telephony, personal computers, e-commerce, auto insurance, and Internet service providers.

"We then obtained a purchase history for each person surveyed and asked those people to name specific instances in which they had referred someone else to the company in question. When this information wasn’t immediately available, we waited six to 12 months and gathered information on subsequent purchases and referrals from those individuals. With information from more than 4,000 customers, we were able to build 14 case studies—that is, cases in which we had sufficient sample sizes to measure the link between survey responses of individual customers of a company and those individuals’ actual referral and purchase behavior.

"The data allowed us to determine which survey questions had the strongest statistical correlation with repeat purchases or referrals. We hoped that we would find at least one question for each industry that effectively predicted such behaviors, which can drive growth. We found something more: One question was best for most industries. “How likely is it that you would recommend [company X] to a friend or colleague?” ranked first or second in 11 of the 14 case studies. And in two of the three other cases, “would recommend” ranked so close behind the top two predictors that the surveys would be nearly as accurate by relying on results of this single question."

https://hbr.org/2003/12/the-one-number-you-need-to-grow



