The Mistake of Netting it Out – The Passives, Skewed Scale and Instability

We started out with some basic questions about the Net Promoter Score (NPS) metric. In the second episode we established that there is nothing special, superior, magical, or analytical about using an 11-point scale that spans from 0 to 10. If you insist on using such a metric, you might as well ask respondents whether they would recommend, disapprove of, or feel neutral about your product, and you will be no worse off. That does not mean the simpler question works, but it does no worse than the 11-point scale.

If you insist on sticking to the 11-point scale, let us look at the other questions I raised:

  1. What happened to those who answered 7 or 8 (the Passives)? Why are they not relevant?
  2. Why the unbalanced scale? Why are there only two levels each for Promoters and Passives while there are seven levels for Detractors?
  3. Why are we doing this percentage-subtraction math? Why not simply use the average recommend rating? Has the average been shown to be a worse predictor than the percentage math?

Note that because of the net math being used (subtracting the percentage of Detractors from the percentage of Promoters), it is possible to arrive at the same NPS in more than one way. Stated another way, any number of different distributions of recommend ratings from your respondents can produce the same NPS. A score of 10 can come from 50% Promoters and 40% Detractors, or from 10% Promoters and no Detractors.
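The collapse is easy to see in code. Here is a minimal sketch of the standard NPS calculation (Promoters 9-10, Detractors 0-6, Passives 7-8), applied to two hypothetical respondent pools built from the percentages above; the pools themselves are illustrative, not real survey data:

```python
def nps(responses):
    """Compute NPS from a list of 0-10 recommend ratings.

    Promoters rate 9-10, Detractors rate 0-6; Passives (7-8) are
    counted in the total but otherwise ignored by the net math.
    Returns %Promoters - %Detractors.
    """
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Two very different hypothetical pools of 100 respondents each:
pool_a = [10] * 50 + [8] * 10 + [2] * 40   # 50% Promoters, 40% Detractors
pool_b = [9] * 10 + [7] * 90               # 10% Promoters, 0% Detractors

print(nps(pool_a))  # 10.0
print(nps(pool_b))  # 10.0
```

Both pools land on exactly the same NPS of 10, even though one business has 40% of its customers actively detracting and the other has none.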

While we have not yet discussed how well NPS predicts business performance, let us assume for the moment that the metric is a predictor. If we treat that as true, then given any business and its NPS you should be able to predict its future performance. The NPS school of thought holds that how the business got to its current NPS does not matter. So you could be a business falling from NPS 70 to NPS 30, or a business climbing from NPS -30 to NPS 30. According to them, these two businesses are the same because their current NPS is 30.

You can see the contradiction in their own logic. On one hand they say current NPS is a predictor of future profitability, yet they end up predicting the same future for two companies on two very different trajectories.

So when you try to compare two businesses with the same NPS, they could have arrived there in two entirely different ways, which makes it meaningless to compare businesses on NPS alone. Any graph you see plotting the distribution of NPS against company performance is simply wrong, because its baseline is wrong.

The problem stems from the net math and its decision to ignore the effect of Passives completely. Because the share of Passives can be anywhere from 0% to 100%, the scale is unstable, which eventually leads to the error of treating all companies at the same NPS point the same way.
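This is also where the plain average (question 3 above) earns its keep. A small sketch, using the same two hypothetical pools as before: the net math scores them identically, but the average recommend rating keeps the Passives' information and tells them apart.

```python
from statistics import mean

# Two hypothetical pools of 0-10 recommend ratings.
# The net math gives both an NPS of 10:
pool_a = [10] * 50 + [8] * 10 + [2] * 40   # NPS = 50 - 40 = 10
pool_b = [9] * 10 + [7] * 90               # NPS = 10 - 0  = 10

# The simple mean, by contrast, uses every response,
# including the Passives, and separates the two pools.
print(mean(pool_a))  # 6.6
print(mean(pool_b))  # 7.2
```

Whatever its merits as a predictor, the average at least assigns different numbers to distributions that are genuinely different, which the net metric cannot do.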

So why go with any kind of Net metric?