NPS
August 15, 2014

When using Customer Effort Score (CES), remember the NPS wars

By Paul Laughlin

Many commentators have recently debated the relative merits of Customer Effort Score (CES) versus Net Promoter Score (NPS).

As a leader who remembers the controversy that surrounded NPS when it first rose to dominance, I find the parallels concerning. I still recall the effort I wasted battling to point out the flaws in NPS and its lack of academic evidence, whilst in fact I was looking a gift horse in the mouth (I’ll explain that later).

I would caution anyone currently worrying about whether or not CES is the “best metric” to remember the lessons that should have been learnt from the “NPS wars”.

NPS & the battle for top metric

For those less close to the topic of customer experience metrics: although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry.

They are Customer Satisfaction (CSat), NPS and now CES. These are not equivalent metrics, as they measure slightly different things, but all report on ratings given by customers in answer to a single question.

Satisfaction captures the customer’s emotional response to an interaction with the organisation (usually on a 5-point scale).

NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale; the percentage of detractors (scoring 0-6) is subtracted from the percentage of promoters (scoring 9-10) to give a net score (a worked calculation follows these definitions).

CES returns to attitude about the interaction, but rather than asking about satisfaction it seeks to capture how much effort the customer had to put in to achieve what they wanted or needed (again on a 5-point scale).
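To make the NPS arithmetic concrete, here is a minimal sketch in Python; the ratings are invented purely for illustration:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 likelihood-to-recommend ratings.

    Promoters score 9-10 and detractors 0-6; passives (7-8)
    count towards the total but cancel out of the net score.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives and 2 detractors out of 10
# responses gives a net score of 50% - 20% = 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # 30.0
```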

Imperfect metrics and the real world

The reality, from my experience (excuse the pun), is that none of these metrics is perfect. Each carries a risk of misrepresentation or oversimplification.

I agree with Prof. Moira Clark of the Henley Centre for Customer Management. When we discussed this, we agreed that ideally all three would be captured by an organisation.

This is because satisfaction, likelihood-to-recommend & effort-required are different ‘lenses’ through which to study what you are getting right or wrong for your customers. However, that utopia may not be possible for all organisations, depending on your volume of transactions and your capability to randomly vary which metrics are captured and the order in which they are asked.

It’s what you do with it that counts

But my main learning point from the ‘NPS wars’ experience over a couple of years is that the metric itself is not the most important thing here.

As the old saying goes, “it’s what you do with it that counts”. After NPS won the war and became a required balanced-scorecard metric for most CEOs, I learnt that this was not a defeat but rather the ‘gift horse’ I referred to earlier.

Because NPS had succeeded in capturing the imagination of CEOs, there was funding available to capture learning from this metric more robustly than had previously been done for CSat.

So, over a year or so, I came to really value the NPS programme we implemented. This was mainly because of its granularity (by product & touchpoint) and the “driver questions” we asked immediately afterwards.

Together these provided a richer understanding of what was good or bad in the interaction, enabling both a prompt response to individual customers & targeted action to implement systemic improvements.

Beyond NPS, build on what has been learnt

Now we appear to be at a similar point with CES, and I want to caution against being drawn into another round of ‘metric wars’.

There are certainly things that can be improved about the way the proposed question is framed (I have found it more useful to reword and capture “how easy was it to…” or “how much effort did you need to put into…”).

However, as I hope we all learnt with NPS, I would encourage organisations to focus instead on how you implement any CES programme (or enhance your existing NPS programme) to maximise learning & actionability.

That is where the real value lies.

Another tip: using learning from your existing research, including qualitative research, can help you frame additional driver questions to capture immediately after the CES question. You can then use analytics to identify which drivers correlate with the score, as sketched below. Having such robust, regular quantitative data capture is much more valuable than being ‘right’ about your lead metric.
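As an illustration of that analytics step, here is a minimal sketch using pandas; the driver questions, column names and responses are all hypothetical:

```python
import pandas as pd

# Hypothetical survey extract: a 5-point CES rating plus two
# 5-point driver questions asked immediately after it.
responses = pd.DataFrame({
    "ces":           [1, 2, 2, 3, 4, 5, 4, 5, 3, 2],
    "wait_time":     [1, 2, 3, 3, 4, 5, 5, 4, 2, 2],
    "first_contact": [2, 1, 2, 4, 4, 5, 4, 5, 3, 3],
})

# Pearson correlation of each driver with the effort score; the
# strongest correlate is a candidate for systemic improvement.
print(responses.corr()["ces"].drop("ces").sort_values(ascending=False))
```

In practice you would want far more responses and a check for confounding between drivers, but the principle is the same.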

What’s your experience with CSat, NPS or CES? Do you share my concerns?