Customer perception: One size does not fit all
27 Feb 14
Typically, in a Voice of Customer (VoC) study, marketers seek to understand customer perceptions across various parameters: pricing, brand, quality, after-sales service, and so on. At the end of the exercise, they want aggregated scores on a numeric scale, often with comparative benchmarks for competitors.
If they get a decent score or one better than the competition, they usually assume that things are fine. But are they?
Aggregated or average ratings tend to hide problems as well as potential opportunities. Numerically low ratings from a few respondents can get overshadowed by the majority. Yet these could be crucial early warning signals.
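To make that concrete, here is a minimal sketch, using entirely made-up ratings and segment names, of how a healthy-looking overall average can bury an unhappy segment:

```python
# Hypothetical 1-5 satisfaction ratings: a small unhappy segment
# sitting inside a mostly happy customer base.
ratings = {
    "large_accounts": [5, 5, 4, 5, 4, 5, 4, 5],
    "small_accounts": [2, 1, 2, 2],
}

# Flatten all responses and compute the overall average.
all_scores = [r for seg in ratings.values() for r in seg]
overall = sum(all_scores) / len(all_scores)
print(f"Overall average: {overall:.2f}")  # looks reasonably healthy

# The per-segment averages tell a very different story.
for segment, scores in ratings.items():
    avg = sum(scores) / len(scores)
    print(f"{segment}: {avg:.2f}")
```

The overall score masks the fact that the smaller segment is deeply dissatisfied; only the segment-level cut surfaces the warning signal.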
Another aspect that is often ignored is the relative importance of various factors in decision making. In a recent study for a fairly commoditized product, our client’s products came out on top in terms of quality. A majority of respondents also rated product quality as one of the most important selection criteria. However, while triangulating the information, we found that actual decision making was quite different. Even though respondents said quality was important (who wouldn’t?!), we found that in actual purchases, many had selected apparently lower-quality products. Our analysis eventually revealed that product quality was not really the crucial differentiator in the purchase decision, as all the competing products were near-identical. It all boiled down to cost and delivery lead times.
Wide differences across end-use applications or sectors are common. In a research project on industrial printers, we found marked differences in what customers in different user segments considered most important. For cement industry customers, price was crucial, while for auto industry customers, precision was most important. The explanation in this case was fairly obvious, and related to the importance of labeling for each product.
In other situations, it may be much harder to understand why perceptions differ. While looking at how buyers chose vendors for engineering design services, we found that aerospace customers valued customer referrals the most, while respondents from the auto sector were most concerned about domain or industry knowledge. But are these really different? Isn’t a referral a means of assessing competency? Or was it about something else? We couldn’t reach a conclusion, and had to speak to respondents again, a sign that the research design was not robust enough!
The above may seem fairly obvious, but it requires that the sample be sufficiently robust (within each end-use application or sector), and that every question has intelligent probes to draw out the ‘why’. Without this, mere numbers will not deliver meaningful insights.
Another trap is to restrict the analysis only to questions that were directly asked. In a study of power equipment customers, our client’s after-sales service was rated quite well in aggregate. However, when we drilled down, we found that large (institutional) customers were happy, while small customers had a mixed response. Looking closer, we saw a big variance by city, especially in tier II and tier III towns and cities in India. There did not appear to be any rational explanation, as there was no correlation between the location of service centers and responses from customers. After puzzling over it for a while, we discovered that most small customers actually didn’t have service contracts, and preferred to get their equipment serviced by independent/third-party providers rather than by the company’s service centers/dealers. This insight helped our client understand that in small towns with fewer institutional customers, the lack of high-quality independent service providers was more critical to their brand perception than the presence of their own centers. They are now putting in place a strategy to work more closely with such independent providers.
Going back to the title of this blog, one size cannot fit all when it comes to Voice of Customer studies. Yet we still find many companies insisting on standard templates, and on scores that hide more than they reveal. Maybe that’s because VoC is more often done for compliance than to derive real insights!