17 Jan 11 Reading the numbers right
“Should policymakers trust the IIP or the PMI numbers?” an article I saw in Livemint this morning cried out. “The RBI had, in its mid-term monetary policy statement, raised doubts about how effectively the index reflects the underlying momentum in the industrial sector,” it said. Competitive intelligence (CI) analysts face similar dilemmas with numbers all the time.
They are bombarded with different numbers – at their own behest. ‘You can’t manage what you can’t measure’ is the management adage most companies live by, and this extends to measuring the competitive environment too.
Yet how much can you trust the numbers thrown at you? For example, the sales force pushes one set of market figures. The industry organisation publishes its own market estimates. You hire an external agency, and it comes up with yet another set of numbers. The problem is particularly acute in unorganised segments of the market.
The numbers may all contradict each other, and yet they may all be right – for they may be measuring different, albeit similar, parameters. The IIP and the PMI in the example above both measure the level of economic activity. However, the IIP figures measure y-o-y (year-on-year) changes in industrial activity, whereas the PMI measures m-o-m (month-on-month) changes. This alone makes the two not directly comparable. The policymaker can trust both numbers, yet draw different conclusions from them.
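A toy calculation makes the point. The index values below are purely illustrative (not real IIP or PMI data): the same monthly series can show healthy year-on-year growth while the month-on-month momentum is actually negative.

```python
# Hypothetical monthly index values -- illustrative figures only,
# not actual IIP/PMI data.
index = {
    "Dec-2009": 100.0,
    "Nov-2010": 112.0,
    "Dec-2010": 110.0,
}

def growth(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

yoy = growth(index["Dec-2010"], index["Dec-2009"])  # year-on-year
mom = growth(index["Dec-2010"], index["Nov-2010"])  # month-on-month

print(f"y-o-y: {yoy:+.1f}%")  # +10.0% -> industry looks healthy
print(f"m-o-m: {mom:+.1f}%")  # -1.8%  -> momentum looks weak
```

Both figures are "right"; they simply answer different questions about the same underlying series.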
Here are the three key things to look at for each number you use.
What is the number measuring?
The sales force may have measured the market for models directly competing with yours, the research agency may have defined the market as comprising all products competing with yours, while the industry association may be working with an even broader definition of the market.
How was the measurement done?
The methodologies used by each of the three agencies could be different. The sales force may have used random data points from their network of external contacts to estimate the market. The research agency may have used sampling, whereas the industry association will most likely have relied on inputs from their members. There is ample scope for different types of biases in all three approaches. The sales force has a limited reach and significant vested interest in under-reporting market size. The research agency’s methodology needs to be vetted to check its statistical robustness. And the coverage of the industry association will determine the accuracy of its figures.
When was the measurement done?
Association numbers could be end-of-year numbers, so their relevance depends on the time of year at which you look at them. Numbers from your sales force are probably more recent. And when did the research agency actually conduct its study?
Now that you know where each number has come from, you need to figure out where, when and how to use each of them…