Don’t let “bad data” obscure your CI goals
28 Oct ’11
How do I know the data is reliable?
How can I make decisions with unreliable data?
If one piece of data is suspect, is the entire database suspect?
How do you validate published information?
How do I know whether my respondent is speaking the truth?
These were among the many questions debated during the lively panel discussion last week at the ValueNotes Dow Jones seminar titled “Competitive Intelligence and the Information Explosion”. In fact, this has been a recurring theme at several of our competitive intelligence events.
Interestingly, all these questions relate to the quality of data. And in a country like India, where published information is patchy and often suspect, these questions reflect genuine worries among competitive intelligence users and practitioners.
However, this also reflects a relative immaturity of CI practice. At the tactical level, there are numerous ways to validate data: cross-checking multiple disparate sources, using alternate methodologies, triangulating, or simply doing a lot more primary research.
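To make triangulation concrete, here is a minimal sketch in Python. The sources, figures and reliability weights are purely hypothetical; the point is simply that several imperfect sources, weighted and compared, can yield both a working estimate and a warning when they disagree.

```python
# Hypothetical example: triangulating a market-size estimate from several sources.
# Source names, figures, and reliability weights are illustrative, not real data.

estimates = [
    {"source": "industry association report", "value": 520, "reliability": 0.8},
    {"source": "company annual reports (bottom-up)", "value": 470, "reliability": 0.9},
    {"source": "trade press article", "value": 610, "reliability": 0.5},
    {"source": "primary interviews", "value": 500, "reliability": 0.7},
]

def triangulate(estimates):
    """Combine estimates into a reliability-weighted figure and a spread measure."""
    total_weight = sum(e["reliability"] for e in estimates)
    weighted = sum(e["value"] * e["reliability"] for e in estimates) / total_weight
    low = min(e["value"] for e in estimates)
    high = max(e["value"] for e in estimates)
    spread = (high - low) / weighted  # rough indicator of how much the sources disagree
    return weighted, (low, high), spread

value, value_range, spread = triangulate(estimates)
print(f"Weighted estimate: {value:.0f} (range {value_range[0]}-{value_range[1]})")
if spread > 0.25:
    print("Sources disagree materially - more primary research needed before relying on this figure.")
```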
But is data the most important piece of the intelligence pyramid?
Good insights don’t need 100% data availability and accuracy. Intelligent synthesis and aggregation can convert incomplete data into usable information. And experienced CI professionals ought to be able to draw “insights” or “actionable intelligence” even from an incomplete picture.
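As a small illustration of synthesis with incomplete data (the competitors and revenue figures below are hypothetical), one can still produce a usable segment estimate while keeping the gaps, and the assumptions that fill them, explicit:

```python
# Hypothetical example: building a usable segment picture from incomplete competitor data.
# Competitor names and revenue figures are illustrative only; None marks missing data.

competitor_revenues = {
    "Competitor A": 120,
    "Competitor B": 85,
    "Competitor C": None,   # no published figure available
    "Competitor D": 40,
    "Competitor E": None,   # private company, no disclosure
}

known = {k: v for k, v in competitor_revenues.items() if v is not None}
coverage = len(known) / len(competitor_revenues)   # share of players with hard data
known_total = sum(known.values())
avg_known = known_total / len(known)

# Fill gaps with a stated assumption (here: the average of the known players),
# and report the estimate together with how much of it rests on that assumption.
estimated_total = known_total + avg_known * (len(competitor_revenues) - len(known))

print(f"Known revenues cover {coverage:.0%} of players, totalling {known_total}")
print(f"Estimated segment size (gaps filled with the average): {estimated_total:.0f}")
```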
We don’t mean to trivialize the challenges of working with “bad” data; but an unnecessary preoccupation with data reliability could well obscure the ultimate objective: informed decision-making to enhance competitive advantage.
Any piece of data is only as good as it is. Our problem, as CI practitioners, is to assess its degree of reliability (or tolerance limits) and see if we can still derive “insights”. Information is never going to be complete or perfect. It’s how you use what’s available that matters!
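One way to read “tolerance limits” is to ask whether the decision would change anywhere within the plausible range of the data. A minimal sketch, reusing the hypothetical range from the triangulation example above:

```python
# Hypothetical example: does the decision survive the uncertainty in the data?
# Enter the segment only if it exceeds a threshold; threshold and range are illustrative.

entry_threshold = 300                     # minimum viable segment size for the decision
estimate_low, estimate_high = 470, 610    # plausible range from triangulation

if estimate_low >= entry_threshold:
    print("Decision is robust: even the most pessimistic figure clears the threshold.")
elif estimate_high < entry_threshold:
    print("Decision is robust the other way: even the best case falls short.")
else:
    print("The decision hinges on data quality - this is where validation effort pays off.")
```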