
10 types of biases that affect customer data analysis

Analytics can help businesses make data-driven decisions, but common cognitive biases can skew how organizations interpret the information and, ultimately, the customer experience.

Businesses are increasingly looking to make data-driven decisions to improve customer experience, but it can be a wasted effort if human biases skew customer data analysis.

Businesses collect customer information and feedback in a number of ways, including web surveys, phone surveys, customer service departments, customer focus groups and mystery shopping audits. Analytics can assist business strategy and decision-making by providing a fresh perspective on a company's customer data set or by identifying correlations that humans miss. But the human element is still very much a factor when it comes to developing query hypotheses and converting data reports into an actionable business plan.

It's worthwhile for businesses to think about how different cognitive biases can affect customer data analysis.

Here are 10 examples of cognitive biases that can skew customer data analysis, with emphasis on how they affect analytics-based conclusions.

Examples of bias in customer data analysis

    1. Clustering illusion. Wishful thinking is among the most common human tendencies: in all areas of life and thought, people see what they want to see and miss what's really there. The clustering illusion is that tendency applied to data, perceiving meaningful patterns in what is actually random noise. It's a particularly strong risk in analytical results, because analytics are all about finding patterns in noisy data. Organizations should avoid this mistake by scoring results scrupulously and committing to accept the numbers honestly.
    2. Selective perception. The sibling of the clustering illusion, selective perception is the interpretation of analytical outcomes in ways that confirm what the audience expected them to say. Expectations are a precarious guide, and outcomes often don't match them. Interpreting results to fit those expectations defeats the entire purpose of analytics.
    3. Confirmation bias. Confirmation bias, which is an example of selection bias, is familiar to almost everyone who has ever argued about politics and/or religion. People tend to accept only new information that supports old ideas. This is just as likely -- and even more dangerous -- in the realm of analytics, where outcomes can influence decisions at the highest levels. The entire point of analytics as a strategic tool is to push beyond old ideas into more effective choices and policy.
    4. Survivorship bias. Businesses prefer good news over bad news. It's a natural tendency to focus on positive outcomes that appear in analytical results, ignoring the negatives. But this truncates the power of analytics. Organizations must give both positive and negative outcomes equal weight to learn from the data.
    5. Ostrich effect. And when the news is bad, businesses may cover their ears. Often people try to ignore or argue against conclusions that aren't what they want to hear. This is especially problematic in analytics, where businesses generally cull the results from several different sources and score them objectively; the ideal is to embrace even bad news that analytics deliver, because organizations can be reasonably certain that addressing that bad news will be effective.
    6. Bandwagon effect. When businesses find themselves riding a wave, they tend to go with the wave. If an analytical result seems to put them into a positive industry trend, they may buy in all the way, even if the result is weak. It's important in that circumstance to stay off the bandwagon and give the result the weight it deserves, no more, no less.
    7. Outcome bias. A "little brother" of the bandwagon effect is outcome bias: over-trusting a process because it worked well once. Businesses should not underestimate the danger here; analytical processes require constant fine-tuning to remain effective. Organizations that lock in on an analytical process because it delivered a positive result, and forgo continuous fine-tuning and critical scrutiny of the results, are asking for trouble.
    8. Pro-innovation bias. Cognitive studies have shown that human beings tend to overemphasize both the similarities and the differences between ideas that fall into different categories, and they do this even more when evaluating a new idea. People tend to overvalue a new idea's unproven usefulness and undervalue its probable shortcomings. This is a natural consequence of human enthusiasm, but when businesses apply analytics to evaluate a new product, process or service, realism is always the best choice.
    9. Information bias. People can become focused on information, trends or details that have no real effect on the outcome they are pursuing. This bias is hard to root out because it's often beneficial to toss unknowns into the input of a data mining operation, and businesses often can't know what was important until the results are in and the outcomes are apparent. The second time around, though, it's easier to separate the details that matter from those that don't, and emotional attachment to details that have no substantive effect is an unnecessary distraction.
    10. Blind-spot bias. Finally, blind-spot bias is the failure to recognize one's own biases. Of all these tendencies, it is both the most obvious and the hardest to overcome. For this reason, it's a wise practice to employ analytics scientifically; that is, subject them to peer review. The fix for blind-spot bias is to put more than one set of eyes on every analytical result so that no one person's biases distort an important outcome.

How to avoid bias in customer data analysis

The most important thing businesses can do to avoid bias when collecting customer feedback is to pose questions in a neutral way.

Biased question: How much did you enjoy the food at our restaurant?

Unbiased question: How was the quality of the food you ordered at our restaurant?


In the biased example, the question assumes that everyone enjoys the restaurant's food. In the unbiased example, however, the business makes no assumptions and simply asks for customer feedback -- negative or positive.

Organizations should also collect both qualitative and quantitative data to learn more about a customer's experience. For example, a survey may ask whether the restroom in the establishment was clean, with answer choices such as strongly agree, agree, neither agree nor disagree, disagree and strongly disagree. These are quantitative responses. But it's important that businesses don't stop there. They should take it one step further and provide a place for an open-ended response, asking customers to explain why they answered as they did to avoid guesswork on the business's part.
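To make that pairing concrete, here is a minimal sketch in Python of how a business might summarize the Likert-scale answers while keeping the open-ended comments attached. The survey records, field names and five-point mapping are illustrative assumptions, not a prescribed format.

    from collections import Counter

    # Hypothetical survey records: a Likert answer plus an optional comment.
    responses = [
        {"restroom_clean": "Strongly agree", "comment": ""},
        {"restroom_clean": "Disagree", "comment": "Paper towels were empty."},
        {"restroom_clean": "Agree", "comment": ""},
        {"restroom_clean": "Strongly disagree", "comment": "Floor was wet."},
    ]

    # Map the Likert categories to numbers so they can be summarized.
    LIKERT_SCALE = {
        "Strongly disagree": 1,
        "Disagree": 2,
        "Neither agree nor disagree": 3,
        "Agree": 4,
        "Strongly agree": 5,
    }

    scores = [LIKERT_SCALE[r["restroom_clean"]] for r in responses]
    print(f"Mean score: {sum(scores) / len(scores):.2f}")
    print("Distribution:", Counter(r["restroom_clean"] for r in responses))

    # Keep the qualitative answers alongside the numbers; they explain
    # why a customer chose a low rating and remove the guesswork.
    for r in responses:
        if r["comment"]:
            print(f'{r["restroom_clean"]}: {r["comment"]}')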

Companies can also avoid bias by using a net promoter score (NPS). An NPS measures customer loyalty to a brand by asking, on a scale from zero to 10, "How likely is it that you would recommend us to a friend or colleague?" Respondents who answer 9 or 10 count as promoters, those who answer 0 through 6 count as detractors, and the score is the percentage of promoters minus the percentage of detractors. Because this is a quantitative measure, however, it's important to follow it up with an open-ended question.
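The arithmetic is simple enough to sketch in a few lines of Python; the batch of survey scores below is hypothetical.

    def net_promoter_score(scores):
        """Compute NPS: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    # Hypothetical batch of answers on the 0-to-10 scale.
    survey_scores = [10, 9, 8, 7, 10, 6, 3, 9, 8, 10]
    print(f"NPS: {net_promoter_score(survey_scores):.0f}")  # prints: NPS: 30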

Organizations should also be sure to target a specific audience, ask the right questions and collect the proper data to avoid bias. For example, if an airline wants to gather customer feedback from a first-class flight, it shouldn't send the survey to the group of people traveling in coach. This will skew the results, and the airline won't collect accurate feedback.
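As a minimal sketch of that targeting step, assuming a hypothetical passenger-record format, the point is to filter the audience before the survey goes out, not after the responses come back.

    # Hypothetical passenger records; only the cabin field matters here.
    passengers = [
        {"email": "a@example.com", "cabin": "first"},
        {"email": "b@example.com", "cabin": "coach"},
        {"email": "c@example.com", "cabin": "first"},
    ]

    # Send the first-class survey only to first-class passengers.
    recipients = [p["email"] for p in passengers if p["cabin"] == "first"]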

