
Most Cognitive Biases Probably Don’t Cause Errors

Why cognitive biases are overhyped as an answer to why we get stuff wrong.

Key points

  • Cognitive biases are generally defined as systematic errors in reasoning, but this definition is flawed for several reasons.
  • There is little empirical evidence showing that biases produce recurring or consistent errors in real-world decision-making.
  • Biases are likely to aid decision-making more often than they hurt it.
  • There are, however, situations in which overreliance on biases can lead to erroneous decisions.

When you come across articles about human decision-making, whether in the popular press or the scientific literature, the odds are high that the expression "cognitive bias" will appear. In many cases (especially in the popular press), the entire article will be focused on that topic.

Let's start off with a definition of cognitive bias. Tversky and Kahneman (1974) referred to cognitive biases as “systematic errors” (p. 1124) in reasoning. However, as I will explain, this definition is highly flawed.

Most Cognitive Biases Don’t Produce Systematic Errors

Systematic errors can be defined as consistent or recurring errors that are predictable. Although Tversky and Kahneman (1974) referred to cognitive biases as “systematic errors” in reasoning, in the exact same sentence, they also claimed that, “in general, these heuristics [which they claimed were the cause of cognitive biases] are quite useful” (p. 1124). This leads to an interesting conundrum: if reliance on the heuristics that produce biases is quite useful most of the time, how is it that they produce systematic errors?

A reasonable response to this question could rely on an argument grounded in the "law of the instrument": if you are over-reliant on a decision-making strategy, you will tend to apply it in situations where it is inappropriate, thus introducing a systematic flaw into your decision-making. The problem with this explanation, though, is that for it to be adequate, we must assume the source of the error. In other words, we must assume that if you entered decision situation X and produced an erroneous decision, the only viable reason is that your cognitive biases introduced error into your decision-making. It is further assumed that if you did not reach an erroneous decision, then your decision was obviously devoid of any cognitive bias[2].

Now, if you peruse the scientific literature on this topic, you’ll find studies that show rather large effects of biases. Even if we accept these results as being generalizable to real-world decision-making, that still wouldn’t address the degree to which the errors were systematic in nature. To be considered systematic, we would have to know that you tend to make the same mistake in highly similar decision situations. Yet, very little, if any, empirical evidence exists to support the claim that cognitive biases are systematic.

But let’s assume that what Tversky and Kahneman really meant to argue was that these errors tend to be prevalent across people. That would explain why they tend to find such strong effects. The problem is that most evidence from real-world settings or using real-world problems shows that people are not plagued by systematic reasoning errors at all.

For starters, let’s turn to Furnham and Boo (2011), who provided a review of the "anchoring effect." Their review argued that anchoring “results from the activation of information that is consistent with the anchor” but only if “judges consider the anchor to be a plausible answer” so they “test out the hypothesis that the anchor value is the correct value” (p. 37). Furnham and Boo also reported that when anchors are obviously incorrect (i.e., extremely low or high) or when judges have a high degree of confidence in the answer, anchors have little effect on judgment. The evidence, therefore, doesn’t support the claim that people are consistently biased to rely on anchoring. Instead, in the absence of actual expertise or experience in judging some phenomenon, we’ll look to potentially relevant information available to help inform our judgments. Anchors serve this purpose (possibly via a form of information leakage).

We can also turn to evidence presented by Pennycook and Rand (2018) as it relates to fake news and people’s alleged susceptibility to cognitive biases[3]. In all their studies, both those who relied on intuitive decision-making (where cognitive biases are likely to produce errors) and those who relied on effortful decision-making (which ostensibly overrides cognitive bias errors) were able to differentiate real from fake news consistently. At best, the results indicated that those who relied on effortful decision-making had a slight edge in this detection. However, by no means was this an overwhelmingly strong association.

Lastly, let’s turn to a study by Schnapp et al. (2018), which focused on medical errors in an academic emergency department. If cognitive biases were such a prevalent phenomenon, we would expect to see a lot of revisits stemming from bias-induced faulty decision-making. However, after reviewing eight months of data that represented approximately 104,000 visits to the emergency room[4], the researchers found only 271 cases of revisits (0.3 percent), and of those, only 52 were identified as being the result of some sort of cognitive error on the part of the physician. Thus, either cognitive biases weren’t prevalent (and, therefore, produced few errors), or they were prevalent but did not produce erroneous decisions consistently (since cognitive errors weren’t even responsible for the majority of revisits).

The literature is full of evidence just like the studies I reported here. Cognitive biases mostly show up in laboratory designs where subjects deal with situations in which they have no real working knowledge (i.e., they lack expertise or experience). In more real-world settings, these biases have much less consistent and pronounced effects, calling into question the degree to which cognitive biases do, in fact, produce systematic errors.

Generally, Biases Are More Likely to Aid Decision-Making Accuracy

We know people possess biases and that stronger biases can have more pronounced effects. The more convinced we are of something, the less likely we are to change our opinions or derive an alternative conclusion. This is especially likely to occur in situations where the evidence is less overwhelming or more ambiguous[5].

But cognitive biases aren’t discussed in the same way as biases in general, because biases themselves are not necessarily faulty at all. A bias represents nothing but a tendency of some kind, and such tendencies are quite adaptive most of the time (Haselton et al., 2009). Yet cognitive biases are explicitly specified to produce error. How can cognitive biases somehow be different from regular biases?

The answer, of course, is that they can't. They’ve come to be defined as erroneous because researchers study tasks in which specific biases lead to incorrect decisions while ignoring the times those same tendencies lead to correct conclusions. In other words, cognitive biases have come to be defined as errors through a selection effect[6], in which conclusions and inferences are based on situations chosen to confirm the hypothesis (a form of confirmation bias)[7].

So, Where Does That Leave Us?

I don’t want to leave the impression that human decision-making is fundamentally error-free. People certainly make plenty of errors in their decision-making. And, yes, sometimes biases produce errors. The likelihood that biases will introduce errors increases in situations in which we:

  1. Have limited expertise or experience related to the decision situation, making it difficult to rely on validated heuristic decision strategies[8].
  2. Perceive one error to be much more costly than another, leading to a bias toward the least costly error (the phenomenon underlying error management theory).
  3. Allow strongly held (and often self-serving) values or beliefs to adversely affect our decisions (also called motivated reasoning)[9].
  4. Possess conflicting, ambiguous, or limited information, leading to a greater propensity to rely on emotions or to default to more bias-driven decision-making[10].

Therefore, biases can introduce error, but to conclude that human decision-making is fundamentally flawed due to these biases is an erroneous claim.

Footnotes

[2] There’s also the issue of how we evaluate accurate vs. erroneous decisions without looking only at the outcome itself; otherwise, based on this paradigm, we would be engaging in outcome bias.

[3] The test used most often is some adaptation of the Cognitive Reflection Test, which I touched on when writing about the false dilemma put forth in discussions of System 1/System 2 decision making.

[4] I estimated this based on their claim that the hospital received approximately 156,000 annual visits, which equates to 13,000 per month. The study occurred over 8 months, which translates to approximately 104,000 total visits.

[5] Perhaps the best example of this was a study conducted by De la Fuente et al. (2003) regarding jury bias.

[6] This could be called selection bias, but I hesitate to equate bias with error in this regard.

[7] Haselton et al. (2009) argued that biases are often the result of heuristics we employ for deriving fast, frugal, and (quite often) accurate conclusions. Brighton and Gigerenzer (2012) further added that it is a myth that “more time, more information, and more computation would always be better” (p. 7).

[8] This is the phenomenon that produces most laboratory-based evidence of cognitive biases.

[9] This assumes accuracy itself is the sole goal and that there is no benefit to other self-serving interests. Ultimately, it boils down to the subjectively derived acceptability of trade-offs.

[10] Whether this class of errors would really be considered actual error is open to debate.
