Anchoring Base Rates

Fight fire with fire and beat bias with bias.

Two wrongs don't make a right, but they make a good excuse. ~ Thomas Szasz

One of the most famous findings in the psychology of prediction is the phenomenon of base rate neglect. When categorizing an instance, people rely mainly on judgments of representativeness: they assign the instance to the category whose prototype it most resembles, without regard to the relative size of the category. When the category is very small, the representativeness heuristic leads to systematic overcategorization. Physicians overdiagnose rare diseases when the symptoms match the disease’s typical pattern, ordinary people overestimate the size of stereotyped minority groups, and scientific significance testers are too quick to accept improbable alternatives to the null hypothesis.
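To see the arithmetic behind this overcategorization, consider a back-of-the-envelope Bayesian calculation. The sketch below uses assumed numbers (it is not drawn from any of the studies mentioned here): even when a pattern of symptoms fits a rare disease quite well, the low base rate keeps the rational probability of the disease small.

```python
# Minimal sketch with assumed numbers (not from the studies cited here):
# Bayes' rule shows why a good fit to a rare category does not justify a
# confident categorization.
def posterior(base_rate, p_pattern_if_present, p_pattern_if_absent):
    """P(category | typical pattern) via the odds form of Bayes' theorem."""
    prior_odds = base_rate / (1 - base_rate)
    likelihood_ratio = p_pattern_if_present / p_pattern_if_absent
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A disease affecting 1 in 1,000 patients, with a symptom pattern 20 times
# more likely among those who have it than among those who do not.
print(round(posterior(0.001, 0.80, 0.04), 3))  # ~0.02, despite the strong resemblance
```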

As noted in a previous post, the traditional remedies for heuristic decision-making are efforts to debias people by (a) warning them about the pitfalls of heuristics, (b) offering rewards for accurate judgment, and (c) eliciting multiple judgments followed by swift feedback, or some combination of these. The common theme of these strategies is that they seek to improve judgment by appealing to a rational reasoning capacity, sometimes referred to as “System 2,” rather than to the intuitive reasoning attributed to “System 1.”

There is an alternative to this approach: judgment should improve when two heuristics with opposing biases are activated simultaneously. Recall that base rate neglect, as produced by the representativeness heuristic, is a failure to incorporate relevant information into the judgment. To nullify this error, a heuristic is needed that does exactly the opposite, one that draws on information that ought to be ignored. The anchoring heuristic fits this description. When anchoring, people making a quantitative estimate fail to ignore a number they have been exposed to, even when they recognize it as irrelevant. To illustrate, respondents first reject the suggestion that the population of Greenland is larger (smaller) than 10 million (1,000), but then estimate the island’s population to be higher (lower) than they would have had they not received an anchor.

The possibility now presents itself that a base rate probability, when set up as a judgmental anchor, will no longer be neglected. Perhaps two wrongs can make a right. To test this hypothesis, Gideon Goldin, Leonard Chen, and the students in a course on social cognition at Brown University conducted a simple experiment. Participants received the classic description of Jack, who is representative of the category of engineers (“He shows no interest in political and social issues and spends most of his free time on his many hobbies, which include home carpentry, sailing, and mathematical puzzles.”), or of Dick, who resembles neither an engineer nor a lawyer (he is “a man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.”). All participants were informed that the group from which Jack’s (Dick’s) description had been randomly sampled contained either 70 percent or 30 percent engineers, with lawyers making up the remainder.
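Before turning to the results, it helps to fix a benchmark for what a base-rate-respecting answer would look like. The sketch below is an illustration, not part of the experiment; the likelihood ratio of 4 (how much more probable Jack’s description is for an engineer than for a lawyer) is an assumption made for the sake of the example.

```python
# Illustrative benchmark only; the likelihood ratio of 4 is an assumed value,
# not something measured in the experiment.
def bayes_posterior(base_rate, likelihood_ratio):
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

for base_rate in (0.70, 0.30):
    print(base_rate, round(bayes_posterior(base_rate, 4.0), 2))
# 0.70 -> 0.90 and 0.30 -> 0.63: a swing of roughly 27 percentage points if the
# base rate were fully used.
```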

Half of the respondents simply estimated the probability that Jack (Dick) was an engineer. This part of the experiment was set up to replicate Kahneman & Tversky’s (1973) original finding of base rate neglect. The replication was partially successful. Jack was judged to be more likely to be an engineer when the base rate probability of being an engineer was high (M = 77 percent) than when it was low (M = 66 percent), t(58) = 2.25, p = .03. However, Dick was not judged to be significantly more likely to be an engineer given a high (M = 53 percent) instead of a low (M = 44 percent) base rate, t(58) = 1.64. Note that the statistical demonstration of complete base rate neglect in this paradigm requires a failure to reject a null hypothesis. Our data failed to replicate this failure in the case of Jack. Some reviewers of this field of research have concluded that base rate neglect is rarely as complete as it was in Kahneman & Tversky’s original demonstration (Koehler, 1996).

Given only partial base rate neglect, the novel idea of reducing this neglect with an anchoring manipulation had only limited room to show itself. To test this idea, we asked the other half of the participants to first respond to the question “Would you say that the probability that Jack (Dick) is an engineer is greater than 70 percent (lower than 30 percent)?” Virtually all responses were “no.” That was as it should be in most conditions because no rational or heuristic argument could be made for estimates lying beyond the base rates. The exception to this rule is Jack, who, looking like an engineer, should (and did) receive an estimate greater than 70 percent when the base rate was 70 percent. In this exceptional case, anchoring had no effect: Jack was judged to be an engineer with an average chance of 77 percent and 78 percent, with and without the anchor, respectively. Conversely, when participants first rejected the anchor of 30 percent as implausible, they judged Jack to be less likely to be an engineer (M = 56 percent) than when they did not respond to the anchor first (M = 66 percent), t(58) = 4.75. This is good support for the anchoring-can-reduce-base-rate-neglect hypothesis. For judgments of Jack, the effect size of the base rate difference more than doubled from the replication condition to the anchoring condition, from d = .58 to 1.25. Anchoring also improved judgments of Dick. Without anchors, there was base rate neglect, as indicated by a nonsignificant difference between the two estimates. With anchors, the difference between M = 54 percent and M = 39 percent was significant, t(58) = 3.52, p = .001. The effect size doubled from d = .43 to .91.
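For readers who want to check the effect sizes, Cohen’s d can be recovered from the reported t statistics if one assumes two independent groups of 30 participants each, which is consistent with the 58 degrees of freedom; small discrepancies (e.g., 1.23 rather than 1.25) reflect rounding.

```python
# Recovering Cohen's d from the reported t statistics, assuming two independent
# groups of 30 each (consistent with df = 58). Minor rounding differences from
# the values reported in the text are expected.
from math import sqrt

def cohens_d_from_t(t, n1, n2):
    return t * sqrt(1 / n1 + 1 / n2)

for label, t in [("Jack, no anchor", 2.25), ("Jack, anchor", 4.75),
                 ("Dick, no anchor", 1.64), ("Dick, anchor", 3.52)]:
    print(label, round(cohens_d_from_t(t, 30, 30), 2))
# Jack: 0.58 -> 1.23; Dick: 0.42 -> 0.91
```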

There is, it seems, a heterodox way to beat a bias: pit two conflicting heuristics against each other. With that, the deliberative System 2 (if it exists) can get out of the way, unless, of course, we assume that it is the task of System 2 to set up the anchoring question when there is no Gideon or Leonard around to serve as a guide to rationality.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.

Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19, 1-53.
