

Positive Heuristics

Strategies for engaging in speculative thinking.

About 40 years ago, Danny Kahneman and Amos Tversky made some wonderful discoveries. They identified a set of heuristics that people use: availability, representativeness, anchoring and adjustment, even drawing inferences from small samples. Earlier thought leaders such as Karl Duncker, Allen Newell, and Herbert Simon had discussed the importance of heuristics, but Kahneman and Tversky actually identified a set of specific heuristics that we commonly apply, and for this they deserve the accolades and prizes they received.

However, the Heuristics and Biases community that sprang up from their work took an unfortunate trajectory: it equated heuristics with biases. The term “bias” can mean preference or predisposition, but the primary understanding is that a biased judgment is not logical or justified. This conflation made some sense because the research methodology used by Kahneman and Tversky and others was to demonstrate that people rely on heuristics even when the heuristics yield inaccurate judgments. The studies, therefore, illustrated how heuristics can mislead us, but this demonstration is not the same as showing that we would be better off without the heuristics. Yes, under certain circumstances that researchers could design, the heuristics get in our way. But there are many other circumstances in which the heuristics are invaluable.

I think that the community has been using an inappropriate yardstick: evaluating the accuracy of the heuristics against formal analytical methods such as probability theory and Bayesian statistics. Bayesian statistics only came into prominence in the 1980s. Probability theory reached its modern formulation with Laplace just over 200 years ago. Why would we expect the everyday heuristics we use to match such formalisms? It’s like trying to eat soup with a fork and then blaming the fork for being poorly designed.

Consider research by Lichtenstein et al. (1978) showing that participants, typically college students, held inaccurate beliefs about the frequencies of different causes of death. The participants overestimated sensational causes, such as tornadoes, floods, homicides, and accidents (causes likely to receive media coverage), and underestimated silent killers that received little media attention, such as asthma, tuberculosis, stroke, and diabetes. So, yes, the participants were inaccurate, but how were they supposed to know the actual data? Were they supposed to have pored over the archives and committed the findings to memory? What does it mean to accuse the participants of bias when their beliefs simply lined up with the media reports? I agree with Lichtenstein et al. that inaccurate beliefs will affect public policy, resulting in an inefficient allocation of funds toward low-frequency but dramatic causes. My problem is that I don’t see what we gain by labeling the participants as biased when they used a reasonable, although limited, judgment strategy.

Today we see a popular claim that people are irrational. Even experts are often maligned in this way, part of what I have called a War on Experts.

The assertion that humans are inherently irrational makes little sense. The argument is based on an inappropriate standard. Certainly, we should be using more powerful analytical and statistical methods where appropriate (although the application of these methods is not always as straightforward as their adherents suggest). And we shouldn't automatically trust the judgments stemming from intuition and heuristics. Yet, there is more to decision making and sensemaking than performing risk assessments.

Fortunately, I think there is a better yardstick for appraising heuristics: speculative thinking. People don’t often have the luxury of making judgments and decisions backed by clear and copious data. We typically have to stretch, building arguments out of fragments. We have to speculate rather than analyze. Ben Shneiderman refers to this type of reasoning as “frontier thinking”: dealing with incomplete, incorrect, and contradictory information to make decisions.

And that’s where Kahneman and Tversky’s heuristics come in. They are cognitive tools we employ in order to speculate. We make speculative leaps based on small samples. We rely on the availability of precedents in our memories. We use estimates of representativeness. We find an anchor and work from there. That’s what I am calling Positive Heuristics: heuristics we depend on to navigate an ambiguous world. They won’t give us perfect answers, but they can operate in spheres where we can’t have perfection.

They’re not biases that make us irrational. The positive heuristics are strengths that make us adaptive and successful.

We can add to this small set of positive heuristics, drawing on additional heuristics that other judgment researchers have uncovered. Illusory correlation refers to our propensity to see relationships that aren’t there, but its positive side is that we are quick to spot connections and see patterns without waiting for comprehensive data to be collected. The simulation heuristic that Kahneman later described is a valuable means of making diagnoses and imagining consequences; it is a central part of the Recognition-Primed Decision (RPD) model I have studied. The affect heuristic lets us take advantage of emotional reactions in order to make quick judgments of risks and benefits.

Danny Kahneman seems ambivalent about the idea of positive heuristics. He explained to me that his work with Tversky treated heuristics as mental shortcuts, and concentrated on their liabilities. Also, Kahneman and Tversky viewed heuristics as involuntary, subconscious reactions, not as tools we deliberately apply. True, earlier investigators such as Herbert Simon and George Polya had viewed heuristics as deliberate tools, but Kahneman and Tversky chose not to follow this usage. My reaction is that I don't care if positive heuristics are used subconsciously or deliberately — what matters is how they help us proceed despite confusion.

Imagine what would have happened if researchers had built upon the early discoveries of Kahneman and Tversky by taking this different trajectory: studying how positive heuristics enable speculative thinking. Researchers could see heuristics as a source of strength rather than a source of bias and error, and could evaluate them by how well they let us speculate rather than by how closely their use conforms to statistical analyses.

References

Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4, 551–578.
