


The Seven "Irrational" Habits of Highly Rational People

How our intuitions about rationality can lead us astray.

Key points

  • If someone made perfectly accurate judgments and sound decisions, would we recognize it?
  • The science and philosophy of judgment and decision-making suggest the answer is often “no.”
  • This post offers suggestions for how we can distinguish the genuinely rational from the irrational.

Part One of Two

The importance of recognizing what's rational and what's not

If someone were as rational as could be, with sound decisions and many accurate and trustworthy judgments about the world, would we recognize it? There are reasons to think the answer is “no.” In this post, I aim to challenge prevailing intuitions about rationality and argue that the philosophy and science of judgment and decision-making reveal several ways in which what appears to be rational diverges from what actually is rational.

This post takes its title from Stephen Covey’s well-known book “The 7 Habits of Highly Effective People.” I argue that, similarly, there are seven habits of highly rational people, but these habits can appear so counterintuitive that others label them “irrational.” The rationality of these habits may be obvious to specialists in judgment and decision-making, but I find it is often not so obvious to the general readers for whom this post is written.

In any case, these habits are not only potentially interesting in their own right; recognizing them may also open our minds, help us better understand the nature of rationality, and help us better identify the judgments and decisions we should, or should not, trust in our own lives.

The seven "irrational" habits of highly rational people

1. Highly rational people are confident in things despite “no good evidence” for them:

The first habit of highly rational people is that they are sometimes confident in things when others think there is no good evidence for them. One case where this shows up extremely clearly is the Monty Hall problem, as I discuss in detail in a post here.

In the problem, a prize is randomly placed behind one of three doors. You select a door, and then the game show host, Monty Hall, opens one of the other doors that does not conceal the prize. If the door you select conceals the prize, Monty Hall opens either of the other two doors with equal likelihood. But if the door you select does not conceal the prize, Monty Hall must open the one remaining door that you did not select and that does not conceal the prize.

In these circumstances, as I explain in the post, if you select door A and Monty Hall opens door C, then there’s a two-thirds probability that door B conceals the prize. In this case, then, door C being opened constitutes “evidence” that door B conceals the prize. Furthermore, let us consider an adaptation called the “new Monty Hall problem.” In this case, door C would be opened with a 10 percent likelihood if door A conceals the prize, in which case there’s provably a 91 percent probability that door B conceals the prize after door C is opened. In this version, the truly rational response is to be very confident that door B conceals the prize.
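For readers who want to check these figures, here is a minimal simulation sketch (mine, not part of the original post) written in Python. It assumes the setup described above: the player always picks door A, and when door A conceals the prize the host opens door C with probability 0.5 in the standard problem or 0.1 in the “new” variant.

import random

def prob_b_given_c_opened(p_open_c_if_a_has_prize, trials=200_000):
    # Estimate P(prize is behind door B | player picked door A and host opened door C).
    b_wins = 0     # runs where door C was opened and the prize was behind door B
    c_opened = 0   # runs where door C was opened at all
    for _ in range(trials):
        prize = random.choice("ABC")  # prize placed behind a random door
        if prize == "A":
            # Host may open B or C; he opens C with the given probability.
            opened = "C" if random.random() < p_open_c_if_a_has_prize else "B"
        elif prize == "B":
            opened = "C"  # host must avoid the prize (B) and the chosen door (A)
        else:
            opened = "B"  # prize is behind C, so the host must open B
        if opened == "C":
            c_opened += 1
            b_wins += (prize == "B")
    return b_wins / c_opened

print(prob_b_given_c_opened(0.5))  # standard problem: roughly 0.67 (two-thirds)
print(prob_b_given_c_opened(0.1))  # "new" variant: roughly 0.91 (91 percent)

With the 0.5 rule, the estimate converges on roughly two-thirds; with the 0.1 rule, on roughly 0.91, matching the probabilities above.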

But despite this, in my experiments, every untrained participant who encountered these problems got the wrong answer, and the vast majority thought door B had only a 50 percent probability of concealing the prize in both versions of the problem. This effectively means they thought there was no good evidence that door B conceals the prize when in fact there was!

What’s more, the studies found that these participants not only failed to recognize this good evidence but were also more confident in their incorrect answers. Compared to trained participants, who were more likely to get the correct answers to these problems, the untrained participants were on average more confident in the correctness of their (actually incorrect) answers, and they thought they had a better understanding of why those answers were correct.

What this shows is that truly rational people may recognize objectively good evidence for hypotheses where others think there is none, leading them to be confident in things in ways that others regard as irrational. In the post, I also discuss some more realistic scenarios where this could in principle occur, including examples from medicine, law, and daily life.

2. They are confident in outright false things:

But even if someone is rationally confident in something, that thing will still turn out to be false some proportion of the time.

In fact, according to one norm of trustworthy judgment, the things a perfectly accurate person is 90 percent confident in will be false approximately 10 percent of the time. In other words, a perfectly accurate person would be “well calibrated” in the rough sense that, in normal circumstances, anything they assign a 90 percent probability to will be true approximately 90 percent of the time, anything they assign an 80 percent probability to will be true approximately 80 percent of the time, and so on.

We can see this when we look at well-calibrated forecasters, who might assign high probabilities to many unique events: while most of those events will happen, some will not, as I discuss in detail here. Yet if we focus on a small sample of cases, such forecasters might look less rational than they are, since they will be confident in some outright false things.
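To make the idea of calibration concrete, here is a minimal sketch (again mine, and purely illustrative) that groups hypothetical forecasts by their stated probability and compares each stated probability with the fraction of forecasts in that group that came true.

from collections import defaultdict

def calibration_table(forecasts):
    # forecasts: list of (stated_probability, event_occurred) pairs.
    groups = defaultdict(list)
    for prob, occurred in forecasts:
        groups[round(prob, 1)].append(occurred)
    # For each stated probability, report the observed frequency of true outcomes.
    return {prob: sum(outcomes) / len(outcomes) for prob, outcomes in sorted(groups.items())}

# Hypothetical forecasts from a well-calibrated forecaster: the 90 percent
# group is right about 9 times in 10, which still means being confidently
# wrong about 1 time in 10.
forecasts = [(0.9, True)] * 9 + [(0.9, False)] + [(0.8, True)] * 8 + [(0.8, False)] * 2
print(calibration_table(forecasts))  # {0.8: 0.8, 0.9: 0.9}

On this toy data the forecaster is perfectly calibrated, yet one of their 90 percent forecasts is still flatly false.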

3. They countenance the “impossible” and are “paranoid”:

However, studies suggest many people (including experts with doctorates in their domain, doctors, jurors, and the general public) are not so well calibrated. One example of this is miscalibrated certainty: that is, when people are certain (or virtually certain) of things that turn out to be false.

For instance, Philip Tetlock tracked the accuracy of a group of political experts’ long-term predictions and found that the outcomes they were 100 percent certain would not occur actually did occur 19 percent of the time. Other studies likewise suggest people can be highly confident in things that are false a significant portion of the time.

But a perfectly rational person wouldn’t be so miscalibrated, and so they would assign higher probabilities to things that others are certain are “impossible.” For example, a perfectly calibrated person would perhaps assign a 19 percent probability to the kinds of events that Tetlock’s experts were inaccurately certain would not happen, or they might even assign some of them much higher probabilities, like 99 percent, if they had sufficiently good evidence for them. In such a case, the perfectly rational person would look quite “irrational” from the perspective of Tetlock’s experts.

But insofar as miscalibrated certainty is widespread among experts or the general public, so too would be the perception that truly rational people are “irrational” because they countenance what others irrationally consider “improbable” at best or “impossible” at worst.

Furthermore, when one has miscalibrated certainty about outcomes that are “bad,” a rational person will not only look as though they believe in the possibility of “impossible” outcomes; they will also look irrationally “paranoid” for doing so, since the supposedly “impossible” outcomes they countenance are bad ones.

Continue to Part Two
