Motivated Reasoning

Our Internal Predictive Algorithms

Biases and motivated reasoning can make our predictions less accurate.

Key points

  • It's not unusual for people to rush to judgment based on limited information.
  • Once a judgment has been made, it's difficult to change without overwhelmingly convincing evidence to the contrary.
  • In sports, this kind of flawed decision-making leads to losing games; in the real world, the consequences may be even more severe.
Source: Matt Grawitch, Ph.D.

My wife and I have a cat named Caveat, and he operates under a set of flawed predictive cat algorithms[1]. If you’re sitting on the couch and decide to stand up, he thinks you’re planning on giving him treats and subsequently runs to the kitchen. Heading down the hallway? You must have to use the restroom, he predicts, so he races in there ahead of you[2]. Unfortunately for him, his predictive cat algorithms often result in erroneous conclusions about human behavior. And yet, he repeatedly makes these same predictions.

Obviously, the cat’s algorithm is correct sometimes. He does get treats (though not as often as he apparently thinks he should), and his humans, from time to time, do use the restroom. And yet, his predictions operate with remarkably low precision, largely because he makes snap judgments based on very limited information, immediately jumping to conclusions and acting on them. Humans often do the same thing (though our precision tends to be a wee bit better).

Rushing to judgment

When we make a split-second judgment based on limited information, one of the following inevitably results:

  1. We maintain our confidence in it, even if that initial judgment is erroneous.
  2. We revise it as we acquire information that conflicts with our original judgment.
  3. We become more confident in it as we acquire information that is aligned with our original judgment.

In many ways, these judgments operate like a predictive algorithm, though the algorithm itself is kind of a black box at this point[3]. We don’t know all that much about the factors that lead us to make situation-specific predictions, nor do we know much about when and why we revise our initial predictive judgments. There is likely variability among people, due to factors like genetics and past experiences, that influences what judgments we make and how wedded to those judgments we are (and, subsequently, our likelihood of revising them). There are, however, a few things we’ve figured out, and to help explain them, I’m going to turn to one of my favorite pastimes: baseball.

The more confident we are in an initial judgment, the more convincing the evidence we’ll need to revise that judgment.

As I mentioned elsewhere, what we expect or predict to happen forms the basis of our biases, and the stronger the bias, the more likely we are to be wedded to conclusions stemming from that bias. Part of the reason for this concerns the amount of confidence we place in our predicted conclusion. Higher confidence in the initial decision generally means a higher likelihood of sticking with that decision.
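To make that relationship concrete, here’s a toy sketch of my own (an illustration, not a model from the research literature) that treats confidence as a Bayesian prior. The function name and the specific numbers are made up; the point is simply that the stronger the prior, the more contrary evidence it takes to push belief below the point of revision.

```python
# Toy model: confidence in a judgment as a Bayesian prior. Assume each
# piece of contrary evidence cuts the odds on our judgment in half
# (likelihood ratio 0.5). How many such observations does it take
# before confidence drops below 50 percent?

def updates_to_revise(prior, likelihood_ratio, threshold=0.5):
    """Count contrary observations needed to push belief below threshold."""
    p, n = prior, 0
    while p >= threshold:
        odds = p / (1 - p)          # probability -> odds
        odds *= likelihood_ratio    # Bayes' rule in odds form
        p = odds / (1 + odds)       # odds -> probability
        n += 1
    return n

print(updates_to_revise(0.70, 0.5))  # moderate confidence: 2 observations
print(updates_to_revise(0.99, 0.5))  # high confidence: 7 observations
```

Same evidence, very different stubbornness: a judgment held at 99 percent confidence takes several times as much disconfirming evidence to abandon as one held at 70 percent.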

We see this a lot in baseball when managers continue to make what seem to be the same bad decisions over and over (e.g., regularly starting a position player batting .198 over another player batting .288[4], playing lefty-righty matchups, maintaining the same batting order for several games even if it hasn’t produced offensively). Upon questioning, they will often report being confident that “[Player A] is about to get hot,” that “statistics show…” or “this is our guy.”

These responses indicate a high degree of confidence in those initial judgments, which makes it easier to remain wedded to them, even when a preponderance of the evidence doesn’t support those decisions. It might seem as though managers should just look more closely at the evidence to make better decisions, but that isn’t necessarily going to result in different decisions because…

It is often easy to find evidence to support our initial judgment.

When we have high confidence in our initial judgment, it can trigger a conflict in our motivation. On the one hand, we are and should be motivated to make a defensible decision. On the other hand, we are also motivated to maintain our sense of self, and a part of that sense of self involves a sense of competence[5]. Ergo, we often end up motivated to defend our initial judgment, a tendency known as motivated reasoning, which may manifest as confirmation bias.

In baseball, this often becomes evident in post-game press conferences with the manager. Pitcher A has a 7.00 ERA since the All-Star break, but the manager continues to bring him in to try to close games, resulting in a series of blown saves. Why? “He’s our guy,” or “He’s been really good most of the season,” or “He just had a bad night” all indicate a defense of a decision that may have been made in error. Although any of these claims may be accurate, sometimes a preponderance of the evidence tells a more nuanced or different story (which is what led to questioning the decision to begin with).

Now, it isn’t necessarily the case that motivated reasoning and confirmation bias are going to cause us to stick with a decision, though they certainly increase the likelihood of that happening. It also doesn’t mean, as I’ve pointed out before, that motivated reasoning necessarily leads to erroneous decisions. Unfortunately, though…

When our initial judgment is incorrect and motivated reasoning is strong, we are unlikely to revise our judgment unless and until there is no choice but to do so.

When we are extremely wedded to a particular judgment, it becomes quite difficult to revise that judgment even when we should. In such instances, the best we can hope for is that we either (1) amass counter-evidence of sufficient weight that our confidence in the initial judgment weakens, or (2) have a detrimental experience benign enough in its consequences that it prompts us to rethink that initial judgment. Unfortunately, neither of these two outcomes is guaranteed to occur, in which case we may not have the opportunity to revise our judgment until it is too late.

In baseball, this is easy to highlight, as managers regularly make decisions about when to put in relief pitchers and when to pull them. Sometimes, the reliever who’s been put in, whether a middle reliever or a closer, simply isn’t able to get the job done. And yet, managers will often wait until it’s too late to adjust. A case in point comes from a game between the St. Louis Cardinals and the Pittsburgh Pirates on August 26, 2021[6]. The Cardinals led 7-3 going into the bottom of the 7th, and according to FanGraphs, Pittsburgh had only a 6.7 percent chance of winning the game at that point. Enter a reliever who was permitted to stay in the game while giving up six consecutive hits, including a home run, and failing to record a single out, raising the Pirates’ win probability to 97.2 percent. The manager’s defense of the decision not to pull him was that this specific reliever had been really good and that managers can’t pull guys just because they’ve given up “a couple of hits”[7].

Although it is certainly true that we must be careful that we don’t overreact when evidence contradicts our initial judgment, we also must be careful that we don’t underreact. In the example I provided (and many others that could be added to it), the evidence was mounting, but because the manager was overly confident in his initial judgment, he failed to revise that judgment until it was too late.

So what?

The moral of the story here is that our initial judgments can be flawed. The more confident we are in those judgments—especially when we’re motivated to maintain them for reasons other than accuracy—the less likely we are to adjust to evidence that contradicts them and the more likely we are to make an erroneous decision. Although baseball isn’t life or death (don’t tell that to rabid fans), the same issues can surface in important life decisions, such as those concerning COVID health and safety. That’s why it is important to recognize when we might have a motivational conflict and try to flip the script on our motivated reasoning.

Footnotes

[1] We have three cats, but only one of them makes such a host of erroneous assumptions about our intentions.

[2] Those who use the restroom are captive prisoners of petting demands.

[3] In some ways, it’s like complaints about machine learning and how those predictive algorithms develop.

[4] For those who are not baseball fans: the player with the higher average gets a hit about 9 percentage points more often per at-bat, which works out to a batting average roughly 45 percent higher.

[5] Which, along with autonomy and relatedness, form the foundation of self-determination theory.

[6] The win probability chart for this game can be found on FanGraphs.

[7] You can find the post-game manager interview online.
