
The Collision Between Goals and Accuracy

Motivated reasoning can produce challenges when goals or values conflict.

Key points

  • Decisions are typically influenced by motivation, which involves meeting or progressing toward some goal.
  • When goals and interests collide, people often engage in motivated reasoning to balance the competing goals.
  • To counteract motivated reasoning, people can avoid making decisions when highly emotional and avoid jumping to conclusions.
Source: Photo by Robert Anasch on Unsplash

In my last post, I discussed Angel Hernandez and how easy it was for him to see what he wanted to see[1]. The desire to analyze Hernandez’s situation in a timely manner pre-empted a more thorough discussion of the general topic of motivated reasoning[2]. I provide that more thorough discussion here, and it all starts with goal-directed decision making.

The Importance of Goals for Decision Making

Most decisions we make, conscious or unconscious, are influenced by motivation; that is, there is an intended purpose underlying those decisions. That purpose is usually about meeting or progressing toward some goal – whether to buy a new house, which TV to purchase, when to make dinner.

However, goals do not operate in isolation from each other. They can complement each other (e.g., obtaining a promotion might allow someone to upgrade the timeline for buying a new car), but they can also conflict with one another (e.g., that promotion now requires more time at work, making it more difficult to spend time with family).

Not all goals, though, operate at a conscious level. We have many goals that operate at lower levels of consciousness, such as the desire to be a good person or the desire for autonomy. These goals may become salient only when activated by specific situations, such as those I discussed in reference to “following the science”.

These less conscious goals may stem from more group-based values, such as those discussed in Haidt’s Moral Foundations Theory (MFT) or Curry’s Morality-as-Cooperation (MAC) Theory. They may also stem from more self-serving needs and values, such as those discussed in McClelland’s Needs Theory, Deci and Ryan’s Self-Determination Theory, or Maslow’s much-maligned Hierarchy of Needs[3].

Just as with more conscious goals, these values- and needs-based goals can complement each other, but there are also many times when they conflict. How we respond to this conflict plays a significant role in our decision making.

When Goals Conflict

As decision makers, we often have a variety of competing goals and interests. When goals and interests collide, we choose how to balance them, such as by prioritizing one goal over others or revising one or more goals. How we approach this conflict involves determining which trade-offs are acceptable.

Let’s consider a simple example of such a possible conflict. Do you speed (and put yourself at risk of getting a ticket or getting into an accident) to make up for lost time when you’re running late? For many people, the answer is likely to be a function of the specific characteristics of the situation. How late am I? How important is it to be on time? Where am I going? How much traffic is there?

How we answer these questions, and the relative weight we attach to those answers, will likely determine how we proceed — whether we decide to drive as fast as needed to make it on time, play it safe and drive our normal speed even if we’re late, or perhaps drive a little faster than normal to try not to be excessively late.

If you accept that people are likely to reach different conclusions about how to address this issue, and that for many people those conclusions are influenced by their answers to the kinds of questions I posed, you now have a basic understanding of motivated reasoning. That is, we reason our way into a conclusion. It doesn’t have to be an erroneous conclusion; it just has to be connected in some way to our goals. How we prioritize and balance conflicting goals will often determine the reasoning we use.

Some Caveats to Consider

Based on the above, it might sound like we can reason our way into almost any conclusion. This, though, wouldn’t be entirely accurate. There are at least three specific factors that increase our propensity to reason our way into a particular conclusion.

First, the scenario has to involve competing interests. Technically, wanting to reach an accurate conclusion and being motivated to do so is a form of motivated reasoning[4]. This scenario, though, is not where motivated reasoning presents problems. Instead, the challenge occurs when there is a conflict between reaching an accurate conclusion and reaching an erroneous conclusion that is consistent with other goals or values (as was the case with Angel Hernandez). Alternative goals/values could include protecting one’s sense of self, wanting to make a quick or effortless decision, or wanting to believe claims made by people with whom we most closely identify (e.g., tribalism).

There are also times when one conclusion isn’t necessarily more accurate than another, such as in deciding how to resolve a scheduling conflict (e.g., determining whether to keep a scheduled doctor’s appointment or go to lunch with a friend you haven’t seen in a long time). In such situations, there’s usually conflict among goals or values (as in the example above about the possible conflict between driving safely and being late).

Second, there has to be a fairly strong bias toward one of the possible conclusions. This becomes a potential problem in situations where the biased conclusion is inaccurate, violates an existing law/ethical rule, or could potentially harm us or others. The bias could stem from several sources, such as strongly held but erroneous beliefs about which conclusion is more accurate (the result of faulty information), strong emotions (which steer us toward conclusions consistent with those emotions), or self-serving motivations that prioritize our needs/preferences ahead of other criteria (e.g., what is ethical/moral). The stronger the bias, the greater the likelihood we’ll make a decision that is in line with the bias.

Lastly, we have to be able to construct “a justification of [our] desired conclusion that would persuade a dispassionate observer” (Kunda, 1990, pp. 482-483). That is, when the first two conditions are satisfied, we’re likely to try reasoning our way into an erroneous or self-serving conclusion, but “only if [we] can muster up the evidence needed to support [that reasoning]” (p. 483). This may cause us to base our decision on evidence that supports the biased conclusion, to discount evidence that refutes the desired conclusion, to overweight the benefits or underweight the costs of a preferred conclusion relative to alternative conclusions, or to rationalize the prioritization of one goal/value over another.

How to Address Motivated Reasoning

We engage in motivated reasoning regularly, and in many cases, it serves us quite well. If we weren’t motivated to make decisions that met our goals, were consistent with our values, or fulfilled our needs and desires, we’d end up stumbling through life without doing much of anything except by chance.

However, I suspect most of us have also made numerous decisions (maybe too many to count) in which a situation activated goals, values, needs, or desires that we prioritized, justified, and then acted on, leading to decisions with unintended consequences. Many flawed decisions people make (e.g., texting while driving, driving drunk, crimes of passion) and many accusations we make about others based on very limited information about them are the result of motivated reasoning. As Kunda (1990) argued, “illusory beliefs… can be dangerous when they are used to guide behavior and decisions, especially in those cases in which objective reasoning could facilitate more adaptive behavior” (pp. 495-496).

To counteract some of the dangers, we can do the following:

  • Avoid making decisions when experiencing intense emotions. Strong emotions can result in reasoning consistent with the emotion rather than consistent with what is in our long-term best interests.
  • Be attentive to situations where you are highly motivated to prove yourself right. The more intense our desire to vindicate our conclusion, the more likely we are to open ourselves up to reasoning errors.
  • Avoid jumping to conclusions based on what you think you know. This is especially important when it comes to interactions with others, where we often make assumptions that aren’t grounded in any situation-specific evidence[5].

It’s important to remember that motivated reasoning is part of human decision making. When goals/values conflict, the decision we end up making will involve trade-offs between those competing goals/values. However, we can avoid situations that are more likely to lead to erroneous or regrettable decisions.

Footnotes

[1] To clarify, this concerns his lawsuit, not his calls on the baseball field.

[2] I provided a brief overview, but it was insufficient to really understand the concept of motivated reasoning more generally.

[3] For more information on the issues underlying Maslow’s Hierarchy and the presentation of it as a pyramid, see Winter (2015), Enright (2018), and Chan (2020).

[4] In fact, it would be ideal if the desire to reach an accurate conclusion always corresponded well with other goals and values. Unfortunately, this is not at all the case.

[5] This is an issue I wrote about when discussing the tension between science and lived experience.
