
When Moral Reasoning Isn’t Just Rationalization

We're only human.

Key points

  • Some researchers say reasoning helps people virtue-signal rather than actually become virtuous. 
  • Reasoning most often serves as rationalization when people invoke principles.
  • Consistency reasoning regularly shapes moral attitudes and behavior.
  • Consistency reasoning has led people to donate to charity and become vegetarians or vegans.

Coauthored by Victor Kumar and Richmond Campbell

Many scientists are cynical about moral reasoning. They claim that humans do not reason about right and wrong to improve their moral perspectives; they do so to justify themselves to others. Reasoning helps people virtue-signal rather than actually become virtuous.

Consider psychologist Jonathan Haidt, who argues that moral judgment is almost exclusively driven by intuitions. When people offer reasons for a moral opinion, they may sincerely believe that they are explaining what caused them to hold it. But what they are really doing, usually, is “post-hoc rationalization,” offering after-the-fact justifications for their opinions. As Haidt puts it, people are not like judges who weigh evidence and reasons to form moral opinions. They are more like lawyers who can find arguments for whatever opinion they happen to hold.

Haidt has a point. People do engage in rationalization. However, as we have argued, moral reasoning does regularly shape attitudes and behavior. To see how, we need to combine a psychological perspective with a philosophical one.

Philosophers tend to have a more optimistic view of moral reasoning than scientists. Plato, for example, famously thought that reason could control emotion. However, reasoning in moral philosophy tends to be fairly esoteric. It’s a safe bet that many philosophical arguments don’t have much impact on most people’s beliefs.

Yet, there are some philosophical arguments that have demonstrably shaped attitudes and behavior. Peter Singer, for one, has written essays and books that have led people to donate large portions of their income to charity. In one popular argument, Singer asks you to imagine discovering a toddler who has wandered away from her parents and into a pond. She’ll drown if you don’t immediately walk in and rescue her. (The pond is shallow, so there’s no risk you’ll drown yourself.) There’s no time to remove your fancy new outfit, but no matter. A child’s life is obviously worth much more than your clothing. Your obligation is clear: You must act.

But then Singer asks: What’s the difference between this drowning child and a starving child in the developing world? In each case, you can save a child’s life for about the same cost. If you’re obligated to save the drowning child then it seems that you’re also obligated to save a starving child. The next time you decline to give to an effective charity in the developing world, it’s as though you’re choosing to let a child drown.

Do you feel the pull? If so, does that mean that moral reasoning is psychologically powerful after all? It depends.

Reasoning most often serves as rationalization when people invoke principles. That’s because moral principles, such as Singer’s principle that we should help someone when we don’t have to give up anything of comparable value, are notoriously flexible. By themselves, they leave much open to interpretation. In addition, multiple principles usually bear on any given case; you can appeal to one principle and conveniently ignore other potentially relevant ones. These are some of the reasons that scientists like Haidt are legitimately skeptical about “principle reasoning.”

What makes arguments like Singer’s powerful is that they elicit a different form of moral reasoning. “Consistency reasoning” does not decide which principles apply to any given case and which do not. Instead, it identifies a firm intuition about one case and extends it to other cases. Consistency reasoning has a powerful effect on thought and behavior.

There’s evidence that people actually change their beliefs in response to consistency reasoning, including experimental research involving famous trolley cases. Most people are willing to pull a switch to save five lives at the cost of one. But most are not willing to push one person off a footbridge to save five others. And if they hear the push case first, then they are much less willing to endorse pulling the switch. Why? Because if sacrificing one for the sake of five is wrong in one case, then it’s probably wrong in another.

Consistency reasoning is persuasive only if there are no relevant differences between cases. Of course, there are many differences between the actual case of a starving child and the hypothetical case of a drowning child. But Singer’s argument has been so influential because these differences do not strike people as morally relevant. One child is nearby, the other far away, but so what? Distance doesn’t matter, morally speaking.

Learning how to engage in consistency reasoning is an important step in moral education. When a child hurts someone, parents often ask: “How would you like it if he did that to you?” They invite the child to put herself into another’s shoes. Hopefully, she sees that simply being a different person is not a morally relevant difference.

Or consider how consistency reasoning has driven many people to become vegetarians or vegans. Most people believe that it would be wrong to support a system that tortures cats and dogs. But if that’s true, then why isn’t it similarly wrong to support the torture of farm animals? Arguably, some animals have greater moral significance than others if they are cognitively more sophisticated or have a richer capacity to feel pleasure or pain. But by this measure, pigs have the edge over pets. Therefore, eating bacon from a factory farm is no better than eating a dog after subjecting it to months of torture.

Maybe you feel the pull of this example of consistency reasoning but want to resist the conclusion about eating meat. We’re open. Give us an argument.

References

Excerpted/adapted from A Better Ape: The Evolution of the Moral Mind and How It Made Us Human
