
Origin of Evil

The easy ride from paper pushing to cruelty.

Source: Wikimedia Commons

The age-old question of whether evil is a psychological trait that lies dormant in all of us received renewed attention after World War II, during the Nuremberg war crimes trials, which lasted from November 20, 1945, to October 1, 1946. A common defense given by former Nazis accused of involvement in the Holocaust was that they were just following orders from their superiors.

This was also the defense of Nazi SS-Obersturmbannführer Adolf Eichmann, who was in charge of the logistics involved in the mass deportation of Jews to ghettos and extermination camps in Nazi-occupied Eastern Europe during World War II. He had managed to escape when the war ended but was eventually captured on May 11, 1960, in Buenos Aires, Argentina.

German philosopher and political theorist Hannah Arendt, who witnessed parts of Eichmann’s trial in Jerusalem, Israel following his capture, argued that Eichmann was a plain bureaucrat, seeing himself as “a law-abiding citizen” who “did his duty” and “obeyed orders.” She called it “the banality of evil.”

At Yale, psychologist Stanley Milgram wanted to test whether it could really be true that ordinary people will do evil things when ordered to do so by an authority figure.

In his experiment, which started in 1961, research participants were recruited via ads for a “study of memory.” At the site, an authoritative scientist in a white lab coat would tell the participant that the aim of the experiment was to test the effects of punishment on learning.

The participant was led to believe that he was randomly chosen to be the “teacher,” whereas another participant was randomly chosen to be the “learner.” In reality, the “other participant” was a confederate: an actor pretending to be a participant who would serve as the “learner” in the experiment.

The “teacher” was given a list of word pairs that he would teach the “learner.” After reading the word pairs once, he would read the first half of the word pairs and provide the “learner” with four options for the second word in the pair. The “learner” would indicate his answer by pressing one of four buttons corresponding to the four choices.

If the “learner’s” answer was incorrect, the “teacher” was asked to administer an electric shock, which was to increase by 15 volts for each wrong answer.

In each case, the “teacher” was led to believe that he was giving real shocks to the other participant. Furthermore, to ensure that he had a sense of how painful the shocks would be, he was given a real “sample” 45-volt electric shock from the shock generator at the outset.

In reality, the confederate playing the “learner” received no electric shocks; his reactions were staged. At 135 volts he would pretend to be in agony. At 300 volts he would bang on the wall and complain about his heart condition. After 300 volts, he would stop complaining and stop answering the questions.

Every one of the research participants stopped at some point and questioned the experiment or expressed a wish to check on the “learner.” But the authoritative scientist in the white lab coat would simply reply: “The experiment requires that you continue. Please go on.” A significant number of participants continued after being encouraged to do so, and some went all the way to 450 volts.

But what happens if there is no authority figure who requests ordinary people to do evil? Will they still do evil if they can get away with it?

The so-called Stanford prison experiment, which took place at Stanford University from August 14 to 20, 1971, was aimed at answering this question. The study was funded by the US Office of Naval Research and was conducted by psychologist Philip Zimbardo.

Twenty-four male students were recruited for the study. They were randomly assigned roles as prisoners and guards in an enclosure in the basement of the psychology building at the university.

Both groups adapted to their roles far beyond Zimbardo’s expectations. About one-third of the guards became genuinely sadistic, whereas many of the prisoners passively tolerated psychological abuse. For example, as punishment, the guards would remove the prisoners’ mattresses and not allow them to empty the bucket in their cell that they used as a toilet.

Perhaps most surprising was that, even though everyone knew it was an experiment and not real life, the prisoners willingly abused and harassed fellow prisoners when the guards asked them to. Even after the prisoners had lost all monetary compensation, and with it any incentive to remain in the study, they didn’t quit (with the exception of one prisoner who left early on).

Zimbardo shut down the experiment early owing to its questionable moral status. He subsequently argued that, because the roles of guards and prisoners were arbitrarily assigned, the students’ actions were unlikely to be the result of their personality traits and must instead have been situationally determined. Zimbardo called the hypothesis that good people turn evil when put in bad circumstances “The Lucifer Effect,” a hypothesis detailed in his 2007 bestseller of the same name.

Milgram’s and Zimbardo’s experiments are among the best known both inside and outside of psychology. Since they were first carried out, they have been taught to several generations of students in almost all disciplines. Almost anyone who has heard of the two experiments believes that they prove that good people turn evil when placed in circumstances that demand obedience, or that allow them to do evil without consequences.

However, subsequent studies have shown that the two experiments fail to provide any evidence for the hypothesis that we would all turn evil if we were thrown into sufficiently bad circumstances.

The first thing to note is that in Milgram’s experiment, only 65 percent of participants delivered the maximum 450-volt shock. While that is a shockingly large majority, it falls short of showing that everyone would have done the same.

Thomas Blass, psychologist and author of The Man Who Shocked the World, conducted a meta-analysis of twenty-one versions of Milgram’s experiment to determine whether there were any significant personality differences between participants who went all the way and those who quit before it got too far. Blass found that the volunteers who continued delivering the shocks were significantly more likely to favor authoritarian regimes than volunteers who refused to go all the way. They were also significantly more trusting of others and more inclined to follow the lead of others.

If Milgram’s experiment reflects Eichmann’s situation in the Third Reich, then obedience to authority may indeed have been the driving force behind Eichmann’s willing participation in mass murder. However, a reassessment of the Stanford Prison Experiment and historical details about Eichmann’s life tell a more sinister story.

Psychologists Thomas Carnahan and Sam McFarland conducted a study that looked at whether there are significant personality differences between people who sign up for “a psychological study of prison life,” like those in the original prison study, and people who sign up for a psychology study in general. They found that there were indeed striking differences. People who volunteered for the “prison study” were on average:

  • 27 percent more likely to be hostile or aggressive.
  • 26 percent more likely to approve of and enforce social hierarchies.
  • 12 percent more likely to consider themselves superior to others and scorn others.
  • 10 percent more likely to be authoritarian or to submit to an authority.
  • 10 percent more likely to manipulate others for personal gain.
  • 7 percent less likely to show empathy.
  • 6 percent less likely to be considerate or altruistic.

Historians have uncovered evidence that Nazis like Eichmann were more akin to the participants in the Stanford Prison Experiment than to the obedient 65 percent who delivered the maximum shocks in Milgram’s experiment.

In his book Becoming Eichmann, which challenges Hannah Arendt’s conclusions, historian David Cesarani writes:

Eichmann’s Nazi convictions and his unquestioning obedience to orders were part of the same ideological package. His prejudices and predilections had become so ingrained by 1945 that they surface in the marginalia he wrote in books while in Argentina [where he was hiding after the war]. But does this also mean that he had been a helpless marionette in the Nazi hierarchy? Was Eichmann merely a robotic subordinate, which was the image he tried to construct while in captivity? Again, he himself provides evidence to the contrary. He averred to [Willem] Sassen [who interviewed Eichmann about his involvement in the Holocaust] that “I am an idealist.” Another time he told him: “When I reached the conclusion that it was necessary to do to the Jews what we did, I worked with the fanaticism a man can expect from himself.” Obedience can certainly coexist with zeal, but the self-denying portrait Eichmann gave of himself while on trial is belied by evidence of his aptitude for power games within the Third Reich and his willingness in 1944, at least, to play one power centre against another. (Becoming Eichmann, p. 361)

As Cesarani makes clear in his book, Eichmann wasn’t evil or even particularly anti-Semitic when he first joined the Nazi party. Eichmann became evil simply by being a member of the Nazi party. That may sound strange, but it’s well-documented that like-minded individuals debating a topic tend to adopt more and more extreme viewpoints. Their ultimate viewpoint will often be far more radical than any of the viewpoints held before deliberation. This phenomenon is also known as spontaneous group polarization.
