
Explaining Our Extreme Safety Demands for Self-Driving Cars

Research suggests that cognitive biases drive public resistance to self-driving cars.

Key points

  • Cognitive biases may lead to unrealistic safety requirements for self-driving cars.
  • Most people require higher levels of safety before agreeing to a ride with a self-driving car than with a human driver.
  • We tend to regard ourselves as safer drivers than we actually are.
  • The safer we believe ourselves to be as drivers, the more safety we demand from self-driving cars.
Source: Cottonbro/Pexels

Self-driving cars promise to brighten our lives in multiple ways. Americans spend almost an hour daily, on average, commuting to and from work. While self-driving cars will not cut down on our time commuting, they will allow us to use that time in more productive or fun ways.

Self-driving cars can also ease the lives of people unable to drive on their own due to old age or disability.

Finally, research has shown that introducing self-driving cars once they are at least 10 percent safer than the average driver would spare hundreds of thousands of the roughly 1.25 million lives lost in traffic accidents worldwide each year.

Despite the advantages of allowing self-driving cars on the road as soon as they are somewhat safer than the average human driver, there is significant public resistance to introducing them unless they are extremely safe (e.g., 90 percent safer than the average driver). A new study finds that two cognitive fallacies play a significant role in explaining this reluctance.

Two Cognitive Fallacies

In a study published in the May 2021 issue of Transportation Research Part C, psychologist Azim Shariff and colleagues examined the extent to which two well-known cognitive biases, illusory superiority and algorithm aversion, explain the public's unwillingness to accept a ride in a self-driving car unless it is extremely safe.

The illusory superiority bias, also known as the better-than-average effect, is the tendency to overestimate our own abilities relative to others. In the case of driving, it leads most of us to perceive ourselves as significantly better drivers than the average driver.

The researchers predicted that because most of us think of ourselves as better-than-average drivers, most of us would consider ourselves disadvantaged by accepting a ride in a self-driving car that is only somewhat safer than the average driver.

Algorithm aversion is an excessive reluctance to rely on algorithms even when those algorithms perform as well as, or somewhat better than, humans.

Algorithm aversion has previously been demonstrated with respect to medical diagnoses. For example, when given the choice of receiving a medical diagnosis from a human doctor or from a computer, most people prefer the human, even when the computer has been shown to be more accurate.

Shariff et al. predicted that algorithm aversion is a further factor explaining people's reluctance to ride in a self-driving car. Specifically, they predicted that people would be less willing to ride in a self-driving car than with a human taxi driver who has the same safety record.

Methodology

To identify the extent to which the better-than-average effect (or superiority illusion) and algorithm aversion affect the public's resistance to adopting self-driving cars, the researchers conducted three experiments.

In the first two experiments, participants were asked to indicate, on a sliding scale, the minimum level of safety they would require to accept a ride with a ride-sharing company (such as Uber or Lyft). Half of the participants were told that the ride was provided by a self-driving car, whereas the other half were told that it was provided by a human driver.

Participants were also asked to rate their own safety as drivers compared to other American drivers.

The two experiments framed safety in different ways. In experiment 1, safety was framed in terms of the percentage of accidents eliminated (e.g., 10 percent safer means that the accident rate is 10 percent lower than that of the average driver).

In experiment 2, safety was framed in terms of percentile ranking among US drivers (e.g., where the average human driver has a 1 in 600 lifetime chance of dying in a car crash, 10 percent safer means a 1 in 660 lifetime chance of dying in a car crash).
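To spell out the arithmetic behind that example (this is my reading of the framing, not a calculation taken from the paper): being 10 percent safer stretches the odds denominator by 10 percent, so 600 × 1.1 = 660, which corresponds to lowering the lifetime risk from roughly 0.167 percent (1 in 600) to roughly 0.152 percent (1 in 660).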

In experiment 3, participants were divided into two groups. The first group was asked the same questions as the participants in experiment 2. The second group was asked to rate their own safety as drivers compared to other American drivers but was first informed about the existence of the illusory superiority bias.

As in experiments 1 and 2, the researchers asked half of the participants about the safety threshold they required for riding with a self-driving car and the other half about the safety threshold they required for riding with a human driver.

Results

The findings revealed that the psychological biases of illusory superiority and algorithm aversion play a significant role in explaining the public's resistance to adopting self-driving cars unless they are extremely safe.

The first two experiments showed that, regardless of gender, age, and education level, participants estimated on average that 66 percent of accidents (experiment 1) or 76 percent of accidents (experiment 2) would be eliminated if everyone drove as they did. Respondents thus tended to regard their own driving ability as significantly better than average, as the illusory superiority bias predicts.

Respondents with higher estimates of their own driving ability also required higher safety thresholds from self-driving cars: the further above average people perceived themselves to be, the more safety they demanded, just as predicted.

The findings furthermore revealed that people were less willing to ride in a self-driving car than with a human taxi driver with the same safety record, at each of the safety levels tested (50, 75, and 90 percent).

Experiment 3 showed that alerting people to the pervasiveness of the illusory superiority bias led them to reduce their ratings of their own safety as drivers by an average of 12 percent.

Alerting participants to people's general susceptibility to the illusory superiority bias also lowered the safety thresholds they required for riding with human drivers. But the information made no difference to the level of safety they required of self-driving cars.

Practical Implications

The study thus suggests that self-driving cars would need to be substantially more than 10 percent safer than the average driver to see widespread adoption.

As Shariff et al. point out, this finding is concerning. If we wait for self-driving cars to be 90 percent safer than the average driver, rather than adopting them when they are 10 percent safer, many more people will lose their lives in traffic accidents in the meantime. Moreover, reaching a 90 percent improvement in safety will take much longer if we do not deploy self-driving cars once they are 10 percent safer and use the resulting driving data to improve them.

References

Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2021). How safe is safe enough? Psychological mechanisms underlying extreme safety demands for self-driving cars. Transportation Research Part C: Emerging Technologies, 126, 103069. https://doi.org/10.1016/j.trc.2021.103069
