
Could Artificial Intelligence Help Avert Mass Shootings?

AI has some advantages, but prevention may still require a human touch.

Key points

  • AI lacks the interpersonal element of human threat detection.
  • Some of the most valuable observations come from family, classmates, and co-workers.
  • AI can be used as one of many tools to help threat assessors detect dangerous people.

The aftermath of every mass shooting involves retrospective analysis of red flags. What was missed? How could the violence have been averted? What should we have seen? As I discussed in a prior article, artificial intelligence (AI) could be used to help threat assessors detect dangerous people. [i] But how good is it, and do the potential risks outweigh the benefits?

Image by Gerd Altmann from Pixabay

Multitasking: AI vs. Humans

AI can potentially monitor public areas for threats better than humans, who struggle to multitask over extended periods of time, are vulnerable to cognitive fatigue, and cannot physically watch multiple television screens at once. Staffing shortages exacerbate these human limitations.

True, the use of AI to monitor potential threats raises issues of privacy and potential bias. But using human threat assessors to screen for potentially dangerous people involves some of the same concerns. Even something as basic as facial recognition, already in widespread use, can produce biased assessments whether performed by a person, a computer program, or both, given that humans write the programs.

Another issue involves the extent to which AI can detect suspicious activity. How does it know what to look for? Would an activity be classified as suspicious merely because of the race, gender, or apparent religious affiliation of the actor?

Because AI lacks human intuition and instinct, its decision-making in this area again depends on its programming. In fact, some companies already use AI to detect weapons through “inference algorithms,” raising concerns about racial profiling and the targeting of people legally permitted to carry guns. [ii] More generally, weapon-detection systems may be less effective when scanning a crowd than when screening people one by one, as at a security checkpoint, not to mention the costs and other practical considerations of installing such technology.

Reviewing AI-generated information may also be useful in identifying suspects and motives for violence. In the aftermath of tragedy, some research found that participants sought affection through chatbots to help them cope with stress and negative emotions after mass shootings. [iii] But because the goal is prevention, the question remains: Can we use AI to avert disaster in the first place?

The Value of Human Observation and Intervention

Preventing a mass shooting requires more than computerized analysis; it requires the observations of the people best positioned to notice red flags in terms of negative affect, expressed grievances, and behavioral changes. In this sense, averting a mass shooting demands knowledge and experience that AI doesn't have: close personal acquaintance with an individual in crisis.

Classmates and coworkers are in a good position to notice concerning behavior, and close friends and family members can compare it to a baseline, which may include how the suspect has dealt with stress, trauma, anger, or grievances in the past. Taken together, such early observations can avert disaster through effective intervention. The key is prompting those around the suspect to speak up.

As we continue to examine the interplay between artificial intelligence and human judgment, wisdom, and knowledge, we can brainstorm, in every sense of the word, ways to work together to prevent violence before it occurs.

References

[i] https://www.psychologytoday.com/intl/blog/why-bad-looks-good/202306/can…

[ii] https://www.fierceelectronics.com/sensors/startups-are-using-ai-help-st…

[iii] Cheng, Yang, and Hua Jiang. 2020. “AI‐powered Mental Health Chatbots: Examining Users’ Motivations, Active Communicative Action and Engagement after Mass‐shooting Disasters.” Journal of Contingencies and Crisis Management 28 (3): 339–54. doi:10.1111/1468-5973.12319.
