
Can Artificial Intelligence Predict Suicide?

Research: Using machine learning to neuroscientifically infer who means to die.

In "Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth" (Just et al., 2017) describe their research in using artificial intelligence models to use brain imaging to predict who may be most likely to try to end their lives.

They note:

"The assessment of suicide risk is among the most challenging problems facing mental health clinicians, as suicide is the second-leading cause of death among young adults. Furthermore, predictions by both clinicians and patients of future suicide risk have been shown to be relatively poor predictors of future suicide attempt. In addition, suicidal patients may disguise their suicidal intent as part of their suicidal planning or to avoid more restrictive care. Nearly 80% of patients who die by suicide deny suicidal ideation in their last contact with a mental healthcare professional. This status identifies a compelling need to develop markers of suicide risk that do not rely on self-report. Biologically based markers of altered conceptual representations have the potential to complement and improve the accuracy of clinical risk assessment."

Predicting aggression is a notoriously difficult and important task for mental health professionals. When it comes to aggression directed against oneself, the ability to tell who is most at risk, so that appropriate interventions can be made, is uncertain at best. Suicide takes a terrible toll on families and can affect survivors across generations.

Generally, mental health professionals, especially psychiatrists, are trained to assess risk based on a variety of factors, including past personal and family history and current risk factors such as insomnia, agitation, and the presence of well-formulated plans and preparations, among others. With an estimated 44,000-plus deaths by suicide per year in the U.S. as of a few years ago, suicide is a public health crisis. It is one of the leading causes of death among adolescents and young adults, and rates are rising, as noted by the NIMH.

Suicidal thinking is not itself uncommon, and it is not always associated with psychiatric illness. However, suicidal thoughts and behaviors may occur in many psychiatric conditions, including major depression, bipolar depression, developmental trauma, substance use disorders, anxiety disorders, psychotic disorders, and personality disorders (notably borderline personality disorder). Suicidality may be chronic and associated with lower risk, but sometimes risk peaks, creating an immediate threat to life.

There are various approaches to addressing suicidal thinking and self-destructive behavior. Broadly (and they are not mutually exclusive), these range from insight-oriented approaches geared toward understanding where the suicidality comes from and what it means, to approaches examining the psychological function of suicidal thinking (e.g., self-directed aggression, relief/escape, communication), to therapies designed to reduce the frequency and intensity of suicidality and prevent self-harming behaviors. The decision to recommend or mandate emergency attention for suicide risk is a very serious one: While implementing emergency care can be life-saving, it can also entail involuntary treatment and care in chaotic environments (e.g., emergency departments and inpatient psychiatric units), which can be, for some, counter-therapeutic.

Given these considerations, developing tools to better predict suicide risk is necessary and long overdue. Advances in neuroimaging and computation are beginning to make it possible to analyze brain activity and use machine-learning models to determine who is likely to harm themselves and who is not, supporting better clinical decision-making and treatment approaches.

In their study, Just and colleagues compared patients with suicidal ideation against a control group without suicidality, seeking to answer three questions:

  1. Do study subjects differ from controls in their neural representations of death-related and suicide-related concepts, such that a computer can consistently tell the difference from imaging?
  2. Can a machine-learning model tell the difference between people who have attempted suicide and those who have not?
  3. Are there different emotional signatures between subjects and controls that would allow a computer to tell whether someone belongs to the suicidal ideation group or the control group? [This last question is closely related to the question of whether a computer-based analysis could predict who is likely to try to harm themselves.]

To this end, the researchers recruited 79 young adults who either were currently experiencing suicidal ideation or were controls with no personal or family psychiatric history. They administered several suicide-related instruments and assessed for depression, anxiety, childhood trauma, and other psychiatric conditions using clinician evaluation and validated rating scales. They then used functional magnetic resonance imaging (fMRI) to analyze brain activity in relation to a framework of 30 concepts related to suicide and emotion.

[Figure: The 30 stimulus concepts used in the study. Source: Just et al., 2017]

During the scan, subjects were presented with three groups of ten words related to 1) suicide (e.g., "death," "overdose"), 2) negative emotion (e.g., "sad," "gloom"), and 3) positive emotion (e.g., "happy," "carefree"). They were asked to actively consider each idea presented to them in detail, and the words were presented in a pattern designed to provide sufficient time and variation to allow for an accurate analysis of the underlying brain activity.
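
To make the presentation design concrete, here is a minimal Python sketch of how such a randomized, repeated word schedule might be generated. The timing values, the repetition count, and the function names are illustrative assumptions, not parameters reported by the study.

```python
import random

# Illustrative timing parameters; the study's actual presentation and
# inter-stimulus durations are assumptions here, not taken from the paper.
WORD_DURATION_S = 3.0   # assumed time each word stays on screen
FIXATION_S = 4.0        # assumed rest between words
REPETITIONS = 6         # assumed number of presentations per word

# Example words named in the article (two per category, for brevity).
WORDS = ["death", "overdose",      # suicide-related
         "sad", "gloom",           # negative emotion
         "happy", "carefree"]      # positive emotion

def build_schedule(words, repetitions=REPETITIONS, seed=0):
    """Return (onset_seconds, word) pairs: each word is shown `repetitions`
    times, re-shuffled within each block, so every concept gets repeated,
    well-spaced presentations for stable estimates of brain activity."""
    rng = random.Random(seed)
    schedule, t = [], 0.0
    for _ in range(repetitions):
        block = list(words)
        rng.shuffle(block)                 # new random order per block
        for word in block:
            schedule.append((t, word))
            t += WORD_DURATION_S + FIXATION_S
    return schedule

for onset, word in build_schedule(WORDS)[:6]:
    print(f"{onset:6.1f}s  {word}")
```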

The resultant data were analyzed in three steps (a minimal code sketch follows the list):

  1. selecting areas of fMRI data ("voxels") with stable meaning-related representations ("stable semantic tuning curves") across the repeated word presentations;
  2. determining how those semantic voxels cluster into larger stable groups, indicating the anatomic location of the neural semantic representation for a given concept; and
  3. using machine-learning methods to train a computer model (artificial intelligence) to tell apart the semantic patterns evoked by suicide-related, positive-emotion, and negative-emotion concepts in subjects versus controls.
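
The sketch below walks through these three steps on synthetic data. The array shapes, the group sizes, the stability measure, the choice of a Gaussian Naive Bayes classifier, and the leave-one-out evaluation are all simplifying assumptions made for illustration; this is not the authors' code.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in data: per-subject, per-concept, per-presentation voxel
# activations. Shapes and group sizes are illustrative, not the study's.
n_subjects, n_concepts, n_reps, n_voxels = 34, 30, 6, 500
rng = np.random.default_rng(0)
responses = rng.standard_normal((n_subjects, n_concepts, n_reps, n_voxels))
labels = np.array([0] * 17 + [1] * 17)   # 0 = control, 1 = suicidal ideation

# Step 1: stability selection. Keep voxels whose concept "tuning curves"
# correlate across repeated presentations (a simple stand-in for the
# paper's stability criterion).
def stability_scores(subject):            # subject: (concepts, reps, voxels)
    scores = np.empty(subject.shape[-1])
    for v in range(subject.shape[-1]):
        curves = subject[:, :, v].T       # (reps, concepts) tuning curves
        corr = np.corrcoef(curves)        # rep-by-rep correlation matrix
        scores[v] = corr[np.triu_indices_from(corr, k=1)].mean()
    return scores

# Step 2: for each subject, average the most stable voxels over
# presentations to get one pattern per concept, then flatten the patterns
# into a single feature vector per subject.
n_keep = 50
features = []
for s in range(n_subjects):
    keep = np.argsort(stability_scores(responses[s]))[-n_keep:]
    features.append(responses[s][:, :, keep].mean(axis=1).ravel())
X = np.array(features)

# Step 3: train and evaluate a classifier with leave-one-subject-out
# cross-validation.
accuracy = cross_val_score(GaussianNB(), X, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.2f}")
```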

They then looked at differences in underlying brain activity related to emotional signatures organized around concepts such as anger, sadness, shame, and pride, which yielded the highest classification accuracy when used as "neurosemantic signatures" to guide the machine-learning models. For instance, among people with suicidal ideation who had attempted suicide, the concept of death was associated with significantly lower levels of sadness than among those with suicidal ideation who had not made an attempt; likewise, the concept of "lifeless" was associated with greater anger in people who had attempted suicide. They used regression analysis to build a predictive model (a "classification algorithm") and tested it on separate subjects to determine whether it could distinguish those with suicidality (with and without a tendency to attempt suicide) from those without suicidality.
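
The emotion-signature scoring step can be sketched as a simple regression: a concept's neural pattern is expressed as a weighted combination of per-emotion signature patterns, and the fitted weights become features for the classifier. Everything below (the shapes, the random "signatures," the variable names) is an illustrative assumption, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500

# Assumed per-emotion "neurosemantic signature" patterns (in practice these
# would come from an independent reference sample), one column per emotion.
emotions = ["anger", "sadness", "shame", "pride"]
signatures = rng.standard_normal((n_voxels, len(emotions)))

# Neural pattern evoked by one concept (e.g., "death") in one subject;
# random here purely for illustration.
death_pattern = rng.standard_normal(n_voxels)

# Least-squares regression: how strongly is each emotion signature
# expressed in this concept's representation? The resulting weights can
# then feed a classifier comparing, say, attempters with non-attempters.
weights, *_ = np.linalg.lstsq(signatures, death_pattern, rcond=None)
for emotion, weight in zip(emotions, weights):
    print(f"{emotion:8s} {weight:+.3f}")
```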

They found that the model they developed, based on neural semantic representations measured with fMRI, was able to accurately distinguish those with suicidal ideation from controls, and furthermore was able to differentiate attempters from non-attempters among those with suicidal ideation.

Being able to differentiate between attempters and non-attempters is important clinically, and it also shows that the underlying neurobiology of how we think about death, suicide, and various positive and negative emotional states is measurably and significantly distinct. They found that the specific concepts of "death," "cruelty," "trouble," "carefree," "good," and "praise" were represented differently in people with suicidal ideation, and that among those with higher suicidal ideation in depression there was a cognitive triad comprising pessimistic views of oneself, the world, and the future. In attempters compared with non-attempters, the greatest alterations were found in the concepts of "death," "lifeless," and "carefree."

The sample size was relatively small, the technology is far from being a widely available clinical tool, and the approach requires testing with other populations to see whether the results hold across a broader range of ages and psychiatric conditions. Nevertheless, this study and the tool the authors developed show great promise as proof of concept that machine learning can be used to analyze neural representations of meaning and develop predictive models of behavior. While this type of work is of great interest for detecting and preventing suicide, the application of artificial intelligence to decode meaning and intent in the human mind will have far-reaching impact beyond medicine, into legal and criminal evaluation, social science, economics, and human social reality, as mind and machine continue to merge and co-evolve.

References

Just, M. A., Pan, L., Cherkassky, V. L., McMakin, D. L., Cha, C., Nock, M. K., & Brent, D. (2017). Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nature Human Behaviour, October 30. doi:10.1038/s41562-017-0234-y
