Mention strong words such as “death” or “praise” to someone who has suicidal thoughts and chances are the neurons in their brain activate in a totally different pattern than those of a non-suicidal person. That’s what researchers at the University of Pittsburgh and Carnegie Mellon University discovered, and trained algorithms to distinguish, using data from fMRI brain scans.
The scientists published the findings of their small-scale study Monday in the journal Nature Human Behaviour. They hope to study a larger group of people and use the data to develop simple tests that doctors can use to more readily identify people at risk of suicide.
Suicide is the second-leading cause of death among young adults, according to the U.S. Centers for Disease Control and Prevention. But predicting a suicide attempt is challenging. Current methods rely on patients self-reporting through a questionnaire or in an interview, which is often unreliable. Therapists might miss the signs, a patient who seems okay at the time of the interview might change later, or a patient might lie, says David Brent, an endowed chair in suicide studies at the University of Pittsburgh in Pennsylvania and a collaborator on the report. “The patient may have reason not to be truthful—they don’t want to be hospitalized,” he says. “All of those factors conspire against an accurate prediction.”
Brain scans, however, are quite telling, especially when analyzed with an algorithm, Brent and his colleagues discovered. “We’re trying to figure out what’s going on in somebody’s brain when they’re thinking about suicide,” says Brent.
These scans, taken using fMRI, or functional magnetic resonance imaging, show that strong words such as “death,” “trouble,” “carefree,” and “praise” trigger different patterns of brain activity in people who are suicidal compared with people who are not. That means that people at risk of suicide think about those concepts differently than everyone else—evidenced by the levels and patterns of brain activity, or neural signatures.
Identifying those neural signatures required machine learning algorithms (in this case, Gaussian Naive Bayes classifiers). “It would be extremely difficult for a human being to discern a pattern of differences” between one group of people and another, which involves calculating activation levels of multiple elements in multiple areas of the brain, says Marcel Just, a professor of psychology at Carnegie Mellon University in Pittsburgh, who collaborated on the project. “I don’t know the exact amount of time for the classifier to do this,” Just adds, “but it is probably on the order of seconds.”
For the study, the researchers recruited 34 volunteers between the ages of 18 and 30—half of them at risk, and the other half not at risk, of suicide. They showed the participants a series of words related to positive and negative facets of life, or words related to suicide, and asked them to think about those words.
Then the researchers recorded, with fMRI, the cerebral blood flow in the volunteers as they thought about those words, and fed the data to the algorithms, indicating which volunteers were at risk of suicide and which weren’t. The algorithms then learned what the neural signatures in the brain of a suicidal person tend to look like.
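The training step described above, fitting a Gaussian Naive Bayes classifier to labeled brain-activity data, can be sketched in a few lines of Python. This is an illustrative toy, not the study’s actual pipeline: the feature values below are synthetic stand-ins for real fMRI activation patterns, and the group sizes simply mirror the study’s 17-and-17 split.

```python
# Illustrative sketch only: the study's real inputs are fMRI activation
# patterns evoked by concept words; here synthetic vectors stand in.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Pretend each row is one participant's activation levels for a set of
# concept words across several brain regions (hypothetical features).
n_features = 30
at_risk = rng.normal(loc=0.5, scale=1.0, size=(17, n_features))
not_at_risk = rng.normal(loc=-0.5, scale=1.0, size=(17, n_features))

X = np.vstack([at_risk, not_at_risk])
y = np.array([1] * 17 + [0] * 17)  # 1 = at risk, 0 = not at risk

# Gaussian Naive Bayes models each feature as normally distributed
# within each class and combines features assuming independence.
clf = GaussianNB().fit(X, y)
```

Because the classifier only estimates a per-class mean and variance for each feature, it can cope with far more features than participants, which is one reason this family of models suits small neuroimaging samples.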
Next, they tested the algorithms by giving them new neural signatures to see how well they could predict, based on learning from other subjects, whether someone was suicidal or not. The classifier did it with 91% accuracy. Separately, the classifier was able to identify, with 94% accuracy, which volunteers had actually made an attempt at suicide, versus having only thought about it.
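With only 34 participants, accuracy figures like these are typically obtained by holding out one subject at a time, training on the rest, and averaging over all folds. The sketch below shows that leave-one-out procedure on synthetic data; whether the study used exactly this protocol is an assumption here, and the 91% figure comes from the paper, not from this code.

```python
# Hedged sketch of leave-one-out evaluation on synthetic stand-in data;
# the study's actual validation protocol may differ.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_features = 30
X = np.vstack([
    rng.normal(0.5, 1.0, size=(17, n_features)),   # at-risk group (synthetic)
    rng.normal(-0.5, 1.0, size=(17, n_features)),  # control group (synthetic)
])
y = np.array([1] * 17 + [0] * 17)

# Train on 33 participants, test on the held-out one, repeat 34 times;
# the mean of the per-fold scores is the reported accuracy.
scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
accuracy = scores.mean()
```

Leave-one-out validation matters in studies this small because it lets every participant serve as an unseen test case while still training on nearly the full sample.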
Putting someone in an fMRI machine to find out if they are suicidal is probably not practical, Just and Brent say. Instead, they hope to use the data to develop inexpensive tests or questionnaires that can assess suicide risk more reliably than current methods.
For example, this study linked certain emotions with suicide. “If those turn out to be reliable pairings...you could explore that with people and potentially decouple the emotion from suicide. Or you could use it to monitor their therapeutic progress and more precisely target psychotherapy,” Brent says. But first, they’ll have to conduct a larger study to confirm that those emotions and trigger words do reliably pair with suicide.
Other groups have also presented promising approaches to suicide assessment that employ computer engineering technology. John Pestian at Cincinnati Children’s Hospital and Louis-Philippe Morency have developed technology that can assess linguistic and acoustic patterns in a person’s voice that indicate suicide risk. And there have been a dozen or so projects that use machine learning to mine electronic health records to predict suicide. One of those techniques, developed by Colin Walsh at Vanderbilt University in Nashville, retrospectively analyzed medications, injuries, and natural language in electronic health records, and predicted which patients attempted suicide and when, within about a week of the event.
Marcel Just is working on a new study similar to the fMRI research, in which neural signals related to key suicide-related emotions are measured with EEG, or electroencephalography. The technology is far less expensive and more practical than fMRI for identifying suicide risk.
“It would be enormously helpful if that worked,” says Just. He is six months into a two-year grant for the project, he says.