Algorithms are just as good as human examiners at identifying signs of mental health problems in texts

UW Medicine researchers have found that algorithms are as good as trained human evaluators at identifying warning signs in text messages from people with serious mental illness. The finding opens a promising line of study that could help address shortages in mental health training and care.

The results were published in the journal Psychiatric Services at the end of September.

Text messaging is increasingly part of mental health care and assessment, but these remote psychiatric interventions may lack the emotional reference points that therapists use to guide face-to-face conversations with patients.

The research team, based in the Department of Psychiatry and Behavioral Sciences, used natural language processing for the first time to detect and flag text messages that reflect "cognitive distortions" which an undertrained or overworked clinician might miss. The research could also ultimately help more patients find care.

"When we meet people in person, we have all these different contexts. We have visual cues, we have auditory cues, things that aren't expressed in a text message. Those are things we lean on as clinicians. The hope here is that technology can give physicians an additional tool to expand the information they rely on to make clinical decisions."

Justin Tauscher, the paper's lead author and an assistant professor at the University of Washington School of Medicine

The study examined thousands of unique, unsolicited text messages exchanged between 39 people with serious mental illness and a history of hospitalization and their mental health providers. Human raters assessed the texts for various cognitive distortions, as they normally would in the course of patient care, looking for subtle or overt language suggesting that a patient is overgeneralizing, catastrophizing, or jumping to conclusions, all of which can be indicators of problems.
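
As a rough illustration only (the article does not describe the study's actual pipeline, so this is a hypothetical sketch, not the authors' method), a baseline for this kind of task can be built with off-the-shelf NLP tooling: represent each message as TF-IDF features and train a classifier on the human-rated labels. The messages and labels below are invented.

```python
# Hypothetical baseline for distortion classification; illustration only,
# not the pipeline used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, invented examples standing in for human-rated messages.
messages = [
    "I failed this once, so I'll fail at everything.",    # overgeneralizing
    "If I miss this appointment my whole life is over.",  # catastrophizing
    "She didn't reply, so she must hate me.",             # jumping to conclusions
    "The bus was late today.",                            # no distortion
]
labels = ["overgeneralizing", "catastrophizing", "jumping_to_conclusions", "none"]

# Word n-gram TF-IDF plus logistic regression is a common text-classification baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

# Predict the distortion category for a new, unseen message.
print(model.predict(["Nobody ever listens to me, and they never will."]))
```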

The researchers then programmed computers to perform the same evaluation task and found that human and machine ratings agreed closely in most of the categories studied.
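
"Rated similarly" is typically quantified with an inter-rater agreement statistic such as Cohen's kappa; the sketch below shows the calculation on invented ratings (the paper's actual metrics and data are not reproduced here).

```python
# Minimal sketch: quantifying human-vs-model agreement with Cohen's kappa.
# The ratings below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

human_ratings = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = distortion present, 0 = absent
model_ratings = [1, 0, 1, 0, 0, 0, 1, 0]

# Kappa corrects raw percent agreement for the agreement expected by chance.
kappa = cohen_kappa_score(human_ratings, model_ratings)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```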

"The ability to have systems in place that can support clinical decision-making is, I think, extremely relevant and potentially impactful for those in the field who sometimes don't have access to training, sometimes don't have access to supervision, or sometimes are just tired, overworked, burned out, and have a hard time staying present in all the interactions they have," said Tauscher, who came to research after a decade in clinical practice.

Assisting doctors would be an immediate benefit, but researchers also see future applications that work alongside a wearable fitness band or a phone-based monitoring system. Dror Ben-Zeev, director of the UW Behavioral Research in Technology and Engineering Center and co-author of the paper, said the technology could eventually provide real-time feedback that would alert a therapist to impending problems.

"In the same way that you get blood oxygen levels, heart rate and other inputs," Ben-Zeev said, "we may get a note that indicates the patient is jumping to conclusions and acting catastrophically. Just the ability to do that." "Raising awareness of a thought pattern is something we envision in the future. People will have these feedback loops with their technology through which they gain insights about themselves."

This work was supported by the Garvey Institute for Brain Health Solutions at the University of Washington School of Medicine, the National Institute of Mental Health (R56-MH-109554), and the National Library of Medicine (T15-LM-007442).

Source:

UW Medicine

Reference:

Tauscher, J. S., et al. (2022). Automated detection of cognitive distortions in text exchanges between clinicians and people with serious mental illness. Psychiatric Services. https://doi.org/10.1176/appi.ps.202100692
