Researchers at the Jinhua Advanced Research Institute and Harbin University of Science and Technology have developed a deep learning algorithm that can detect signs of depression in a person's speech, MedicalXpress reports.
This new artificial intelligence technology could be a significant step toward reducing suicides and other complications of untreated depression by making it easier for medical professionals to identify patients who need mental health support.
AI to Tackle Depression
AI tools are quickly making their way into healthcare, and computer scientists are looking into how these tools might be able to spot signs of physical and mental illnesses.
One of the most widespread psychiatric disorders is depression. According to data from the US Centers for Disease Control and Prevention (CDC), about 1 in 6 adults will experience depression at some point in their lives.
An estimated 16 million American adults are affected by depression each year. Depression can strike anyone, at any age and from any walk of life.
Meanwhile, the WHO notes that depression is a common illness worldwide, affecting an estimated 3.8% of the population, including 5.0% of adults and 5.7% of adults over the age of 60. Depression affects approximately 280 million people worldwide.
In response to this sobering reality, the researchers developed their new deep learning algorithm to detect signs of depression in a person's speech.
How the Algorithm Works
The researchers trained their deep learning model on the DAIC-WOZ dataset, a collection of audio recordings and 3D facial expression data from patients with and without depression.
In their paper, researchers Han Tian, Zhang Zhu, and Xu Jing state that "a multi-information joint decision algorithm model is established by means of emotion recognition." The model analyzes representative data from each subject to help determine whether that subject is depressed.
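The article does not describe the fusion step in detail, but a "joint decision" over several information sources is commonly implemented by encoding each modality separately and combining the results in a shared classification head. The sketch below is a hypothetical illustration of that pattern in PyTorch; the branch structure, layer sizes, and feature dimensions are assumptions for the example, not the authors' published architecture.

```python
# Hypothetical late-fusion classifier: one branch per modality
# (acoustic features, facial-expression features), concatenated
# into a joint representation for a binary depression decision.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class JointDecisionModel(nn.Module):
    def __init__(self, audio_dim: int = 88, face_dim: int = 136):
        super().__init__()
        # Per-modality encoders map raw feature vectors to a shared size.
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.face_branch = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        # Joint head makes a single decision from both modalities.
        self.head = nn.Sequential(
            nn.Linear(128, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # one logit: depressed vs. not depressed
        )

    def forward(self, audio: torch.Tensor, face: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.audio_branch(audio), self.face_branch(face)], dim=1)
        return self.head(joint)

# Usage with random stand-in data for a batch of 4 interview segments.
model = JointDecisionModel()
logits = model(torch.randn(4, 88), torch.randn(4, 136))
probs = torch.sigmoid(logits)  # per-subject probability of depression
```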
In the dataset's interviews, a virtual agent asked each participant about their life and mood, recording their voice and facial expressions as they answered.
Using openSMILE, an open-source toolkit for analyzing speech and music, the researchers extracted key acoustic features from the audio recordings and then reduced their dimensionality with principal component analysis (PCA).
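As a rough illustration of that pipeline, the sketch below uses the openSMILE Python bindings and scikit-learn. The file names, the eGeMAPSv02 feature set, and the 95% variance target are assumptions for the example, not details confirmed by the article.

```python
# Minimal sketch of the feature-extraction step: openSMILE acoustic
# features per recording, followed by PCA dimensionality reduction.
# File names and configuration choices are illustrative assumptions.
import opensmile
import pandas as pd
from sklearn.decomposition import PCA

# Extract one row of utterance-level (functional) acoustic features per file.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

# Hypothetical interview recordings; DAIC-WOZ provides WAV audio per session.
files = ["session_300.wav", "session_301.wav", "session_302.wav"]
features = pd.concat([smile.process_file(f) for f in files])

# Project the feature vectors onto enough principal components to keep
# 95% of the variance; the paper's exact target dimensionality is not
# stated in the article.
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(features.values)
print(reduced.shape)  # (num_recordings, num_retained_components)
```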
The deep learning algorithm performed well in tests, correctly identifying depression in 87% of male patients and 87.5% of female patients.
This encouraging result could spur the development of similar AI tools that spot symptoms of other psychiatric disorders in speech, a helpful addition to the toolkit of psychiatrists and other medical professionals.
The creation of this deep learning algorithm could be a big step forward in the fight against depression and other mental illnesses.
By enabling earlier and more accurate diagnoses, the technology could help get care to people in need and reduce the number of suicides.
Stay posted here at Tech Times.