A mental health helpline will no longer share data with an AI customer-support company following criticism of the relationship over sensitive information and conversations.
Crisis Text Line Ends Relationship with Loris.ai
Loris.ai, a conversational AI platform, promised that its AI-powered chat solution would help customer service representatives respond to customers efficiently and effectively based on their tone.
During its partnership with Crisis Text Line, a mental health text helpline, Loris.ai developed AI systems to help customer service agents better understand the sentiment in chats, using anonymized data collected from the helpline.
But in a statement to the BBC, the mental health helpline said it ended its data-sharing relationship with Loris on 31 January and asked the company to delete the data it had received.
CTL vice president Shawn Rodriguez also told the BBC that Loris.ai had not accessed any data since the beginning of 2020.
Moreover, Crisis Text Line said it had listened to complaints from the community about the relationship, and one CTL board member tweeted that the organization had been "wrong" to agree to it.
The community's concerns followed a report from Politico questioning the ethics of collecting sensitive data from conversations with a suicide hotline.
In its defense, Loris.ai said its AI has its origins in Crisis Text Line itself, where challenging conversations are critical, and draws on insights gleaned from studying nearly 200 million texts.
For its part, CTL maintains that all shared data is fully anonymized and stripped of identifying information, and that it has been transparent with its users about data sharing, right down to its terms and conditions.
Rodriguez emphasized that, after 6.7 million conversations, data and artificial intelligence remain integral to CTL's assistance to people in need of mental health support.
Data is used to identify at-risk individuals and get them help as quickly as possible, he said.
"And, data is used successfully to de-escalate tens of thousands of texters in crisis experiencing suicidal ideation," Rodriguez added.
Criticisms of the Partnership
Nevertheless, Politico spoke to several experts who were highly critical of the partnership in the first place. One questioned whether people with mental health issues could fully consent to sharing their data.
"CTL may have legal consent, but do they have actual meaningful, emotional, fully understood consent?" Jennifer King, privacy and data policy fellow at Stanford University's AI Institute, told Politico.
CTL itself also acknowledged that the partnership felt wrong, and it has taken steps to make things right.
In a statement on its website, CTL wrote that it had heard the community's feedback loud and clear: anyone in crisis should be able to understand what they are consenting to when reaching out for help.
This article is owned by Tech Times
Written by Thea Felicity