Can AI Improve Civility and Quality of Online Discussions? New Study Puts It to the Test

The study aims to combat online harassment and promote a more civil digital landscape.

Researchers at BYU and Duke University have embarked on a pioneering study to assess the potential of artificial intelligence (AI) in revolutionizing the quality and tone of online discussions. Their collaborative effort aims to combat online harassment and promote a more civil digital landscape.

The study, facilitated through a specially designed online platform developed by BYU undergraduate Vin Howe, employed a unique approach. Participants with contrasting viewpoints were paired to engage in an online chat addressing the contentious issue of gun control in American politics.


Can AI Ease Polarization?

Throughout the conversation, one participant would receive periodic prompts from an AI tool, offering suggestions to rephrase their messages in a more courteous and amicable manner, all without altering the core content.

Participants retained the autonomy to embrace, customize, or disregard the AI tool's recommendations. After the conversation, they were directed to a survey gauging the quality of the interaction.
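
For readers curious about the mechanics, the sketch below shows one way such a rephrasing suggestion could be generated with a large language model. It is a minimal illustration assuming an OpenAI-style chat completions API; the model name, prompt wording, and function name are assumptions for demonstration, not details drawn from the study's actual platform.

```python
# Minimal sketch of a courteous-rephrasing assistant.
# Assumptions: the OpenAI Python client (openai>=1.0), an API key in the
# environment, and an illustrative model name and prompt. None of these
# reflect the study's real implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_rephrasing(draft_message: str) -> str:
    """Ask the model for a more courteous version of a chat message
    while keeping its substance and viewpoint unchanged."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rephrase the user's message to be more polite and "
                    "respectful. Do not change its meaning, claims, or "
                    "political viewpoint."
                ),
            },
            {"role": "user", "content": draft_message},
        ],
    )
    return response.choices[0].message.content


# The participant would then accept, edit, or ignore the suggestion.
draft = "That's a ridiculous argument and you know it."
print(suggest_rephrasing(draft))
```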

Some 1,500 individuals participated in the experiment, accepting a total of 2,742 AI-generated rephrasings. The findings unveiled a promising shift in the dynamics of online interactions.

Chat partners of participants who accepted one or more AI rephrasing suggestions reported significantly better conversation quality. Intriguingly, they also showed an increased willingness to consider the perspectives of their political opponents.

David Wingate, co-author of the study and a computer science professor at BYU, emphasized that the AI-assisted rephrasings left the content and viewpoints of the conversations unaltered.

"We found the more often the rephrasings were used, the more likely participants were to feel like the conversation wasn't divisive and that they felt heard and understood," Wingate said in a press statement.

"But helping people have productive and courteous conversations is one positive outcome of AI," he added.

Countering the Toxic Online Culture

The research carries significant implications, offering a scalable way to counter toxic online culture. In contrast to conventional methods, such as limited training sessions led by expert moderators, an AI intervention could be deployed widely across digital platforms.

The study argues that by leveraging the capabilities of AI, online platforms could evolve into constructive forums where individuals from different backgrounds and perspectives discuss crucial matters with empathy and mutual respect.

This study ultimately emphasizes that when AI technology is thoughtfully integrated, it can significantly contribute to cultivating a more positive online environment.

"Though many are rightly concerned about the role of AI sowing social division, our findings suggest it can do the opposite-improve political conversations without manipulating participants' views," according to the study's abstract.

The findings of the research team were published in PNAS.

Tech Times