Google Funds Algorithm That Targets Internet Trolls For Banning

In today's world, the Internet is full of trolls: antisocial users who want nothing more than to disrupt conversation and make a nuisance of themselves.

However, it's hard not just to identify those users early on, but also to keep up with the sheer volume of disruption they cause on forums, in comment sections and on social media.

Now, though, a new study funded by Google could provide a solution: an algorithm that can not only identify trolls, but also target them for banning.

The 18-month study, conducted by researchers at Cornell and Stanford, identified troll behavior early with 80 percent accuracy, allowing sites to weed out users likely to troll by analyzing the online behaviors associated with such antisocial tendencies.

Most impressively, the study covered large, frequently trolled websites, including CNN's community of commenters, Breitbart.com and IGN.com.

The researchers looked at behaviors associated with trolling, the kind of behavior that results in permanent bans from such websites. The first thing they noticed was that trolls' posts were usually of lower quality than posts from normal users and often showed poor literacy skills. Trolls' posts also tended to include inflammatory language, with plenty of negative words and profanity.

Trolls also posted considerably more often than regular users. On CNN, users who would later be banned posted 264 times before being banned from the site, compared with only 22 posts from a typical user over the same period. Trolls also received more replies, probably because they are good at luring other users into pointless arguments.
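To make the approach concrete, here is a minimal sketch of how behavioral signals like these (post quality, inflammatory language, posting rate and replies received) might be turned into features for a classifier. The feature names, word list and readability proxy below are illustrative assumptions, not the study's actual implementation.

```python
# Illustrative sketch only: the feature names and the tiny word list are
# assumptions for demonstration, not the features used in the study.
import re
from dataclasses import dataclass

# Stand-in lexicon; a real system would use a full profanity/sentiment lexicon.
NEGATIVE_WORDS = {"stupid", "idiot", "hate", "garbage", "dumb"}

@dataclass
class Post:
    text: str
    reply_count: int  # replies this post received

def extract_features(posts: list[Post], days_active: float) -> dict:
    """Turn a user's early posts into the kinds of signals the study
    describes: text quality, inflammatory language, posting rate and
    replies received."""
    total_words = 0
    negative_hits = 0
    long_words = 0
    for post in posts:
        words = re.findall(r"[a-zA-Z']+", post.text.lower())
        total_words += len(words)
        negative_hits += sum(w in NEGATIVE_WORDS for w in words)
        long_words += sum(len(w) > 6 for w in words)
    return {
        # crude readability proxy: share of longer words
        "readability_proxy": long_words / max(total_words, 1),
        # inflammatory language: rate of negative/profane words
        "negative_word_rate": negative_hits / max(total_words, 1),
        # activity: posts per day (trolls post far more often)
        "posts_per_day": len(posts) / max(days_active, 1.0),
        # engagement: average replies drawn per post
        "avg_replies": sum(p.reply_count for p in posts) / max(len(posts), 1),
    }
```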

The result of the study was an algorithm that can identify a troll after just 10 posts, catching trolls with 80 percent accuracy before they can become a serious problem. The algorithm still needs tweaking, however: one in five users it flagged as potential trolls was not actually a troll.
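As a rough illustration of the prediction step, features like those sketched above can be fed to an off-the-shelf classifier trained on each user's first 10 posts. The logistic regression model and the synthetic data below are stand-in assumptions for demonstration; they are not the study's published pipeline.

```python
# Illustrative sketch: train a classifier on features from each user's
# first 10 posts to predict whether the user will later be banned.
# The model choice and the synthetic data are assumptions, not the
# study's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are users, columns are the four features
# sketched above (readability proxy, negative-word rate, posts per day,
# average replies). y = 1 means "later banned".
n_users = 1000
y = rng.integers(0, 2, n_users)
X = rng.normal(size=(n_users, 4))
X[y == 1] += 0.8  # banned users skew higher on these signals

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
preds = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, preds):.2f}")
```

With real data, the metric that matters is not just accuracy but the false-positive rate, since, as the article notes, one in five flagged users was not actually a troll, and banning the wrong user can make things worse.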

"A more fine-grained labeling of users (perhaps through crowdsourcing), may reveal a greater range of behavior," writes the study's authors. "Similarly, covert instances of antisocial behavior (e.g., through deception) might be significantly different than overt inflammatory behavior (Hardaker 2013); some users might surreptitiously instigate arguments, while maintaining a normal appearance."

However, the researchers noted that although their algorithm could prove useful in weeding out trolls, extreme action against such users can often make the situation worse, especially if someone who doesn't deserve it gets banned. In such cases, the researchers suggest that "a better response may instead involve giving antisocial users a chance to redeem themselves."

[Photo Credit: EFF | Flickr]

