YouTube AI Misinterprets Chess Chat Involving 'Black' And 'White' Pieces, Flags for Racism

YouTube's AI may have mistakenly blocked a chess streamer's channel over alleged racist remarks. (Photo: George Becker from Pexels)

It might seem unbelievable at first that a YouTube algorithm flagged a chess discussion as 'racist.' Yet that appears to be what happened to one chess YouTuber, whom the video-streaming company blocked over the alleged offense.

YouTube AI Could Have Mistakenly Perceived The Chess Chat As 'Racist'

The incident happened in June 2020, and at the time no one knew why the algorithm had blocked the streamer from posting his chess videos. All that was apparent was that the content had been flagged as 'harmful' and 'dangerous,' presumably because the system detected what it took to be hate speech in the discussion.

In a report by Daily Mail, Croatian chess enthusiast Antonio Radic, known on YouTube as "agadmator," said he was puzzled as to why he had been barred from all activity on the video-sharing platform. Two researchers from Carnegie Mellon University (CMU) ventured a guess at the mystery behind the confusion.

What makes the case intriguing is that YouTube never explained why Radic's channel was shut down; after 24 hours, it was restored as if nothing had happened. Offering a possible explanation, a project scientist suggested that Radic's viral interview with GM Hikaru Nakamura contained words the algorithm mistook for racist language.

Ashique KhudaBukhsh of CMU's Language Technologies Institute admitted that they have no idea what tool YouTube used to detect racist language in the discussion. The video, however, repeatedly mentioned "black" and "white," words that could easily be mistaken for racial language.

Furthermore, he added that if the incident could hit a popular YouTuber like Radic, the AI may quietly be doing the same thing to people who simply stream for fun. To test that idea, KhudaBukhsh and research engineer Rupak Sarkar ran two AI hate-speech classifiers over more than 680,000 comments collected from five chess-focused channels.

From that pool of nearly 700,000 comments, they manually reviewed a random sample of 1,000. They found that 82% of the sampled comments contained no hate speech at all; what the comments did contain were chess terms such as 'white,' 'black,' 'threat,' and 'attack,' which appear to be the missing key to the AI's behavior.

The takeaway is that a classifier's accuracy depends heavily on the examples it is trained on: a model whose training data includes little or no chess talk has no basis for telling game commentary apart from genuine hate speech.
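To make the failure mode concrete, here is a minimal sketch of the kind of test the researchers describe: scoring ordinary chess commentary with an off-the-shelf text classifier. The model identifier below is a hypothetical placeholder; it is not the tool YouTube uses, nor necessarily the classifiers from the CMU study.

```python
# A minimal sketch: scoring chess comments with an off-the-shelf
# text classifier via the Hugging Face pipeline API. The model id
# is a hypothetical placeholder, not YouTube's tool or the CMU
# study's classifiers.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/hate-speech-detector",  # hypothetical model id
)

# Ordinary chess commentary, rich in out-of-context trigger words.
chess_comments = [
    "White's attack on the kingside is winning.",
    "Black threatens mate in two unless White defends.",
    "After the trade, White is completely crushing.",
]

for comment in chess_comments:
    result = classifier(comment)[0]  # e.g. {'label': ..., 'score': ...}
    print(f"{result['label']:>12} ({result['score']:.2f})  {comment}")

# A classifier trained mostly on social-media text has seen 'white',
# 'black', 'attack', and 'threat' largely in hostile contexts, so
# harmless game talk like this can be flagged as hate speech.
```

Run over hundreds of thousands of comments, even a modest false-positive rate of this kind would produce a large number of wrongly flagged messages.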

Comparing A Past Case To What Happened To Radic

In a report by CMU News, KhudaBukhsh said he had encountered the same problem before. In an earlier project, the objective was to recognize 'active dogs' and 'lazy dogs' in a group of photos. The majority of the pictures with 'active dogs' showed grass, because that was where the dogs ran. As a result, the program sometimes treated any photo containing grass as an example of an 'active dog,' even when no dog appeared in the photo at all.

What happened to Radic closely resembles that case. If the training data includes only a few examples of chess talk, the classifier never learns that words like 'black' and 'white' are harmless on a chessboard, and wrong classifications follow.
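The parallel can be reproduced with a toy text classifier. The sketch below uses a handful of invented training sentences to show how surface vocabulary stands in for the real signal when in-domain examples are missing; it is an illustration only, not the researchers' actual data or models.

```python
# A toy illustration of the missing-training-data problem, using a
# bag-of-words Naive Bayes model and invented example sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set with no chess commentary in it.
texts = [
    "white people are superior",       # hateful (invented example)
    "black people should leave",       # hateful (invented example)
    "what a lovely day outside",       # benign
    "I enjoyed the movie last night",  # benign
]
labels = ["hate", "hate", "ok", "ok"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Innocent chess talk shares surface vocabulary with the hateful class,
# so the model flags it: the textual analogue of grass meaning 'dog'.
print(model.predict(["white attacks black on the queenside"]))  # ['hate']

# Adding in-domain chess examples teaches the model the benign context.
texts += ["white attacks black's king", "black defends with the knight"]
labels += ["ok", "ok"]
model.fit(texts, labels)
print(model.predict(["white attacks black on the queenside"]))  # ['ok']
```

Just as grass came to stand for 'active dog,' the words 'white' and 'black' stand for hate speech until the model sees them used on a chessboard.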

This article is owned by Tech Times.

Written by Joen Coronel

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.