It wouldn't be surprising to learn that YouTube's algorithm for automatically flagging videos has failed yet again. But this time, it has wrongfully flagged a video by none other than Google, which owns YouTube. It's like a palace firing a cannon at itself.
This algorithm is known to be troublesome. Its most recent issues involve some content creators suddenly having their videos demonetized for no apparent reason, but this latest blunder takes the cake.
YouTube Fail
Here's what happened: Google posted an advertisement for its new Chromebook Pixel that got flagged because the algorithm thought it was spam, according to The Next Web. Google has since fixed the error, but because everything on the internet is permanent, screenshots and even a video of the mistake remain.
It's not exactly clear how YouTube's flagging algorithm works, but it has produced some pretty infamous flubs. Even more worrying is that it may be failing at its core job of determining what's actually inappropriate, with traumatizing child-oriented videos managing to slip through the cracks, as a scathing report by The New York Times suggests.
There's something funny about YouTube's own algorithm taking down an official video from its parent company, yet the incident is as alarming as it is humorous. The fact that even official content from Google is getting removed points to a deeper problem. It's not that the algorithm is broken, but it is inefficient and perhaps even somewhat ineffective.
Was It A Glitch Or Is YouTube's Flagging Algorithm Just Terrible?
Now, it's possible that this was just a minor glitch, but one can't help but think YouTube's algorithm is inherently flawed. It also speaks to how good, or how bad, Google is at moderating content on YouTube. Is machine learning to blame here? Maybe. Or perhaps the problem is simply that Google relies on it too much.
Needless to say, the flagging algorithm on YouTube has plenty of room for improvement. While it's great that Google builds tools to actively weed out videos that violate its policies, it needs to make sure the system doesn't take down innocent videos by mistake. If anything, this blunder suggests that machine learning, in its present state, still can't be relied on without human oversight. If Google itself isn't safe from a faulty algorithm, then who is?
Thoughts about YouTube's flagging algorithm? Do you think this was simply a mistake, or does it point to a more serious problem with Google's automatic flagging system? As always, feel free to sound off in the comments section below!