Meta laid out an anti-misinformation strategy for the Midterm Elections, but its effort to beat fake news appears to have fallen short. Researchers submitted 20 test ads containing threats against election workers, and Facebook approved 15 of them.
(Photo: Michael M. Santiago/Getty Images) People wait in line to cast their ballots during the Midterm Elections at Fox Theatre on November 8, 2022 in Atlanta, Georgia.
Facebook's Automatic Moderation System During Midterm Elections
Ahead of the Midterm Elections, Facebook stated that it would not allow content that threatens serious violence. To test that claim, researchers from New York University's Cybersecurity for Democracy and the watchdog Global Witness submitted 20 test ads to see how Facebook would handle them.
The ads contained threats against election workers, described by The New York Times as 'threatening to lynch, murder, execute,' written in clear language that should have been easy to detect.
Fifteen of the 20 ads were approved by the platform ahead of last month's US elections. The researchers deleted the approved ads before publishing the study.
Ten of the ads were submitted in Spanish, and six of those were approved; the other ten were in English, and nine of those were approved. The Register reported that the test ads were based on real threats made against election workers, which the researchers modified to make them more readable.
Researchers explained, "We removed profanity from the death threats and corrected grammatical errors, as in a previous investigation Facebook initially rejected ads containing hate speech for these reasons and then accepted them for publication once we'd edited them."
This suggests that relying on AI moderation to combat misinformation and hate speech is risky for a platform of this size. It also means some ads may not be caught until they are already visible to the public, which is exactly what automated review is meant to prevent.
Meta's Statement
A Meta spokesperson stated that the accounts that submitted the test ads were later disabled, and clarified that ads remain subject to review even after they go live.
The spokesman added, "This is a small sample of ads that are not representative of what people see on our platforms. Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta's ability to deal with these issues effectively exceeds that of other platforms."
Ad Testing on Other Platforms
Engadget reported that the researchers also tested platforms other than Facebook, including TikTok and YouTube. Both platforms blocked all of the threatening ads and banned the accounts that submitted them.
TikTok rolled out a feature that monitors violations such as harmful deepfakes, incitement to violence, misinformation, and harassment of election workers. The Elections Center feature is part of TikTok's effort to fight election misinformation on the platform.
This also marked a significant improvement for YouTube, which had been tested before during Brazil's election. In that earlier experiment, YouTube and Facebook allowed all election misinformation ads to be posted on an initial pass, while Facebook rejected 50% of the follow-up submissions.
Related Article: Election Integrity will be Twitter's Top Priority Despite Mass Layoffs, Head of Safety Confirmed