A day ago, the FBI warned the public about fake videos related to Election Day. The warning is alarming because it has become increasingly difficult to tell real content from fake.
Now the social media giants Meta, TikTok, X (formerly Twitter), and YouTube are coming under increasing pressure as they prepare to fight misinformation campaigns aided by artificial intelligence. Their ability to handle misinformation was first put to the test during the 2016 presidential election.
Meta Invests $20 Billion in Election Security
Since 2016, Meta has spent more than $20 billion on safety and security measures related to elections, such as reducing political content on Instagram and Threads, CNBC writes.
To fight misinformation, Meta has taken steps to work with fact-checkers, promote voting resources from credible sources, and identify AI-generated content. These investments speak to the strength of Meta's commitment to keeping its platforms safe during elections.
The company has over 40,000 employees focused on election security and is working with 11 independent fact-checkers, including PolitiFact and USA Today. However, Meta is no longer working with The Associated Press after a previous agreement ended.
In addition, Meta is taking steps in the form of in-app notifications and a Voting Information Center to ensure users have accurate voting information. Users can find this information by searching for election-related content.
Meta maintains strict policies on misinformation about voting and election interference and has removed content that violates them. The company is also adding visible and invisible watermarks to AI-generated content so it can be distinguished from original posts.
If AI-generated content is highly deceptive, an additional warning label is applied to it.
Election Preparedness Efforts by TikTok
TikTok has earmarked more than $2 billion for trust and safety this year, with a portion aimed specifically at election integrity. The app works with the nonprofit Democracy Works through its U.S. Election Center to provide users with verified voting information. Views of the Election Center hit 7 million by September 4, showing it is heavily used as an election-related destination.
TikTok is working with independent fact-checkers, labeling unverified information, and training its own moderators to spot the specific signs of fake content. It has also banned political advertisements and AI-generated content designed to manipulate users, while asking creators to properly label realistic AI-generated content.
TikTok has also moved against covert influence operations, banning more than 3,000 accounts reportedly linked to political discourse manipulation in early 2023.
Because TikTok is owned by Chinese company ByteDance, a law signed by President Biden in April 2024 requires ByteDance to divest the app within a set timeframe. TikTok has challenged the law, and its future in the U.S. remains uncertain.
X's New Election Integrity Strategy
Ahead of the elections, X has been collaborating with electoral officials, law enforcement, and other security agencies to strengthen its platform. Its safety team has been proactive in tracking and disrupting networks of false accounts, particularly those tied to influence operations. The platform also relies on Community Notes, a feature that lets users add context to potentially misleading posts.
Since Elon Musk acquired X in 2022, the company has made deep staff cuts, including to its trust and safety team; by January 2023, only a handful of full-time workers remained in that division. According to X, the company continues to enforce its content policies while updating them to address emerging risks.
X allows political advertising but prohibits activities that could potentially manipulate civic processes, including encouraging citizens not to vote or misleading them on how to vote.
YouTube's Approach to AI-Generated Content and Election Information
YouTube has taken several steps toward election integrity, including fighting AI-generated misinformation and promoting credible content. The platform now lists authoritative news resources on voting on its homepage and places panels with candidate information above search results for election-related topics.
YouTube will pause political ads once the last polls close on Election Day and will surface real-time election results via Google links at that point. Google also enforces guidelines limiting election-related content that contains misinformation, conspiracy theories, or incitement to violence, labeling AI-generated content and, in some cases, removing posts that violate its policies.
YouTube also teams up with Google's Threat Analysis Group to block interference from foreign adversaries, all while striving to remain a trustworthy source of election information for users.
Election Initiatives from Snap, Reddit, and Others
Snapchat and Reddit are also pitching in for Election Day. Snap is delivering voter registration tools and reminders directly to users through an in-app partnership with Vote.org, while Reddit is educating its users on voting and pointing them to trusted sources for election information. The platform also hosts nonpartisan AMA sessions with experts.
Both networks permit political advertising but have implemented strict vetting processes. Reddit bans AI-generated content designed to deceive users, while Snap has partnered with fact-checkers to ensure the accuracy of political ads.