Meta, the parent company of Facebook, has announced significant changes to its policies regarding digitally altered media, particularly in light of the upcoming US elections.
The latest announcement comes amid concerns over the spread of misleading content, including deepfake videos and AI-generated material.
Meta to Start Labeling AI-Generated Content
(Photo: SEBASTIEN BOZON/AFP via Getty Images)
This picture, taken on March 25, 2024, shows the Meta (formerly Facebook) logo on a smartphone in Mulhouse, eastern France.
Starting in May, Meta will introduce "Made with AI" labels for AI-generated videos, images, and audio posted on its platforms, including Facebook, Instagram, and Threads. This expansion of the labeling policy aims to provide users with more transparency about the origin of such content.
Additionally, Meta will implement separate and more prominent labels for digitally altered media that pose a high risk of deceiving the public. This marks a shift in approach, moving from outright removal of such content to keeping it up while providing viewers with information on how it was created.
The decision follows criticism from Meta's oversight board, which described the existing rules on manipulated media as "too narrow." The board highlighted the need for a "broader but less restrictive" approach to addressing manipulated content, including videos altered without the use of AI.
Meta's Move to Address Manipulated Content
Speaking with Reuters, Monika Bickert, Vice President of Content Policy at Meta, noted that the changes are based on feedback from the oversight board and extensive consultations with experts and the public. Bickert emphasized the importance of transparency and context in combating manipulated media.
Meta's decision also reflects broader concerns about the impact of AI technologies on online discourse, particularly in the context of elections. With political campaigns increasingly utilizing AI tools, there is a growing need for platforms like Meta to effectively police deceptive content.
The decision also follows global consultation with stakeholders, including public opinion research commissioned by Meta that surveyed more than 23,000 respondents across 13 countries. A large majority favored warning labels on AI-generated content, particularly for content depicting people saying things they did not actually say.
The company's updated policy will continue to enforce community standards, removing content that violates guidelines on voter interference, harassment, violence, and incitement. Fact-checking processes will also remain in place to identify and demote false or altered content.
Meta's revised approach to deceptive content aims to strike a balance between freedom of expression and the need to protect users from harmful misinformation. By providing users with more information about the origin and nature of the content, the company hopes to empower them to make informed decisions while navigating its platforms.
Stay posted here at Tech Times.