Meta's Oversight Board Targets Deepfake Porn, Focuses on 2 Specific Cases

This move follows recent incidents involving the dissemination of AI-generated explicit images.

Meta's oversight board has announced a reexamination of the social media giant's policies regarding deepfake porn, focusing on two specific cases.

This move by what is often called Meta's "supreme court" for content moderation disputes follows recent incidents involving the dissemination of AI-generated explicit images of celebrities.

(Photo: The META logo on a laptop screen in Moscow, October 28, 2021, as Mark Zuckerberg announced the parent company's rebranding to "Meta." KIRILL KUDRYAVTSEV/AFP via Getty Images)

Meta Tackles AI-Generated Explicit Images

The Oversight Board has chosen to assess two cases involving images shared on Instagram and Facebook, aiming to evaluate the effectiveness of Meta's policies and enforcement practices in addressing explicit AI-generated imagery.

While the Board has the authority to make recommendations regarding Meta's deepfake porn policies, any actual changes lie within the purview of the tech firm itself.

The Board prioritizes cases with potential global impact, critical importance to public discourse, or those that raise significant questions about Meta's policies.

The cases under consideration involve explicit AI-generated images of female public figures: a user appeal seeking removal of content on Instagram and a user appeal seeking restoration of content on Facebook.

These cases involve decisions made by Meta on Instagram and Facebook, which the Oversight Board intends to address together. For each case, the Board will determine whether the content should be allowed on the respective platform.

AI-Generated Nude Woman on Instagram

The first case revolves around an AI-generated image of a nude woman resembling a public figure from India, posted on Instagram.

Users reported the image as pornography, but Meta's automated systems closed the reports without review. Subsequent appeals led to the Board's intervention, and the content was removed for violating community standards.

The second case focuses on an AI-generated image of a nude woman with a man groping her breast, posted to a Facebook group dedicated to AI creations.

Meta's policy experts upheld the company's initial decision to remove the content, citing violations of its Bullying and Harassment policy. Despite the user's appeal, the content stayed down.

The Board's selection of these cases aims to assess Meta's policies and enforcement practices regarding explicit AI-generated imagery, aligning with its Gender strategic priority.

The Board invites public comments addressing the harms posed by deepfake pornography, especially to women, including those who are public figures.

Commenters are also asked to weigh in on the prevalence and use of deepfake pornography globally, particularly in the United States and India.

The Board also encourages comments proposing strategies Meta could use to combat deepfake pornography, including effective policies and enforcement processes.

Meta's enforcement of its rules against derogatory sexualized content, including its reliance on automated systems and the challenges those systems pose for content moderation, is likewise open for public comment.

The Oversight Board may issue policy recommendations to Meta, to which the company must respond within 60 days. Therefore, the Board welcomes relevant public comments proposing recommendations for these cases.

Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.