Human Rights Watch Slams Meta's 'Systemic' Censorship of Israel-Palestine Conflict

Report exposes 1,050 content removals favoring Israel on Facebook and Instagram.

Human Rights Watch has accused Meta, the parent company of Facebook and Instagram, of "systemic" censorship during the ongoing Israel-Palestine conflict in a report released today.

The report sheds light on Meta's policies and practices, claiming that they have silenced pro-Palestine voices on Instagram and Facebook.

(Photo: This illustration photograph, taken on October 30, 2023, shows the Meta (formerly Facebook) logo on a smartphone in Mulhouse, eastern France. SEBASTIEN BOZON/AFP via Getty Images)

The Extent of Censorship

The wave of increased censorship coincides with the outbreak of hostilities between Israeli forces and Palestinian armed groups in October.

According to the report, an estimated 1,200 people were killed in Israel, primarily as a result of a Hamas-led attack on October 7, while over 18,000 Palestinians were killed, primarily as a result of intense Israeli bombardment.

Between October and November 2023, Human Rights Watch documented over 1,050 cases of content removal and suppression on Instagram and Facebook.

A staggering 1,049 of these cases involved the removal of peaceful content in support of Palestine, while only one involved the removal of content in support of Israel. The cases came from over 60 countries and were mostly in English, demonstrating the global scale of this issue.

The report identifies six key patterns of excessive censorship, each recurring at least 100 times. These include the removal of posts, suspension or permanent disabling of accounts, restrictions on engagement with content, limitations on following or tagging other accounts, constraints on certain features, and the significant decrease in visibility known as "shadow banning."

Meta's Alleged Inconsistencies

Critically, the analysis pinpoints four systemic factors contributing to Meta's censorship. Flaws in Meta's policies, especially the Dangerous Organizations and Individuals (DOI) policy, are highlighted. The DOI policy, intended to keep organizations with violent missions off Meta's platforms, is criticized for its sweeping bans on vague categories of speech, impacting discussions around Israel and Palestine.

Inconsistent and opaque application of Meta's policies, apparent deference to government requests, and heavy reliance on automated tools for content removal are also identified as contributing factors. Over 300 cases documented instances where users were unable to appeal restrictions on their accounts, leaving them without access to an effective remedy.

The report places Meta's behavior in the context of historical overreach, with a documented record of broad crackdowns on content related to Palestine. Despite promises to address such issues, Meta's inconsistent enforcement and failure to meet human rights due diligence responsibilities are emphasized.

A History of Censorship

This is not the first time Meta's actions have faced scrutiny. In 2022, a review by Business for Social Responsibility (BSR) of Meta's content moderation during the May 2021 hostilities found adverse human rights impacts on Palestinian users. Meta's failure to deliver on commitments made after that review underscores ongoing concerns.

Human Rights Watch concludes the report with a series of recommendations for Meta, urging an overhaul of the DOI policy, improved transparency on government requests, and collaboration with civil society to set targets for implementing commitments.

The report calls for Meta to align its content moderation policies with international human rights standards and permit protected expression on its platforms.

Stay posted here at Tech Times.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.