Meta's COVID-19 policies may look effective at stamping out misinformation about the pandemic, but a new study suggests the picture is more complicated than it appears.
With anti-vaxxers continuing to promote COVID-19 myths and unsafe prevention practices, Facebook needs stronger methods to address the problem.
COVID-19 Misinformation Still Bypasses Facebook's Policies
The battle against harmful misinformation online remains an exhausting yet critical task. Major platforms like Meta have made substantial efforts to combat the spread of false information, particularly concerning COVID-19 vaccines. But the lingering question is: did these efforts yield the desired results?
A recent study published in Science Advances casts doubt on the efficacy of Meta's COVID-19 policies.
While Meta's decision to remove certain content did lead to a decrease in overall anti-vaccine content on Facebook, the study suggests that engagement with such content might have merely shifted rather than diminished.
Using data from CrowdTangle, the study tracked content from public pages and groups categorized as pro- and anti-vaccine sources.
Alarmingly, the data reveals that anti-vaccine influencers are adept at evading enforcement at every level of Facebook's infrastructure, leveraging the platform's content amplification mechanisms and extensive cross-platform networks to ensure their followers can continue to access their content.
Anti-Vaccine Posts on Facebook Are Thriving
Health misinformation on social media thrives on adaptability. It continually changes keywords, employs euphemisms, and redirects believers to newer, less moderated platforms.
Anti-vaccine communities have honed these strategies over the years, enabling them to stay visible despite platform crackdowns.
Anti-vaccine content remains resilient because of its networked nature, according to Vox. Public anti-vaccine pages often link to one another and are sometimes run by the same influencers.
When one page is taken down, others step in seamlessly. The network also guides members of banned groups to the next version of the group, or diverts them to platforms more receptive to conspiracy theories.
Members of these communities understand the importance of engagement. Anti-vaccine influencers actively seek likes and shares to enhance their visibility on Facebook, and their followers oblige, perpetuating the cycle.
When moderation efforts lead people to seek anti-vaccine content elsewhere, it can inadvertently pull them into more extreme spaces. Researchers observed an uptick in links to alternative social media platforms like BitChute, Rumble, and Gab, favored by far-right and white supremacist users.
"There's a broader ecosystem and there's demand for this content," David Broniatowski, one of the study's authors said.
He added that a particular post or article can be removed from Facebook. However, content with changes in engagement can signal that some people are still looking to find out more information about it.
To tackle this challenge effectively, experts suggest rethinking the entire information ecosystem. They recommend treating platforms like Meta as governed structures with science- and safety-informed codes, rather than as open mics with vague conduct guidelines.
The study's data offers only a limited glimpse into this complex ecosystem. It covers public Facebook spaces identified by vaccine-related keywords, excluding private groups and posts that rely on coded language. It also doesn't capture what users encounter once they leave Facebook.
Aside from Meta, Twitter (now known as X) and YouTube had previously tried to combat COVID-19 misinformation by deleting posts and videos that violated their policies.