A recent study has revealed that a small number of accounts are responsible for the majority of fake news spread on the social media platform X, formerly known as Twitter (via Phys.org). 

Conducted by a team of social media analysts at Indiana University, the study shines a light on the outsized role of "superspreaders" in disseminating misinformation online.

The Dangers of Online Misinformation 

The researchers focused on the spread of misinformation, recognizing its severe implications for society. Misinformation can undermine trust in democratic processes and endanger public health. 

Notable examples include false claims about the 2020 US presidential election, which contributed to the January 6th Capitol riot, and misleading information about COVID-19, which caused confusion about health measures and vaccines.

According to the World Health Organization (WHO), nearly 6,000 people worldwide were hospitalized due to coronavirus misinformation in the first three months of 2020 alone. Researchers estimate that at least 800 people may have died because of misinformation related to COVID-19.

In 2023, a top EU official identified X as the leading source of fake news and urged compliance with EU disinformation laws.

(Photo: Leon Neal/Getty Images) A web browser displays X.com while loading www.twitter.com on May 17, 2024, in London, England, the day the platform formerly known as Twitter transitioned to X.com URLs. Elon Musk acquired Twitter in 2022 and promptly rebranded the company as X.

Twitter's Misinformation Superspreaders

Superspreaders are users who share a disproportionate amount of false information. The study found that a very small number of users are responsible for spreading the majority of this misinformation. 

For instance, during the 2016 U.S. election, just 0.1% of Twitter users were responsible for sharing 80% of the false information. Similarly, during the COVID-19 pandemic, 12 accounts were identified as responsible for nearly two-thirds of the anti-vaccine content online.

To better understand and identify these superspreaders, the team collected data over 10 months, analyzing 2,397,388 tweets posted by 448,103 users on X. They specifically looked for tweets flagged as containing low-credibility information.

The researchers tested various methods to predict which accounts would remain superspreaders, using metrics such as influence (how often an account's posts are reshared) and a modified version of the h-index, an impact score borrowed from academic publishing.
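The study does not publish the exact formula for its modified score, but the classic h-index it builds on is straightforward: an account has an h-index of h if at least h of its posts were each reshared at least h times. A minimal sketch, assuming reshares stand in for citations (the function name and data are illustrative, not from the study):

```python
def h_index(reshare_counts):
    """Classic h-index adapted to social media: the largest h such that
    the account has at least h posts, each reshared at least h times."""
    counts = sorted(reshare_counts, reverse=True)
    h = 0
    # Walk posts from most to least reshared; position i (1-based) is a
    # valid h only while the i-th post still has at least i reshares.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A hypothetical account whose five posts were reshared 10, 8, 5, 4, 3 times
# has an h-index of 4: four posts with at least 4 reshares each.
print(h_index([10, 8, 5, 4, 3]))
```

A score like this rewards sustained reach over one viral fluke, which is why it suits spotting accounts that repeatedly amplify low-credibility content rather than one-off posters.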


What They Found Out

A closer examination of the top superspreaders revealed that more than half were politically oriented accounts, including verified accounts, media outlets, personal accounts linked to those outlets, and influencers with around 14,000 followers. Superspreaders also tended to use more toxic language than typical users sharing false information.

Remarkably, the study discovered that approximately one-third of low-credibility tweets were posted by just 10 accounts. Moreover, 1,000 accounts were responsible for about 70% of such tweets. 

The study suggests that social media platforms like X might miss or ignore verified accounts with large followings that are major spreaders of misinformation. 

Following its rebranding and changes under Elon Musk's leadership, the platform has seen shifts in its approach to handling misinformation. 

Stay posted here at Tech Times.


Tech Times Writer John Lopez

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.