The European Union (EU) has taken aim at X (formerly Twitter) for its controversial paid verification system, deeming it a violation of the Digital Services Act (DSA). This decision comes after an ongoing investigation into X's practices regarding content moderation, advertising transparency, and data access for researchers.
X's Blue Checkmarks Under Scrutiny
The core issue lies in X's blue verification checkmarks, originally intended to confirm the identities of public figures and celebrities. Now, anyone can obtain this badge by subscribing to a premium service. The EU Commission argues that this practice misleads users, 9to5Mac reports.
"The design and operation of the blue verification checkmarks do not correspond to industry standards and deceive users. Since anyone can subscribe to become 'verified,' it hinders users' ability to make informed decisions regarding the authenticity of accounts and the content they encounter. There's evidence of malicious actors exploiting the 'verified account' status to deceive users, " the Commission stated.
Lack of Advertising Transparency
Another point of contention is X's advertising practices. The EU claims X fails to comply with essential transparency requirements for advertising. The platform lacks a searchable and dependable advertisement archive, making it difficult for users to understand the context and origin of the advertisements they see.
Limited Data Access for Researchers
The investigation also revealed shortcomings in X's data accessibility for researchers. The DSA emphasizes providing researchers access to public data for analysis. However, X appears to restrict this access by:
Prohibiting independent data scraping by qualified researchers, as outlined in its terms of service.
Implementing an application programming interface (API) access system that seems to "dissuade" researchers from conducting their projects (see the sketch after this list).
Possibly forcing researchers to pay "disproportionately high fees" for access.
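To illustrate the access route in question, the following is a minimal, hypothetical sketch of pulling public posts through the X API v2 recent-search endpoint. It is not taken from the Commission's findings or from X's documentation; it simply assumes the researcher already holds a paid access tier and a bearer token, which are the kinds of conditions (rate limits, fees) at issue above.

```python
# Hypothetical sketch: collecting public posts via the X API v2
# recent-search endpoint. Assumes a paid access tier and a bearer
# token; the environment variable name below is illustrative.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def fetch_recent_posts(query: str, max_results: int = 10) -> list[dict]:
    """Fetch recent public posts matching `query` (roughly the last 7 days)."""
    headers = {"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"}
    params = {
        "query": query,
        "max_results": max_results,  # 10-100 per request
        "tweet.fields": "created_at,public_metrics",
    }
    resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
    resp.raise_for_status()  # 401/403/429 surface missing tiers or rate limits
    return resp.json().get("data", [])

if __name__ == "__main__":
    for post in fetch_recent_posts("verified account scam -is:retweet"):
        print(post["created_at"], post["public_metrics"]["retweet_count"])
```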
Erosion of Content Moderation
The EU's findings come amid broader concerns about X's declining content moderation efforts since Elon Musk's takeover, according to a report by the Financial Times.
Critics highlight a decrease in content moderation staff and a lack of action against harmful content, including AI-generated terrorism imagery and conflict misinformation.
Potential Consequences for X
The Commission's preliminary findings have been sent to X, which now has the opportunity to respond in its defense. If the EU upholds its current stance, X faces repercussions including a fine of up to 6% of its global annual revenue, as well as mandated changes requiring it to address the identified violations.
This decision by the EU sets a significant precedent for social media platforms operating within the European market. It underscores the importance of prioritizing user trust and combating manipulation through clear and reliable verification systems, transparent advertising practices, and open data access for legitimate research.
Until Musk seriously addresses the spread of fake X accounts, the platform will remain subject to investigation.