Meta, the parent company of Facebook and Instagram, asserted that its bid to combat coordinated disinformation campaigns created via generative AI is working amid widespread concerns over the technology.
Meta's Evaluation of Coordinated Inauthentic Behavior
According to AFP, Meta's latest assessment of coordinated inauthentic behavior across its platforms comes amid escalating concerns that generative AI could be exploited to deceive or mislead people, particularly ahead of upcoming elections around the world, especially in the United States.
During Wednesday's press briefing, David Agranovich, Meta's threat disruption policy director, said that the industry's existing defenses, particularly its emphasis on analyzing account behavior rather than content, have so far proved effective in blunting the impact of such campaigns.
Agranovich noted that while misuse of generative AI has been observed, it has not yet been especially sophisticated. He cautioned, however, that adversarial networks will inevitably adapt their tactics as the technology advances.
Facebook has long been scrutinized for its role as a potent conduit for disseminating election-related disinformation. Now, the European Union (EU) is also investigating Meta's Facebook and Instagram platforms over alleged lapses in addressing disinformation in the lead-up to the EU elections in June.
Experts' concerns have grown over the prospect of a surge in disinformation campaigns across Meta's platforms by malicious actors, particularly those wielding generative AI tools.
Generative AI for Fictitious Profiles
According to Meta, malicious actors have used AI to fabricate images, videos, and textual content. However, the report noted a lack of realistic depictions of political figures.
The report cited examples such as AI-generated profile pictures for fictitious accounts across Meta's suite of apps, and promotional materials created for a fictitious pro-Sikh advocacy movement dubbed "Operation K" by a deception network operating from China.
Moreover, a network based in Israel reportedly posted AI-generated comments about Middle Eastern politics on the Facebook pages of media outlets and public figures. Meta likened the comments to spam and noted that genuine users often pushed back, dismissing them as propaganda.
Meta attributed the campaign to a political marketing firm headquartered in Tel Aviv. Mike Dvilyanski, Meta's head of threat investigations, remarked that adversaries have not yet used generative AI in disruptive ways, describing the current landscape as "exciting" albeit fraught with potential risks.
The report also highlighted the ongoing efforts of a Russia-linked network dubbed "Doppelganger" to use Meta's platforms to undermine support for Ukraine, efforts Meta's assessment deemed largely unsuccessful.
Meta also said it had removed small clusters of fake Facebook and Instagram accounts originating from China that targeted the Sikh community in countries including Australia, Canada, India, and Pakistan.