For several months, OpenAI has been working to combat disinformation created with its tools by state-linked actors from countries including Russia and Iran. In a recent report, the company revealed that it detected and disrupted accounts using its generative AI systems to run deceptive campaigns, none of which gained significant traction online.
OpenAI Thwarts AI Disinformation Campaigns
OpenAI released a new blog post detailing its latest report on how it disrupted AI-generated disinformation campaigns originating from Russia, Iran, Israel, and China. The post centers on OpenAI's detection of "covert influence operations" (CIOs) from these countries that aim to deceive and manipulate the public in various ways.
These CIOs appear to have been motivated by a desire to influence political disputes or sway public opinion, but OpenAI nonetheless moved to prevent the campaigns from succeeding.
According to OpenAI, actors in the four countries above ran five CIOs: Bad Grammar and Doppelganger from Russia, China's Spamouflage, Iran's International Union of Virtual Media (IUVM), and Zero Zeno, an operation run by STOIC, a commercial company based in Israel.
AI Tools for Online Disinformation
For three months, OpenAI closely monitored these bad actors as they used its AI tools in their deceptive push. The company said it disrupted all five CIOs attempting to launch deceptive activities online. As of its review earlier this month, the campaigns had not gained meaningful traction or reached their target audiences, thanks to OpenAI's efforts.
OpenAI's Efforts for Safer Generative AI Use
International adversaries and threat actors routinely launch massive misinformation campaigns against their rivals, and the rise of generative AI has made such campaigns easier to produce and spread. The previous US elections were hit by a misinformation campaign believed to be Russian, and OpenAI recently delayed the release of its voice cloning tool to keep it out of the hands of bad actors.
Recently, OpenAI also introduced its election misinformation policy, which aims to combat the use of its AI for deepfakes, impersonation, and misinformation, and to protect the sanctity of a user's vote.
Through various other efforts, the company has worked to improve the accuracy of the information its models generate and to avoid serving fake news or hallucinations to users searching for answers.
Despite OpenAI's efforts, bad actors can still access its AI to create campaigns, which could have had global ramifications had the company not stepped in. The latest report from the renowned AI company details how it prevented such covert influence operations from causing disruption, particularly those from countries seeking to manipulate global opinion with AI-generated campaigns.