Warnings about generative artificial intelligence continue to mount ahead of the upcoming elections, as seen in the most recent federal bulletin from the Department of Homeland Security.
The analysis, developed by the Department of Homeland Security and distributed to law enforcement partners nationwide, suggests that domestic and foreign actors could use the technology to pose significant obstacles leading up to the 2024 election cycle.
Federal bulletins are infrequent communications that alert law enforcement partners to particular threats and concerns.
According to the warning, AI capabilities could aid efforts to sabotage the 2024 U.S. election cycle, and a range of threat actors are expected to try to influence voters and sow unrest during it.
Generative AI techniques will probably give foreign and domestic threat actors more opportunities to meddle in the 2024 election cycle by exacerbating emergent events, interfering with election procedures, or targeting election infrastructure.
During a Senate Intelligence Committee hearing last week, Director of National Intelligence Avril Haines also cautioned Congress about the dangers of generative AI, stating that the technology can produce realistic "deepfakes" whose source can be hidden.
The bulletin also notes that the timing of election-related AI-generated media can be just as important as the content itself, since refuting or countering misleading information circulating online can take time.
The report also pointed to the threat abroad, noting that in November 2023 an AI-generated video urged voters in a southern Indian state to support a particular candidate on election day, leaving officials little time to refute it.
AI Giants Against AI Disinformation
Microsoft and OpenAI have recently established a $2 million fund to fight deepfakes and fraudulent AI content in response to the threat posed by GenAI.
Concerns over AI-produced misinformation have reached critical mass as an unprecedented 2 billion voters prepare to cast ballots in 50 countries this election year. Such misinformation often specifically targets vulnerable communities.
With the advent of generative AI technologies, such as chatbots like ChatGPT, the number of tools available for creating deepfakes has increased substantially. These open-source resources can be used to produce fake photographs, audio clips, and videos of public officials.
An open-source deepfake detection tool has also been made available to help researchers spot fraudulent content produced by the DALL-E image generator, and the newly established "Societal Resilience Fund" aims to advance the ethical use of AI.
Teresa Hutson, Microsoft's Corporate Vice President for Technology and Corporate Responsibility, emphasized the value of the Societal Resilience Fund to AI-related community initiatives and reiterated Microsoft and OpenAI's commitment to collaborating with other like-minded businesses to counter AI misinformation.
How Effective Is AI Deception?
Even though the quality of the newest AI technology is remarkable, most deepfakes quickly lose credibility, especially those originating from China and Russia as part of global influence campaigns.
Experts have reportedly observed that AI-generated content is unlikely to alter most people's political opinions, no matter how persuasive its images, videos, or audio may be.
Generative AI was widely used in recent elections in Pakistan and Indonesia, but there is no proof that it unfairly favored any particular candidate. Social media is flooded with content daily, making it difficult for AI-powered scams, even realistic ones, to gain traction.