As artificial intelligence tools grow more capable of generating a wide variety of content, experts are reportedly warning the public about their potential negative impact on the upcoming elections, the first since AI became mainstream.
The Associated Press reports that experts predict election misinformation will worsen in the upcoming presidential race. The safeguards put in place to combat false claims last time are fading, while the tools and systems that generate and spread them are growing more powerful.
Oren Etzioni, an artificial intelligence scientist and retired professor at the University of Washington, predicts a flood of disinformation, though he admits he cannot prove it. The former professor says he hopes to be proven wrong but believes it is likely because all of the "ingredients are there."
Deepfakes, or AI-generated images, videos, and audio clips, have already begun to appear in experimental presidential campaign advertisements.
According to Etzioni, more malicious versions could easily spread unlabeled on social media and confuse voters in the days before an election.
Etzioni says he can imagine generative AI making a political contender such as President Biden, or any other candidate, appear to say and do things they never actually said or did.
Election Misinformation Incidents
The Guardian reported a similar story back in July, noting that AI could become yet another tool for sowing misinformation during the upcoming elections. It cited the 2016 presidential election, in which far-right activists, foreign influence operations, and fake news sites used social media platforms to spread disinformation and deepen divides.
Four years later, the 2020 election was reportedly riddled with conspiracy theories and false allegations of voter fraud, which were broadcast to millions and fueled an anti-democratic effort to overturn the result.
According to PBS, by 2024 generative AI will be able not only to quickly produce tailored campaign emails, messages, or videos, but also to mislead voters, impersonate politicians, and undermine elections at a scale and speed never seen before.
Ben Winters, senior counsel at the Electronic Privacy Information Center, a non-profit dedicated to privacy research, told the Guardian that trust could erode further, making it more difficult for journalists and others to share accurate information.
Winters adds that AI-generated misinformation does nothing beneficial for the information environment.
Former President Donald Trump, campaigning for re-election in 2024, has shared AI-generated content with his social media followers. On Friday, Trump posted to his Truth Social platform a video altered with AI voice-cloning technology that twisted CNN anchor Anderson Cooper's reaction to Trump's CNN town hall.
OpenAI on AI Misinformation
OpenAI CEO Sam Altman has also reportedly acknowledged AI's potential threat to the upcoming elections, warning a Senate panel in Washington back in May that the models powering the current generation of AI technology could be used to influence consumers.
The broad ability of these models to influence and persuade, and to deliver one-on-one interactive misinformation, he added, is a significant source of concern.
Altman adds that regulation would be prudent, as people need to know whether they are conversing with an AI or whether the content they are viewing is AI-generated.