The New Hampshire Attorney General's Office said Thursday that a Louisiana political consultant has been charged over a fraudulent robocall that impersonated US President Joe Biden to deter Democratic primary voters.
Steven Kramer, 54, faces 13 felony voter suppression charges and misdemeanor counts of impersonating a candidate after thousands of New Hampshire residents received a robocall urging them not to vote until the November general election, according to Reuters.
On Thursday, the Federal Communications Commission (FCC) proposed a $6 million fine over the robocalls, which used an AI-generated deepfake of Joe Biden's voice. FCC rules prohibit transmitting inaccurate caller ID information. The agency also proposed a $2 million fine against Lingo Telecom, the carrier that transmitted the Biden robocall.
Washington worries that AI-generated content could mislead voters in the November elections. Ahead of the vote, some US senators have pushed for laws safeguarding election integrity against AI.
Attorney General John Formella reiterated New Hampshire's commitment to ensuring that the upcoming US elections "remain free from unlawful interference" and said the state's investigation is ongoing.
On Wednesday, FCC Chairwoman Jessica Rosenworcel proposed requiring disclaimers for AI-generated content in radio and TV political commercials, including candidate and issue ads, rather than banning it.
AI Deepfakes Expected to Shape 2024 US Elections, but Safeguards Lag
The FCC expects AI to play a major role in 2024 election advertising, notably through deceptive "deepfakes": manipulated photos, videos, or audio recordings that misrepresent people or events.
While the measure addresses AI-generated content in broadcast media, voters encounter deepfakes far more often online than in traditional media, and government rules for digital political advertisements are lacking.
According to a Wired story, Public Citizen petitioned the Federal Election Commission (FEC) to require disclosures for all political commercials regardless of medium, similar to the FCC's plan. The FEC has not responded, but a January Washington Post article reported that the agency would rule by early summer.
This month, the US Senate Rules Committee advanced three bills regulating AI in elections, including disclosure requirements, but they may not pass in time to have an impact.
US Rivals May Use AI Deepfakes to Influence 2024 Election: FBI
With 166 days until the US presidential election, time to implement AI disclosure rules is running short. The urgency grows as Biden, Donald Trump, and down-ballot candidates ramp up social media ad spending. In the meantime, tech companies will lead the fight against election disinformation without government rules.
Early this month, a top FBI official warned that foreign adversaries of the US might use AI to influence the 2024 elections and spread disinformation, calling the technology "likely to see growth over the coming years," as previously reported by TechTimes.
According to officials, AI technology gives bad actors more cover and enables foreign governments to mount more sophisticated election influence operations. Russia, Iran, and China top the FBI's election-year concerns, and officials have attributed each country's interference in US elections to different motives.
Intelligence officials say Russia hacked and leaked Democratic emails to boost Trump in 2016 and sought to aid him again in 2020. According to a recent intelligence community assessment, Russia tried to denigrate the Democratic Party ahead of the 2022 midterms in order to weaken US support for Ukraine and undermine voter confidence.
According to the report, China targeted critics of Beijing, covertly disparaging a US senator, and sought to influence a handful of races involving candidates from both major parties. The report also accused Iran of covertly exploiting social divisions.
FBI officials expect China to try to sow division in the 2024 elections and are weighing how the war in Ukraine may shape Russia's approach.