Local newsletter Arizona Agenda has reportedly used artificial intelligence (AI) and the likeness of Republican Senate hopeful Kari Lake to create a fake video, aiming to show readers and voters how easily deepfakes can fool people.
The AI deepfake begins as a ruse, telling viewers that Lake, a hard-right politician the Arizona Agenda has previously attacked, has recorded a testimonial about how much she loves the publication.
The video closes by warning viewers that they have just seen a preview of how AI could be used in the upcoming elections.
The newsletter cautioned that, with AI technology now so powerful and accessible, this fall's election will be the first in which anyone with a computer can produce convincing fake videos.
Kari Lake Responds
By Saturday, tens of thousands of people had watched the deepfake, and the real Lake was none too pleased: her campaign lawyers sent the Arizona Agenda a cease-and-desist letter.

The letter demanded that the deepfake videos be taken down immediately from every platform on which they had been shared or distributed and threatened to pursue every legal means at the campaign's disposal if the newsletter did not comply.
Agenda writer Hank Stephenson reportedly stated that it is "terrifyingly difficult" to identify bogus political content and "terrifyingly easy" to construct an AI-generated video of a politician.
In October, Lake, a former TV anchor who had previously run for governor of Arizona, announced her Senate candidacy.
Lake, one of the party's most prominent figures, is backed by former President Trump. She has refused to concede her loss in the 2022 gubernatorial contest and has spent the past few years suing the state over multiple election-related issues.
AI Deepfake Ban
According to the Washington Post, there are signs that AI, and the anxiety surrounding it, are already affecting the elections.
Late last year, the makers of an advertisement compiling former President Donald Trump's well-known public gaffes were wrongly accused of peddling AI-generated content.

Meanwhile, genuinely manipulated photos of Trump and other politicians, intended both to flatter and to embarrass, have repeatedly gone viral, sowing confusion at a pivotal moment in the election campaign.
With deepfakes on the rise ahead of the upcoming elections, the Federal Trade Commission (FTC) recently proposed expanding its ban on AI-driven impersonation.

The amendment is meant to protect all individuals against AI-generated impersonation, rather than only businesses and government organizations, as the current rule does.

The proposed changes respond to growing public concern about the harms of deepfake deception and fears of impersonation fraud.

The FTC is also weighing whether to prohibit businesses from knowingly providing goods or services that enable customers to deceive others through impersonation, which would broaden the scope of liability.