A proposed modification to the Federal Trade Commission's (FTC) deepfake ban is reportedly on the horizon, seeking to protect all consumers from AI-generated impersonation. Presently, the ban only covers businesses and government agencies.
According to the FTC, the proposed modifications are being finalized in light of a growing number of complaints about impersonation fraud and public outcry over the harm done to impersonated individuals and consumers.
Alongside this broader protection against deepfake impersonation, the FTC is reportedly also considering whether to make it illegal for a company, such as an AI platform that generates text, video, or images, to offer products or services that it knows or has reason to believe are being used to deceive consumers through impersonation.
The FTC has also shared critical updates to its current government and business impersonation rule, which has reportedly given the affected parties better protection against deepfakes.
According to the update, finalizing the rule has given the agency more powerful tools to combat con artists posing as companies or government agencies, including the ability to file federal court cases directly against scammers to force them to return the money earned from their government or business impersonation schemes, in addition to the supplemental notice.
AI Deepfake Concerns
TechCrunch adds that a recent poll found 85% of Americans expressed either great or moderate concern over the proliferation of deceptive audio and video deepfakes. The Associated Press-NORC Center for Public Affairs Research also reportedly conducted a separate survey, which showed that almost 60% of adults believe AI tools will spread more false and misleading information during the 2024 US election cycle.
This proposed deepfake ban comes after reports of New Hampshire residents receiving robocalls that used an AI-generated imitation of President Joe Biden's voice, telling them not to cast their ballots in last month's presidential primary and to save their votes for the general election in November.
Aside from prominent personalities being exploited in deepfake impersonation schemes, CNN also recently reported that Hong Kong authorities claim a finance employee of a multinational corporation was duped into paying $25 million to scammers who used deepfake technology to pose as the company's chief financial officer during a video conference call.
The worker was tricked into attending a video conference with what he believed to be other employees; in reality, they were all deepfake recreations, according to Hong Kong police, who revealed the elaborate scam at a briefing on Friday.
Accord Against AI and Deepfake Misuse
In light of the abuse of AI and large language models (LLMs) in impersonation and fraud schemes, six major tech companies (Google, Adobe, Microsoft, OpenAI, Meta, and TikTok) recently announced that they are creating an accord against AI and deepfake misuse. The accord resembles a manifesto asserting that AI-generated content poses risks to fair elections, much of it produced by the companies' own tools and shared on their platforms.
The pact reportedly suggests ways to reduce that risk, such as labeling content suspected of being AI-generated and informing the public about the risks associated with AI.