Deepfakes Are Getting Harder to Identify, but Scientists Suggest Using AI to Detect Real Signs of Life

This method was inspired by sci-fi classics like "Blade Runner."

Detecting deepfake voices is becoming increasingly challenging due to advancements in artificial intelligence, as seen in recent instances such as the fake Joe Biden robocall and the counterfeit Taylor Swift cookware ad on Meta.

However, scientists at Klick Labs propose an alternative approach: using AI to identify authentic human characteristics.

Fighting AI Deepfakes with AI

Drawing inspiration from their clinical studies utilizing vocal biomarkers for healthcare improvements and their fascination with sci-fi classics like "Blade Runner," researchers at Klick Labs developed a method for detecting audio deepfakes.

This method focuses on identifying signs of life, such as breathing patterns and micropauses in speech, which are often absent in fabricated content.

According to Yan Fossat, senior vice president of Klick Labs and principal investigator of the study, this innovative approach leverages vocal biomarkers, previously imperceptible to the human ear, to differentiate between genuine and fake audio content.

"Our findings highlight the potential to use vocal biomarkers as a novel approach to flagging deepfakes because they lack the telltale signs of life inherent in authentic content," said Fossat.

"These signs are usually undetectable to the human ear, but are now discernible thanks to machine learning and vocal biomarkers."

The study details how the combination of vocal biomarkers and machine learning can effectively discern between authentic and manipulated audio.

In an analysis involving 49 participants with diverse backgrounds and accents, the researchers trained deepfake models on the participants' voice samples; their detection approach then distinguished real from fake audio with approximately 80 percent accuracy.
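The pause-based idea behind the study can be sketched in a few lines: frame the audio, flag low-energy frames as silence, and group consecutive silent frames into pauses whose count and duration become features for a classifier. This is a minimal, hypothetical illustration, not the authors' actual pipeline; the function name, thresholds, and frame sizes below are illustrative assumptions.

```python
import numpy as np

def pause_features(signal, sr, frame_ms=25, energy_thresh=0.02, min_pause_ms=50):
    """Extract simple speech-pause features from a mono waveform.

    Frames the signal, marks low-RMS frames as silence, and groups
    consecutive silent frames into pauses. All thresholds are
    illustrative placeholders, not values from the study.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    silent = rms < energy_thresh

    # Group runs of consecutive silent frames into pauses.
    pauses_ms = []
    run = 0
    for is_silent in silent:
        if is_silent:
            run += 1
        else:
            if run * frame_ms >= min_pause_ms:
                pauses_ms.append(run * frame_ms)
            run = 0
    if run * frame_ms >= min_pause_ms:
        pauses_ms.append(run * frame_ms)

    return {
        "pause_count": len(pauses_ms),
        "mean_pause_ms": float(np.mean(pauses_ms)) if pauses_ms else 0.0,
        "pause_ratio": float(silent.mean()),
    }
```

Feature vectors like these would then be fed to an ordinary machine-learning classifier trained on labeled real and synthetic audio, which is where the "machine learning plus vocal biomarkers" combination described above comes in.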

These findings come amidst a backdrop of rising concerns over voice cloning scams, Meta's plans to introduce AI-generated content labels, and regulatory actions like the Federal Communications Commission's (FCC) decision to outlaw deepfake voices in robocalls.

With the looming specter of deepfake misuse, the scientists emphasized the importance of continuously improving detection technology to counter increasingly realistic fabrications.

Klick Applied Sciences is a multidisciplinary team of data scientists, engineers, and biological scientists. The team spearheads scientific research and develops AI/ML and software solutions to support commercial endeavors, leveraging the company's extensive expertise across various domains.

The study, titled 'Investigation of Deepfake Voice Detection using Speech Pause Patterns: Algorithm Development and Validation,' was published in JMIR Biomedical Engineering.

FTC's Deepfake Ban

In related news, the Federal Trade Commission (FTC) is considering amendments to its existing deepfake ban, aiming to extend protection against AI-generated impersonation to all consumers, beyond the current coverage of businesses and government agencies.

The proposed modifications come in response to escalating concerns surrounding impersonation fraud and public outcry over the detrimental impacts of deepfake deception.

The FTC's envisioned revisions also contemplate prohibiting companies from knowingly offering products or services that facilitate customer deception through impersonation, thereby broadening the scope of accountability.

Additionally, updates to the existing government and business impersonation rule empower affected parties to pursue legal action against perpetrators, including seeking restitution for losses incurred from impersonation schemes.


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.