Facebook to Develop a Model to Fight Deepfake Technology – Protection Against Future Threats

Facebook to Fight Deepfake Technology (Photo: Getty Images / Brendon Thorne)

As of June, deepfakes are not much of a concern for Facebook. However, the tech giant is still motivated to guard the company and its users against future threats that could damage their credibility.

Facebook now has a well-funded research team that plans to find a way to combat deepfakes.

The team's latest project is a collaboration with several academics from Michigan State University. Together, they created a method that can reverse-engineer deepfakes by analyzing artificial intelligence-generated imagery to reveal the identifying characteristics of the machine learning model that produced it.

How Facebook's Plan Will Help the Company

Facebook's research will help the social media company track down bad actors who spread deepfakes across multiple Facebook accounts.

The Verge reported that such content may include misinformation and non-consensual pornography, a common application of deepfake technology.

The project is still in the works and is not yet ready for deployment.

Combating Deepfake Technology

Previous research in the area of deepfake technology could determine the specific AI model that generated a deepfake.

However, Facebook's project takes it up a notch by identifying the specific architectural traits of unknown models. These traits, referred to as hyperparameters, are tuned in each machine learning model as part of its engine.

Collectively, these hyperparameters leave a unique fingerprint on the finished image, which can later be used to identify its source.
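As a rough, hypothetical illustration of that idea (not Facebook's actual method), a generator's fingerprint can be approximated as the high-frequency noise residual it leaves in its output; averaging residuals over several images known to come from the same model gives a reference pattern that a suspect image can be compared against. All function names and the correlation measure below are assumptions made for this sketch.

```python
import numpy as np

def noise_residual(image: np.ndarray, k: int = 3) -> np.ndarray:
    """High-pass residual: the image minus a box-blurred copy of itself.
    A crude stand-in for the learned fingerprint extractors used in research."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    return image.astype(float) - blurred

def model_fingerprint(images) -> np.ndarray:
    """Average the residuals of several images known to come from one generator."""
    return np.mean([noise_residual(img) for img in images], axis=0)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical usage with synthetic data: build a fingerprint from five images
# attributed to one generator, then score a new suspect image against it.
rng = np.random.default_rng(0)
known = [rng.integers(0, 256, (64, 64)) for _ in range(5)]
fingerprint = model_fingerprint(known)
suspect = rng.integers(0, 256, (64, 64))
print(f"similarity to known generator: {correlation(noise_residual(suspect), fingerprint):.3f}")
```

In published fingerprinting work the extractor is usually a learned network rather than a simple blur filter, but the attribution step follows the same pattern: extract a residual, compare it against known fingerprints, and report the closest match.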

According to a Softpedia News report, Tal Hassner said that identifying the traits of unknown models is essential because deepfake software is easy to customize.

How the Project Works

That easy customization lets bad actors cover their tracks when investigators try to trace their locations and activities.

Hassner said that with the new AI model, Facebook would have an easier time tracking deepfakes. If a bad actor were to generate various deepfakes and spread them across numerous platforms, the AI could detect that the photos all came from a single device or model.
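To make that grouping concrete, here is a minimal sketch that reuses the hypothetical noise_residual and correlation helpers from the previous snippet and greedily clusters images whose residuals correlate above a threshold. The threshold value and the greedy grouping are illustrative assumptions, not the project's actual algorithm.

```python
def group_by_source(images, threshold=0.5):
    """Greedily group images whose noise residuals correlate above a threshold,
    as a crude proxy for 'these likely came from the same generator'."""
    residuals = [noise_residual(img) for img in images]
    groups = []  # each group is a list of indices into `images`
    for i, res in enumerate(residuals):
        for group in groups:
            # Compare against the group's first member as its representative.
            if correlation(res, residuals[group[0]]) > threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical usage: the first three images share an underlying pattern
# (one "generator"), the last two are unrelated noise.
base = rng.integers(0, 256, (64, 64)).astype(float)
same_source = [base + rng.normal(0, 5, base.shape) for _ in range(3)]
unrelated = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(2)]
print(group_by_source(same_source + unrelated))  # expected: [[0, 1, 2], [3], [4]]
```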

The model makes catching the culprit easier, Hassner added.

He compared the model to several forensic techniques used to identify the specific camera model used to take a photo by simply looking at the patterns in each image.

Furthermore, he stated that anyone with practical experience in the field and a standard computer could quickly cook up their own model to generate deepfakes.

In 2020, Facebook hosted a deepfake detection contest. The winning algorithm was able to detect AI-manipulated clips 65.18% of the time. Although impressive, that rate is still not high enough to be reliable at all times.

In the meantime, it is best to think twice before posting and sharing anything on social media because deepfake technologies can quickly ruin a person's reputation.

This article is owned by Tech Times

Written by Fran Sanders

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.