University of Maryland (UMD) researchers undertook a comprehensive investigation into the manipulation and removal of watermarks, solidifying concerns about the current reliability of watermarking solutions. Their experiments demonstrated the ease of bypassing existing watermarking methods and even adding counterfeit symbols to non-AI-generated images.
Manipulating Watermarks
Researchers from the University of Maryland (UMD) set out to thoroughly investigate how easily malicious individuals can manipulate or erase watermarks, as reported by Engadget.
Soheil Feizi, a professor at UMD, shared that his team's recent findings have solidified his doubts that reliable watermarking solutions exist in the present landscape.
During their experiments, the researchers effortlessly bypassed prevailing watermarking methods, and they found that implanting counterfeit watermarks onto images that were not AI-generated was an even simpler task.
Regarding one of the two AI watermarking approaches he investigated in his recent study, "low perturbation" watermarks, Feizi takes an even more unequivocal stance: for these watermarks, which are designed to remain invisible, he sees no foreseeable path to success.
Feizi and his colleagues examined how easily bad actors can remove watermarks, which they call "washing out" the watermark. They also showed how attackers can add watermarks to ordinary images created by people, which can cause detectors to flag human-made content as AI-generated.
Timely & Relevant
Although the paper is available online only as a preprint and has not yet been peer reviewed, it is worth paying attention to because Feizi is an influential figure in AI detection, even at this early stage of the research. Wired reported that the work comes at an opportune moment.
Watermarking has emerged as a promising strategy to identify AI-generated images and text. Just as physical watermarks are embedded on paper money and stamps to verify their authenticity, digital watermarks serve to trace the origins of online content, aiding in the detection of deepfake videos and texts generated by bots.
With the upcoming 2024 US presidential elections, concerns regarding manipulated media are running high, and some individuals are already falling victim to deception. For instance, former US President Donald Trump shared a fabricated video of Anderson Cooper on his Truth Social platform, where Cooper's voice had been replicated using AI.
In a collaborative research effort involving the University of California, Santa Barbara, and Carnegie Mellon University, scientists found that simulated attacks could easily remove watermarks.
The study distinguishes between two primary methods for eradicating watermarks through these simulated assaults: destructive and constructive strategies. In destructive attacks, malicious actors treat watermarks as an integral part of the image.
Modifying attributes such as brightness or contrast, applying JPEG compression, or even simply rotating the image can effectively eliminate the watermark. The drawback is that while these techniques remove the watermark, they also significantly degrade image quality, leaving the result visibly inferior.
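To illustrate the destructive category in concrete terms, here is a minimal sketch (not code from the study; the helper names are hypothetical) of how such pixel-level transforms might be applied to a grayscale image represented as a nested list of 0–255 values. A real attack would operate on full-size bitmaps and typically also apply JPEG recompression.

```python
def adjust_brightness(pixels, factor):
    """Scale every pixel value by `factor`, clamping to the 0-255 range.

    Uniform rescaling like this perturbs the pixel statistics a
    watermark may be hidden in, at the cost of altering the image.
    """
    return [[min(255, max(0, int(p * factor))) for p in row]
            for row in pixels]


def rotate_90(pixels):
    """Rotate the image 90 degrees clockwise.

    Geometric transforms can break watermarks that depend on
    pixel positions.
    """
    return [list(row) for row in zip(*pixels[::-1])]


# A tiny 2x3 "image" used purely for demonstration.
img = [[10, 20, 30],
       [40, 50, 60]]

brighter = adjust_brightness(img, 1.5)  # [[15, 30, 45], [60, 75, 90]]
rotated = rotate_90(img)                # [[40, 10], [50, 20], [60, 30]]
```

The trade-off the researchers describe is visible even here: the brightness pass changes every pixel value, so any watermark signal embedded in those values is disturbed, but so is the image itself.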
Related Article: This Watermark-removing AI Could Be a Major Problem for Pro Photographers; Here's What You Can Do