Artificial intelligence (AI) continues to be a remarkable technology, but new Google-authored research claims that AI-generated images are quickly growing as a source of misinformation.
AI technology has given rise to new forms of reality-warping misinformation online, from fake images of war to celebrity hoaxes. The study, which was co-authored by researchers from Google, Duke University, and several fact-checking and media groups, was released as a preprint last week.
It draws on a massive new dataset of misinformation claims dating back to 1995 that were verified by fact-checking websites such as Snopes. According to the researchers, the data shows that AI-generated images have quickly gained popularity, achieving almost as much prominence as more traditional forms of manipulation.
The study, initially reported by 404 Media after being spotted by the Faked Up newsletter, shows that until early last year, AI-generated images "made up a minute proportion of content manipulations overall," the researchers said.
Wave of AI Hype Equals AI-Generated Misinformation
Major tech companies like OpenAI, Microsoft, and Google released new AI image-generation tools last year. According to the report, image-based disinformation produced by AI is now almost as prevalent as text and other content manipulations.
According to the researchers, the rise in fact-checks of AI images paralleled a general surge in AI hype, which might have led fact-checking websites to concentrate more on the technology.
The research indicates a slowdown in fact-checks of AI-generated content in recent months, even as traditional text and visual manipulations have increased. According to the study, which also examined other media, video hoaxes now account for almost 60% of all fact-checked claims involving media.
Troubling AI Deepfakes
AI deepfakes have also become a prominent source of misinformation. Cybersecurity experts recently warned that scams built on them are becoming more common and sophisticated.
Great Hill Partners cybersecurity specialist Jason Hogg pointed out that deepfakes of senior executives could propagate misinformation, influence stock prices, destroy people's reputations, and distribute harmful content.
According to Hogg, a former FBI special agent, such deepfakes can be produced by generative AI using digital data harvested from social media and other sources.
Hogg believes these challenges will only continue to grow, because effective cybercrime prevention requires thorough work to build the systems, procedures, and controls needed to combat emerging technology.
To counter AI-powered threats, cybersecurity experts advise strengthening defenses through improved staff training, frequent cybersecurity testing, and multi-layered transaction approval processes.
One unsettling incident reportedly involved a Hong Kong-based finance worker who was duped into transferring around $25 million to scammers. The scammers tricked the employee by posing as coworkers on a video call using deepfake technology.
Arup, the UK engineering firm involved in the Hong Kong case, acknowledged the incident but did not disclose further details while the investigation was ongoing.
Netskope Chief Information and Security Officer David Fairman said that widely available AI tools now allow hackers to carry out complex schemes with less technical expertise.
Related Article: IS Terrorist Group Leverages AI Deepfakes for Propaganda