Deepfake scams are increasingly targeting businesses, resulting in significant financial losses worldwide.
Cybersecurity experts warn that these scams, using advanced artificial intelligence to create convincing fake videos, audio, and images, are becoming more common and sophisticated, according to a report by CNBC.
The Rise of Deepfake Scams
One alarming incident cited by CNBC involved a finance worker in Hong Kong who was tricked into transferring over $25 million to fraudsters. The criminals used deepfake technology to impersonate colleagues on a video call, convincing the worker to make the transfer.
UK engineering firm Arup, involved in the Hong Kong case, confirmed the incident but withheld details due to an ongoing investigation. According to David Fairman, Chief Information and Security Officer at Netskope, the public accessibility of AI tools has made it easier for cybercriminals to conduct sophisticated scams without advanced technical skills.
Generative AI can produce realistic text, images, and videos, making it a powerful tool for manipulating digital content. An Arup spokesperson told CNBC that their operations face regular attacks, including invoice fraud, phishing scams, voice spoofing on WhatsApp, and deepfakes.
In the Hong Kong incident, the finance worker attended a video call with people he believed to be the company's chief financial officer and other staff members.
These participants were deepfakes created to deceive him into transferring a large sum of money. Arup confirmed the use of fake voices and images in this scam and reported a sharp increase in such attacks recently.
A similar incident occurred in Shanxi province, China, where a female financial employee was deceived into transferring 1.86 million yuan ($262,000) following a video call with a deepfake of her boss.
In August 2023, researchers at the Google-owned cybersecurity firm Mandiant documented cases of cybercriminals using AI and deepfake technology for phishing scams and misinformation. The use of such technology for these purposes is expected to accelerate with new generative AI tools.
Deepfakes Could Worsen and Accelerate for a While
Jason Hogg, a cybersecurity expert at Great Hill Partners, noted that deepfakes of senior company executives could be used to spread false information, manipulate stock prices, damage reputations, and disseminate harmful content.
Hogg, a former FBI Special Agent, explained that generative AI could create deepfakes using digital information from social media and other platforms.
Hogg predicts that the broader problem will worsen and accelerate for some time, because building effective cybercrime defenses requires careful analysis to develop the systems, practices, and controls needed to counter these new technologies.
To counter AI-powered threats, cybersecurity experts recommend strengthening defenses through staff education, regular cybersecurity testing, and multi-layered approval processes for transactions.
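To make the last recommendation concrete, here is a minimal, purely illustrative sketch of how a multi-layered approval rule might gate a large transfer. It is not taken from the report or from any company's actual controls; the threshold, approver count, and identifiers are assumptions chosen only to show the idea that no single person on a call or email thread can release a large payment on their own.

```python
# Illustrative sketch (not from the article): a large transfer is released only
# after sign-off from several distinct approvers, ideally confirmed through a
# separate, verified channel (e.g. a call-back to a known phone number), not the
# same video call or email thread that requested the payment.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 100_000  # hypothetical amount above which extra approvals apply
REQUIRED_APPROVERS = 2        # hypothetical number of independent approvers


@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver_id: str) -> None:
        # Record each approver once; duplicates from the same person don't count twice.
        self.approvals.add(approver_id)

    def can_execute(self) -> bool:
        # Small payments pass; large ones need multiple independent approvals.
        if self.amount < APPROVAL_THRESHOLD:
            return True
        return len(self.approvals) >= REQUIRED_APPROVERS


# Usage: the transfer stays blocked until enough distinct approvers have signed off.
req = TransferRequest(amount=25_000_000, beneficiary="vendor-123")
req.approve("cfo")
print(req.can_execute())   # False: only one approval so far
req.approve("treasury-controller")
print(req.can_execute())   # True: a second independent approval was recorded
```

The point of the design is that a convincing deepfake of one executive is not enough; an attacker would have to compromise several independently verified approvers before any money moves.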