A recent report by the Internet Watch Foundation (IWF) reveals a disturbing trend: AI is being used to create deepfake images of child sexual abuse. What's alarming is that the pictures used are based on real victims.
While the AI tools used to generate these images remain legal in the UK, creating AI child sexual abuse images is illegal.
Real-Life Victims Targeted by AI
One harrowing case involves a victim, referred to as Olivia, who was subjected to rape and torture from ages three to eight. Despite her rescue by police in 2013, dark web users continue to exploit AI tools to create new abusive images of her.
The IWF discovered that a model for generating images of Olivia, now in her 20s, is available for free download. Additionally, a dark web forum shared links to AI models for 128 other named child sexual abuse victims.
"For many survivors, the knowledge that they could be identified, or even recognized from images of their abuse is terrifying," the organization said.
The Impact of Deepfake Images on Survivors
A spokesperson for the IWF highlighted the continuous victimization of survivors like Olivia, whose abusive imagery is persistently shared, sold, and viewed online.
According to The Independent, the advent of generative text-to-image AI has intensified this torment, allowing perpetrators to produce unlimited images of the children. This ongoing circulation of abusive imagery inflicts mental torture on survivors, who fear being recognized or identified from these images.
The Scale and Realism of AI-Generated Images
IWF analysts found that 90% of AI-generated images were realistic enough to be classified under the same laws as actual child sexual abuse material (CSAM).
Moreover, the images are becoming increasingly extreme, with some achieving near-flawless, photo-realistic quality. Because the images are so realistic, the IWF warns, hundreds can be generated at the click of a button, exacerbating the suffering of survivors.
Urgent Need for Action
IWF chief executive Susie Hargreaves emphasized the need for immediate and effective responses from industry, regulators, and the government to address this growing threat.
More importantly, child predators must be detected quickly to put an end to the circulation of disturbing child sexual abuse images online.
Richard Collard of the NSPCC echoed these concerns, highlighting the rapid development of AI-generated child abuse images and the lack of child safety considerations in AI product development. He called child protection a vital component of any government legislation on AI safety. He urged tech companies to take decisive action to prevent the spread of AI-generated abuse.
Government's Response to Prevalent AI Deepfakes of Child Sexual Abuse
A government spokesperson acknowledged the IWF report and pledged to consider its recommendations carefully. The increasing prevalence of AI-generated child sexual abuse images underscores the urgency of implementing robust measures to protect children and support survivors.
The rise of AI-generated deepfake child abuse images is a troubling challenge. It is crucial for all stakeholders, including governments, tech companies, and regulatory bodies, to collaborate and take strong, proactive measures to combat this issue.
By doing so, they can help protect vulnerable children and provide justice and support for survivors like Olivia.
Students are commonly the victims of AI deepfakes, but teachers are no exception. Tech Times reported last month that students from a Melbourne-based Catholic school used AI to create explicit images of a female teacher.