Teachers, Victims of Explicit AI Deepfakes Made by Students

Explicit AI deepfakes victimize everyone.

Teachers are now also becoming victims of sexually explicit, AI-generated deepfakes after a Catholic school in Melbourne, Australia, reportedly expelled a student for creating and spreading fake explicit AI images of a female teacher.

The male student was expelled from Salesian College, whose principal, Mark Ashmore, said the school would ensure the affected teachers get the support and pastoral care they need.

A lower school substitute teacher talks to a colleague (a 7th grade teacher and co-chair of the lower school) via laptop from her home due to the coronavirus outbreak on April 1, 2020, in Arlington, Virginia. OLIVIER DOULIERY/AFP via Getty Images

Salesian College says it has run instructional programs since the incident to teach students about cyber safety and respectful relationships.

Australia more broadly has seen a growing number of nonconsensual sexual AI deepfake cases; the Independent Education Union (IEU) says it has dealt with numerous incidents involving the problematic technology.

Just this week, approximately fifty Bacchus Marsh Grammar students had photos taken from their social media accounts altered with artificial intelligence to create deepfake nudes.

Cases of AI Deepfakes

Back in early April, a Tasmanian man became the first person in his state to be convicted over artificial intelligence-generated child abuse material. The 48-year-old was convicted of possessing, uploading, and downloading hundreds of prohibited AI-generated files.

After being detained and charged, the Gravelly Beach man pleaded guilty on March 26, 2024, to accessing and possessing child abuse material.

The Australian Federal Police described the case at the time as the first conviction in Tasmanian history involving AI-generated content meant for child exploitation.

According to AFP Detective Sergeant Aaron Hardcastle, the investigation was significant because it marked the first time police in Tasmania had discovered and seized artificial intelligence-generated child abuse material. The content is reportedly "repulsive" regardless of whether the images were created by AI or featured a real child victim.

Governments Against AI Deepfakes

Outside Australia, the US has asked tech giants to voluntarily support efforts to restrict damaging AI capabilities. US officials are concerned that these kinds of nonconsensual AI images, which include sexual images of minors, can be produced, distributed, and profited from by the commercial sector.

Additionally, the White House urged tech companies to impose restrictions on websites and apps that purport to let users take and alter sexual images of people without their consent.

Last month, the Ministry of Justice in the United Kingdom likewise announced plans to criminalize the creation of sexually explicit deepfake images made without consent.

Under the legislation, anyone who creates such an image without consent faces an unlimited fine and a criminal record. If the image is shared more widely, the person could also face jail time.

The government states that creating such a deepfake image will be unlawful regardless of whether the creator intends to distribute it.

The rule is designed to help law enforcement crack down on an increasingly common practice that degrades and dehumanizes victims. With violence against women classified as a national threat in the UK, police are expected to prioritize their response to it.

Written by Aldohn Domingo
Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.