A third-grade teacher has reportedly been arrested for possessing child pornography, including AI-generated child porn created using yearbook photos of three students.
The Pasco County Sheriff's Office reports that the accused is Steven Houser, 67, who teaches third-grade science at Beacon Christian Academy in New Port Richey.
After receiving a tip about Houser, the sheriff's office began an investigation. According to a PCSO press release, Houser possessed three videos and two images of child pornography.
The sheriff's office said none of that material featured his students. However, Houser admitted to deputies that he used yearbook photos of three students to create AI-generated child pornography.
According to sources, Beacon Christian Academy has not responded to inquiries about Houser's arrest or whether he is still employed there.
Laws Against AI-Generated Porn
The case is another illustration of the reality of AI deepfakes and AI-generated pornography. Last month, middle schoolers in Beverly Hills used AI to create and distribute nude images bearing other students' faces, prompting the local police department to launch an investigation.
That investigation has raised questions about gaps in the laws prohibiting AI-generated pornography. An eighth-grader in California can reportedly face legal repercussions for sharing a nonconsensual nude photo of a classmate, but it is said to be unclear whether any state law would apply if the image is a deepfake produced by AI.
This has led to calls for Congress to prioritize children's safety in the United States. Technology, particularly social media and AI, has the potential to be used for good, but left unchecked, it can also do serious harm.
According to Santa Ana criminal defense lawyer Joseph Abrams, an AI-generated nude does not depict a real person. He characterized it as child erotica rather than child pornography and said that, from his perspective as a defense attorney, it does not violate that particular statute or any other.
Deepfakes have reportedly posed significant risks worldwide for some time. One driver of this alarming issue is the accessibility of AI tools such as DALL-E and Stable Diffusion, which enable individuals with limited technical knowledge to create deepfakes.
Actions Against AI Deepfakes
To mitigate the issue, the Biden administration supports digital watermarking, while Google and Meta use "digital credentials" to label content produced by AI, raising public awareness and making removal simpler.
Working under the Coalition for Content Provenance and Authenticity (C2PA) guidelines, OpenAI is building detection tools such as hidden metadata and visible watermarks. Specialized platforms like Sensity, designed to authenticate the source of content, offer an extra layer of security.
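For illustration only, here is a minimal Python sketch of what checking an image for provenance clues might look like, using the Pillow library. It is a toy under stated assumptions: real C2PA Content Credentials are stored in dedicated JUMBF metadata blocks and require a C2PA-aware parser, and the file name below is hypothetical.

```python
# Minimal sketch: scan an image's embedded metadata for provenance hints.
# Real C2PA credentials need a dedicated parser; this only shows the general
# idea of inspecting metadata fields for signals of AI generation.
from PIL import Image

def provenance_hints(path: str) -> dict:
    """Collect metadata fields that might hint at AI generation or provenance."""
    img = Image.open(path)
    hints = {}
    # Pillow exposes format-level metadata (e.g., PNG text chunks) via img.info.
    for key, value in img.info.items():
        if isinstance(value, str) and any(
            marker in value.lower() for marker in ("c2pa", "generated", "credential")
        ):
            hints[key] = value
    # The EXIF Software tag (0x0131) sometimes names the tool that made the file.
    software = img.getexif().get(0x0131)
    if software:
        hints["Software"] = software
    return hints

if __name__ == "__main__":
    print(provenance_hints("example.png"))  # hypothetical file name
```

Absence of such metadata proves nothing, since it can be stripped; that is why detection efforts pair metadata with watermarks embedded in the pixels themselves.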
New defensive tools like Nightshade are designed to prevent image alteration. They embed imperceptible signals that interfere with AI processing while the image looks unchanged to human viewers.
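To show the general idea of an imperceptible pixel-level signal (not Nightshade's actual technique, which computes model-targeted perturbations), here is a toy Python sketch that adds noise far too faint for the eye to notice; the file names are hypothetical.

```python
# Toy sketch: add a low-magnitude pseudo-random signal to every pixel.
# This is NOT Nightshade's method; it only demonstrates that small pixel
# changes can be invisible to human viewers. Assumes numpy and Pillow.
import numpy as np
from PIL import Image

def perturb(src: str, dst: str, strength: float = 2.0, seed: int = 0) -> None:
    """Write a copy of the image with faint noise (a few intensity levels)."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(Image.open(src).convert("RGB"), dtype=np.float32)
    noise = rng.uniform(-strength, strength, size=pixels.shape)
    shielded = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    # Save losslessly (e.g., PNG); JPEG compression would crush the signal.
    Image.fromarray(shielded).save(dst)

perturb("original.png", "shielded.png")  # hypothetical file names
```

Real tools shape the signal against specific models rather than using random noise, which is what lets the perturbation actually disrupt AI processing.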