Twitch Streamer Atrioc's Deepfake Porn Controversy Sparks Wide-Sweeping Debate on AI Ethics

There's no escaping artificial intelligence as it continues to seep into the collective consciousness through a swath of different avenues. Heading into 2023, ChatGPT proved to be a major highlight, and still very much is, as Google pours nearly $400 million into Anthropic, a rival AI firm. An AI-generated recreation of Seinfeld also drew major interest on Twitch's live broadcasting platform, where it was recently banned for transphobic content.

But Twitch saw its own swirling controversy over AI when live streamer Atrioc accidentally showed a deepfake porn website to several thousand viewers on Monday, Jan. 30. Later that same day, he came forward on a live broadcast to apologize. The website in question, which has since been taken down by its creator, featured AI-generated images of fellow content creators, including Pokimane, QTCinderella, and Maya, in lewd scenarios.

The incident became a major talking point across social media as the deepfake concept took the internet by storm. Despite their relative newness to some, deepfakes have been around for quite some time, and their very existence raises open questions about how we understand AI and what we ultimately intend to use it for going forward.

One streamer involved in the affair, QTCinderella, went live the same day to share her thoughts on the matter, tearfully stating, "this is what the pain looks like." She wasn't alone, as several other female content creators and streamers came forward with similar reactions. The most prominent was Maya Higa, who poignantly compared the deepfake content in question to a prior trauma involving sexual abuse.

There is no question that deepfake content made without the depicted individual's consent is wrong, and it opens a pathway to some of the more dangerous uses AI can enable, and already has, as it swiftly evolves. Seeing oneself in faked pornographic images can be debilitating, but the same technology could be put to far worse ends. Look no further than Asmongold's recent stream featuring an "Asmon AI" that mimics the real Asmongold nearly to a T for evidence that the technology can be quite alarming in the wrong hands.

Even ChatGPT's own creator thinks that AI needs to be regulated, much like similar tech ecosystems such as cryptocurrency. Only recently has the US begun to seriously engage with AI, as evidenced by the 2021 National Artificial Intelligence Initiative. The initiative isn't limited to the national security concerns one might expect, however; it is also intended "to accelerate AI research and application for the Nation's economic prosperity." Still, when it comes to deepfake porn specifically, Texas, California, and Virginia are the only states that explicitly deem it illegal.

Similar legislative efforts are being devised in the European Union, New Zealand, and Australia, but they remain a step behind. In the latter two especially, most rules surrounding AI are mere frameworks, with nothing official on the books. The EU's more recent AI Act, however, is something of note, targeting "high-risk applications" like CV-scanning and candidate-ranking tools.

With the world of artificial intelligence growing at a breakneck pace, especially given the meteoric rise in AI stocks of late, better avenues for compensation, accountability, and content removal are needed for such frauds. Deepfake content amounts to a form of identity theft, and as more money gets funneled into the sphere, projected to reach as high as $15.27 trillion by 2030, the technology will only grow better, and thus more menacing.
