Pop icon Taylor Swift is considering taking legal action against a deepfake porn website that features AI-generated explicit images of her, according to a report.
The adult site, which openly violates state pornography laws, targeted Swift with graphic images depicting her engaged in sexual acts while wearing Kansas City Chiefs gear in a stadium setting.
The Daily Mail reported on Thursday, Jan. 26, that a source close to Swift said legal action is under consideration. The source described the fake AI-generated images as abusive, offensive, and exploitative, stressing that they were created without Swift's consent or knowledge. The source also pointed to the immediate removal of the responsible X account and raised concerns about social media platforms allowing such explicit content to spread.
Social Media Sites Take Down Taylor Swift Deepfakes
The singer-songwriter found the deepfake images proliferating across several platforms, including X, Facebook, Instagram, and Reddit. X and Reddit began removing the posts on Thursday morning. Meta, the parent company of Facebook and Instagram, said the Taylor Swift deepfakes violate its policies and that it is actively removing them and taking action against the accounts involved.
The deepfake site operates in plain sight while concealing its operators behind proxy IP addresses, and social media trolls who repost the explicit images compound the problem.
For months, Swift has endured misogynistic criticism for publicly supporting her partner, Kansas City Chiefs player Travis Kelce, by attending NFL games. Addressing the backlash in an interview with Time, she said, "I have no awareness if I'm being shown too much and pissing off a few dads, Brads, and Chads," according to NBC News.
Tech Firms, Politicians Taking Action Against AI Deepfakes
As calls mount for the website to be taken down and for criminal investigations into its operators, the Taylor Swift deepfake episode underscores the urgent need for legislation to curb the proliferation of AI deepfake content across online platforms. There is currently no federal law in the US regulating the creation and dissemination of nonconsensual sexually explicit deepfakes.
Congressman Tom Kean, Jr., a vocal critic of the unchecked advancement of AI technology, has emphasized the need for safeguards against this trend. He introduced the AI Labeling Act, a step toward addressing the challenges posed by deepfake content.
Representative Joe Morelle, a Democrat from New York, introduced a bill in May 2023 that would criminalize nonconsensual sexually explicit deepfakes at the federal level. The bill has not advanced, even after a teenage deepfake victim spoke out in support of it in early January.
Earlier this year, TikTok began requiring labels on all realistic deepfakes and other manipulated content and banned deepfakes of private figures and young people, per The Independent. Meta, OnlyFans, and Pornhub have collaborated on an online tool called Take It Down, which lets users report explicit content of themselves posted online.
According to fraud-detection company Sensity AI, over 90% of currently active deepfakes are pornographic.