Logs from information-stealing malware sold on the dark web have helped identify numerous individuals who download and share child sexual abuse material (CSAM), highlighting a new investigative technique for law enforcement.

The data allowed Recorded Future's Insikt Group to identify 3,324 unique accounts that accessed CSAM-distributing portals. Analysts used the stolen data to trace these identities across platforms, obtaining usernames, IP addresses, and system information, as reported by BleepingComputer.

Law enforcement can use this information to identify perpetrators and make arrests. Logs generated by infostealers such as RedLine, Raccoon, and Vidar contain critical data, including passwords, browsing history, cryptocurrency wallet details, and more. These records are packaged and sold on the dark web, facilitating further criminal activity.

Between February 2021 and February 2024, Insikt identified offenders by cross-referencing credentials captured in stealer logs with known CSAM domains, then removing duplicates to isolate unique username-password matches.
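To illustrate the kind of cross-referencing and deduplication Insikt describes, here is a minimal Python sketch. It assumes parsed stealer-log records in a CSV with url, username, and password columns and a placeholder watchlist of CSAM domains; the file name, column names, and domains are all hypothetical, not Insikt's actual pipeline or data.

```python
import csv
from urllib.parse import urlparse

# Placeholder watchlist; real analyses rely on curated lists of known
# CSAM-distributing domains.
CSAM_DOMAINS = {"known-portal.example"}

def unique_matches(log_path: str) -> set[tuple[str, str, str]]:
    """Return deduplicated (domain, username, password) triples for
    saved credentials whose URL falls on a watchlisted domain."""
    seen = set()
    with open(log_path, newline="", encoding="utf-8") as f:
        # Expected columns (hypothetical): url, username, password
        for row in csv.DictReader(f):
            domain = urlparse(row["url"]).hostname or ""
            if domain in CSAM_DOMAINS:
                seen.add((domain, row["username"], row["password"]))
    return seen

matches = unique_matches("stealer_logs.csv")
print(f"{len(matches)} unique username-password matches")
```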

CSAM Criminals Exposed

Researchers can use data captured by information-stealing malware to link CSAM account holders to their email, banking, and social networking accounts. Cryptocurrency transactions, browsing history, and autofill data provide additional insights.
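A simple way to picture that account-linking step: once records share an identifier such as a username or email address, accounts on different services can be grouped together. The records below are invented for illustration; a real stealer-log bundle ties many such artifacts to a single infected machine.

```python
from collections import defaultdict

# Invented credential records from one hypothetical stealer-log bundle.
records = [
    {"domain": "known-portal.example", "username": "user123"},
    {"domain": "mail.example.com", "username": "user123"},
    {"domain": "social.example.net", "username": "user123"},
    {"domain": "bank.example.com", "username": "jdoe@example.com"},
]

# Pivot on the shared identifier: group every domain seen for each
# username, then surface identities that touch a watchlisted domain.
by_identity = defaultdict(set)
for rec in records:
    by_identity[rec["username"]].add(rec["domain"])

for identity, domains in by_identity.items():
    if "known-portal.example" in domains:
        print(identity, "->", sorted(domains))
```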

Insikt's pioneering use of infostealer data demonstrates its potential to improve the tracking and prosecution of child sexual exploitation.

This development comes as child predators increasingly use artificial intelligence (AI) to create sexually explicit images of children, hampering law enforcement efforts to prevent online sexual exploitation.

Stanford University's Internet Observatory found that AI-powered tools have allowed criminals to create fake images and videos based on photos of real children, fueling the growth of child sexual abuse content, CNA reported.

Bryce Westlake, an associate professor in San Jose State University's Department of Justice Studies, noted that online criminals can create images of "anything they can imagine," underscoring law enforcement's concerns. He added that CSAM is now widespread on social media and private websites.

As of 2023, the National Center for Missing and Exploited Children's CyberTipline had recorded over 36 million reports of suspected child sexual abuse, highlighting the pervasive impact on victims and their families.

The proposed Kids Online Safety Act in the United States and the Online Harms Act in Canada attempt to hold social media companies accountable for harmful AI-generated material. 


(Photo: PAUL J. RICHARDS/AFP via Getty Images) A computer terminal at the Department of Homeland Security's expanded ICE Cyber Crimes Center facilities in Fairfax, Virginia, on July 22, 2015. The forensic lab combats cybercrime cases involving underground online marketplaces, child exploitation, intellectual property theft, and other computer and online crimes.

AI Hampers US Law Enforcement's Efforts Against Online Child Abuse

However, according to a Guardian investigation, social media companies' use of AI for content moderation is making child sexual abuse detection and reporting harder, potentially allowing offenders to escape prosecution.

US law requires social media companies to report CSAM to the National Center for Missing and Exploited Children (NCMEC). In 2022, NCMEC received over 32 million reports of suspected child sexual exploitation, comprising 88 million photographs, videos, and related files, from both private and public sources.

Meta, which operates Facebook, Instagram, and WhatsApp, generates 84% of these reports, at over 27 million. The platforms' AI algorithms flag questionable content, which human moderators review before reporting it to NCMEC and law enforcement. Notably, law enforcement can only access AI-generated CSAM reports after obtaining a search warrant to serve on the reporting company, a process that can take days or weeks.

Staca Shehan, vice president of NCMEC's analytical services division, noted the legal constraint: "If the company has not indicated that they viewed the file before reporting it to NCMEC, law enforcement cannot open or review it without legal process."

Over a decade ago, courts ruled that NCMEC's investigations constitute governmental action, triggering Fourth Amendment protections against unreasonable searches and seizures.

Child safety and legal experts warn that such delays can compromise investigations, resulting in lost evidence and endangering children. One anonymous assistant US attorney noted that the delay poses a high risk to community safety, as it enables offenders to "continue their activities undetected, placing every child at risk."

Despite these challenges, AI is also helping police and advocacy groups fight online child exploitation.

Dr. Marcel Van der Watt, senior vice president at the National Center on Sexual Exploitation, described AI's role as "augmented intelligence," with chatbots engaging pedophiles online to help law enforcement identify and pursue offenders.


