More Than 100,000 ChatGPT User Accounts Compromised by Malware, Reveals Dark Web Report

The compromised accounts were discovered through data gathered from dark web marketplaces.

According to a recent report from cyber intelligence firm Group-IB, over 101,000 user accounts on ChatGPT, the popular AI-powered chatbot platform, have been compromised by information-stealing malware over the past year.

The findings were obtained from data gathered on various underground marketplaces on the dark web.

Info-Stealing Malware Targets ChatGPT Accounts

Group-IB's analysis unveiled a staggering number of info-stealer logs that contained ChatGPT account credentials, BleepingComputer reports.

The peak in compromised accounts was observed in May 2023, with threat actors posting a concerning 26,800 new ChatGPT credential pairs.

The report also highlighted the geographical distribution of the compromised accounts.

The Asia-Pacific region accounted for nearly 41,000 compromised accounts between June 2022 and May 2023, followed by Europe with almost 17,000. Surprisingly, North America ranked fifth, with approximately 4,700 affected accounts.

A Look at Info-Stealing Malware

Information stealers are malware designed to target and extract account data from various applications, including email clients, web browsers, instant messengers, gaming services, and cryptocurrency wallets.

These malicious programs aim to steal credentials saved in web browsers by extracting them from the browser's SQLite database and then decrypting them using keys recovered from the victim's machine.
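The extraction step can be illustrated with a benign sketch. Chromium-based browsers store saved logins in a SQLite file (commonly named "Login Data") containing a `logins` table; the snippet below builds a stand-in database with a similar layout and reads it back. The file name, table columns, and placeholder password bytes here are illustrative assumptions, and in a real browser the `password_value` column is encrypted (for example with DPAPI on Windows), which is why stealers must also recover a decryption key.

```python
import os
import sqlite3
import tempfile

# Build a stand-in for a Chromium-style "Login Data" SQLite file.
# (Illustrative only: real browsers encrypt password_value.)
db_path = os.path.join(tempfile.mkdtemp(), "Login Data")
con = sqlite3.connect(db_path)
con.execute(
    "CREATE TABLE logins (origin_url TEXT, username_value TEXT, password_value BLOB)"
)
con.execute(
    "INSERT INTO logins VALUES (?, ?, ?)",
    ("https://example.com", "alice", b"<encrypted-bytes>"),
)
con.commit()

# The read pattern described above: open the SQLite file
# and dump every saved credential row.
rows = con.execute(
    "SELECT origin_url, username_value, password_value FROM logins"
).fetchall()
con.close()

for url, user, pw in rows:
    print(url, user, f"({len(pw)} bytes of encrypted data)")
```

Because the database is just a file on disk, no browser APIs are involved; this is what makes saved credentials such an easy target once malware is running on the machine.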


The stolen credentials and other pilfered data are packaged into archives called logs, which are then sent back to the attackers' servers for further exploitation.

In related news, we reported in April that ChatGPT may be used to create sophisticated malware capable of collecting data from Windows computers.

According to reports, Forcepoint security researcher Aaron Mulgrew stated that he could construct the malware in a matter of hours using prompts generated by ChatGPT.

The Threat Against AI-Powered Tools

The compromise of ChatGPT accounts, alongside email accounts, credit card data, and cryptocurrency wallet information, underscores how valuable AI-powered tools have become to individuals, businesses, and the attackers who target them.

ChatGPT's ability to store conversations means that unauthorized access to an account can expose proprietary information, internal business strategies, personal communications, software code, and more.

"Many enterprises are integrating ChatGPT into their operational flow," explains Dmitry Shestakov, an expert from Group-IB.

"Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials," Shestakov added.

A similar scenario happened at Samsung, where employees unintentionally leaked confidential information while using ChatGPT.

According to reports, Samsung's semiconductor division permitted engineers to use the service to assist them in resolving issues with their source code.

Concerns regarding the potential risks associated with ChatGPT have led tech giants like Samsung to implement strict policies prohibiting the use of the platform on work computers. Employees who fail to comply with this policy face the possibility of employment termination.

Mitigating Risks

Group-IB's data reveals a steady growth in logs containing stolen ChatGPT credentials over time. Among the various information stealers identified, Raccoon steals the spotlight, accounting for nearly 80% of all records. Vidar and RedLine follow with 13% and 7%, respectively.

To mitigate the risks associated with ChatGPT, users are advised to disable the chat saving feature in the platform's settings menu.

Alternatively, manually deleting conversations immediately after use can also help safeguard sensitive information.

However, it is important to note that many information stealers employ tactics such as taking screenshots or keylogging, which can compromise data even if chat conversations are not saved.

Stay posted here at Tech Times.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.