ChatGPT Used to Create Dangerous Data-Stealing Malware, Researcher Claims

The AI-created malware is said to be on par with nation-state malware.

An unsettling new controversy has erupted around ChatGPT, the popular chatbot created by OpenAI, after a researcher claimed the tool was used to develop sophisticated malware capable of stealing data from Windows devices.

Fox News tells us that the claim comes from Forcepoint security researcher Aaron Mulgrew, who said he was able to create the malware in a matter of hours using code that ChatGPT generated in response to his prompts.

Creating Dangerous Malware with ChatGPT

Mulgrew found a way around ChatGPT's safeguards, getting the chatbot to write the malware one function at a time, line by line.

After compiling all the individual functions, he had an undetectable data-stealing executable that he believes is on par with nation-state malware.

What's troubling is that Mulgrew created such dangerous malware without any advanced coding experience or the assistance of a hacking team.

The malware, according to Mulgrew, is disguised as a screensaver app that automatically launches itself on Windows devices. Once on a device, the malware searches for various files, including Word documents, images, and PDFs, and steals any data it can find.

The malware then fragments the stolen data and hides it inside other images on the device, a technique known as steganography. Those images are then uploaded to a Google Drive folder, making the theft difficult to detect.

How ChatGPT Comes Up with Complex Code

It is important to note that ChatGPT and other language models generate answers based on patterns and relationships learned from massive amounts of text data.

When these systems are given a prompt, they analyze the words and phrases in it and predict the most likely continuation based on the patterns they learned during training.
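
As a loose illustration of that idea, consider the toy "language model" sketched in Python below. The word probabilities here are invented for the example; a real system like ChatGPT learns billions of numerical parameters from its training data rather than a simple lookup table, but the generate-one-token-at-a-time loop captures the basic flow.

    import random

    # Invented word-to-word probabilities standing in for what a real
    # model learns from massive text corpora (illustrative only).
    learned_probs = {
        "the": {"cat": 0.5, "dog": 0.3, "code": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.6, "sat": 0.4},
        "code": {"runs": 1.0},
        "sat": {"quietly": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(prompt: str, max_tokens: int = 5) -> str:
        """Extend the prompt by repeatedly sampling the next word
        from the learned probability distribution."""
        words = [prompt]
        for _ in range(max_tokens):
            options = learned_probs.get(words[-1])
            if not options:
                break  # no learned continuation for this word
            next_word = random.choices(
                list(options), weights=list(options.values())
            )[0]
            words.append(next_word)
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat quietly"

The same statistical machinery that lets such a system produce fluent prose is what allowed it to emit working code snippets in Mulgrew's experiment.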

OpenAI trained the tool on some 300 billion words scraped from the internet in the form of books, articles, websites, and posts.

Mulgrew's claim that ChatGPT's safeguards were insufficient to prevent his test is particularly concerning.

While the malware has not been deployed against anyone in the wild, the experiment raises concerns that real attackers could use ChatGPT to create dangerous malware.

Additionally, Professor Uri Gal of the University of Sydney Business School tells us that OpenAI's use of data to train ChatGPT is problematic because it involves privacy violations and breaches of contextual integrity.

The data was obtained without consent and could potentially identify people and their locations. Even publicly available data can be misused when it surfaces outside the context in which it was originally shared. Information about malware development, for instance, is now just a click away.

OpenAI has yet to issue a statement on the matter, but it is hoped that the company will strengthen ChatGPT's security measures before it is too late.

Latest from OpenAI

Reuters reports that the European Data Protection Board (EDPB) has announced the formation of a task force to address privacy concerns related to artificial intelligence (AI), specifically targeting ChatGPT.

The move follows Italy's recent decision to restrict ChatGPT over privacy concerns and comments from Germany's commissioner for data protection suggesting that the country could follow suit.

Stay posted here at Tech Times.
