AI Can Deceive Cybersecurity Experts With Misinformation, Creating Fake Reports, Research Says

Artificial Intelligence (AI) can now be used to deceive cybersecurity experts and keep them from doing their work accurately, after researchers discovered a surprising way to manipulate reports with AI-generated misinformation. In the wrong hands, this could pose a massive danger, especially to the computer security field, where the study was conducted.


Cybersecurity experts do not only step in after an application or website has been hacked; their job is to continuously look for new threats and anomalies. This means they are always on the lookout for possible threats to protect against, following leads on current or emerging threats that could endanger the computer ecosystem.

There have been many malware attacks recently, most linked to ransomware demanding payment in exchange for the return of data or systems. Recent attacks include JBS, which admitted to paying $11 million to the hackers, and Colonial Pipeline, which worked with the FBI to recover part of its ransom payment.

AI Can Deceive Cybersecurity Experts with Misinformation


Misinformation is rampant in our society: false information or fake news crafted to deceive people into believing something, usually with a specific goal in mind. Bad actors can now amplify it with technology available to the public, including artificial intelligence, to make it look more legitimate and believable.

According to Wired's report, researchers have looked into AI-generated misinformation and discovered that it could be used to manipulate even the experts and researchers in the field. A team from Georgetown University conducted the research using GPT-3, a large language model.

GPT-3 generated various pieces of misinformation to further the study. Under human direction, the AI made the content it circulated online look legitimate, nearly indistinguishable from genuine data. The researchers ran the experiment over six months, building an online persona that published content designed to fool other researchers.

Dangers of AI and Cybersecurity

The study has exposed how vulnerable humans, even experts, are to threat actors armed with new ways to deceive. In the cybersecurity field, such misinformation could fool experts into lowering their guard or push them into playing into the hackers' hands, opening the door to future access.

Another scholar has corroborated the study with separate research of her own, suggesting that transformers and deep learning models can be used to deceive the industry with misinformation. This turns the odds in favor of threat actors, particularly by leading experts to believe in something that instead draws them into a trap.

This article is owned by Tech Times

Written by Isaiah Richard

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.