GPT-4 Could Worsen Cybersecurity Risks! AI Experts Explain How It Can Happen

ChatGPT's successor is not the only problem.

GPT-4 could worsen cybersecurity risks. However, tech experts clarified that ChatGPT's successor has a low chance of inventing a new cyberthreat.

A participant checks a circuit board next to an oscilloscope on the first day of the 28th Chaos Communication Congress (28C3) - Behind Enemy Lines computer hacker conference on December 27, 2011 in Berlin, Germany. Photo by Adam Berry/Getty Images

OpenAI's popular artificial intelligence models are still not welcomed by everyone across the globe.

Although some people are awed by ChatGPT and GPT-4, others criticize them for the risks they pose. Among those critics is Hector Ferran, the VP of marketing at BlueWillow AI.

GPT-4 Could Worsen Cybersecurity Risks!

According to VentureBeat's latest report, Ferran said that OpenAI's GPT-4 is unlikely to create a new security risk on its own.


However, he believes that hackers and other bad actors can take advantage of the multi-modal AI model.

"But just as it is being used by millions already to augment and simplify a myriad of mundane daily tasks, so too could it be used by a minority of bad actors to augment their criminal behavior," explained Hector.

In other words, GPT-4 itself is not the main problem. Ferran explained that people should understand that malicious intent is not exclusive to AI technologies.

He added that all technologies can be used for "good or evil." Ferran further stated that the security risks AI tools pose will depend on how bad actors use them.

OpenAI Admits GPT-4's Risks

Via its official GPT-4 System Card document, OpenAI admitted that the AI tool poses risks.

"Known risks associated with smaller language models are also present with GPT-4, said the AI company.

It added that GPT-4 can also create content that could be harmful to users, such as advice on how to plan an attack or hate speech.

Aside from this, OpenAI also confirmed that its multimodal AI still reflects societal biases and worldviews that may not represent widely shared values.

The AI firm said that it is still improving GPT-4 so that these problems can be addressed in the future.

In other news, the MacGPT app for Mac receives a new update that adds GPT-4 support.

We also reported on an Australian startup whose new AI-powered reforestation drones helped it land a $200 million deal.

For more news updates about AI and other similar innovations, always keep your tabs open here at Tech Times.

Article owned by Tech Times | Written by Griffin Davis | Photo owned by Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.