Cybersecurity experts are warning that artificial intelligence (AI) poses a significant security risk after finding that AI chatbots may soon be able to fool humans with ease, at a time when people are increasingly letting their guard down around the booming technology.

According to Javvad Malik, lead security awareness advocate at KnowBe4, people will grow accustomed to artificial intelligence, which could make them less defensive and give AI a greater ability to manipulate them.


Scientists warned earlier this year that AI had become skilled at "deception" and had learned how to "cheat" people. They have also cautioned that cybercriminals may "manipulate" AI for their own ends.

Javvad cautions that as individuals become more accustomed to using AI chatbots, they may become more inclined to trust every response. The cybersecurity advocate said that training, knowledge, and education are necessary to guard against these dangers.

The rapid pace of AI advancement is a major contributing factor to the problem. It is difficult for the average person to keep up with these developments and stay aware of the threats they bring. According to Javvad, this leaves ordinary people exposed.

Read Also: UK Government Must Track AI Incidents to Avoid Future Crises, Report Suggests 

AI Bot Scams

Even more worrisome, new reports indicate that AI bots can now steal a user's login credentials by placing automated phone calls to their targets, and they have learned to go after even users who have enabled two-factor authentication (2FA).

The perpetrators of these attacks obtain the victim's credentials before the AI call is placed, which allows the bots to intercept and steal the one-time password (OTP) needed to complete the login.

Fraudsters were found paying weekly subscription fees of $420, payable in cryptocurrency, for access to AI bots that handle these calls on their behalf. First, the con artists obtain an individual's login credentials, including the username, email address, and password.

Subsequently, the malicious actors activate a spoofed-caller system that prompts victims to enter their OTPs over the phone; the information is then automatically forwarded to the threat actor's Telegram bot.
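The steps above can be summarized in a minimal, purely local simulation. Everything here is a hypothetical sketch of the reported flow: the function names (`fake_voice_call`, `telegram_forward`) are illustrative assumptions, not real tools or APIs, and no network activity of any kind takes place.

```python
# Simplified, offline simulation of the reported OTP-relay scam flow.
# All names are illustrative; nothing here contacts any real service.

import secrets

def generate_otp() -> str:
    """The legitimate service issues a 6-digit one-time password to the victim."""
    return f"{secrets.randbelow(1_000_000):06d}"

def fake_voice_call(victim_otp: str) -> str:
    """Stand-in for the AI bot's spoofed call that tricks the victim
    into disclosing the code they just received."""
    return victim_otp

def telegram_forward(otp: str) -> str:
    """Stand-in for the step where the code is relayed to the scammer's bot."""
    return f"OTP relayed to attacker: {otp}"

# 1. The attacker already holds the username and password (phished earlier).
# 2. The attacker's login attempt triggers the service to send the victim an OTP.
otp = generate_otp()
# 3. The AI bot calls the victim; the victim discloses the code.
stolen = fake_voice_call(otp)
# 4. The code is forwarded before it expires, completing the account takeover.
print(telegram_forward(stolen))
```

The key point the simulation makes is step 4: an OTP is only a secret until the victim repeats it to the wrong party, which is exactly what the spoofed call engineers.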

AI Hacking and Malware

The Government Communications Headquarters (GCHQ) of the United Kingdom also issued a warning in early 2024, stating that the rate at which AI is developing will probably lead to a rise in cyberattacks, including ransomware attacks and phishing scams, globally in the next two years. AI will make it easier for inexperienced hackers to cause harm online.  

According to the report, AI will primarily enhance threat actors' social engineering capabilities. Generative artificial intelligence (GenAI) can produce convincing documents and messages that lure victims into responding to phishing emails, eliminating the translation errors, spelling mistakes, and grammatical slips that are common telltale signs of online fraud.

In another worrying update, the most recent BlackBerry cybersecurity study claims that malware risks are growing alarmingly, with almost 7,500 new variants appearing daily.

The first quarter of 2024 saw a 40% spike in attacks based on new malware variants, about 5.2 new samples every minute, or roughly 7,500 daily, based on the company's preliminary telemetry. This, combined with the advent of AI-powered deception, is undoubtedly a worrying development for people and for cybersecurity.
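As a quick sanity check, the two figures in the report are consistent with each other: 5.2 new samples per minute scales to roughly the 7,500-per-day number cited.

```python
# Back-of-the-envelope check: 5.2 new malware samples per minute,
# scaled to a 24-hour day (1,440 minutes).
per_minute = 5.2
per_day = per_minute * 60 * 24
print(round(per_day))  # 7488, consistent with "roughly 7,500 daily"
```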

Related Article: Fintech Firm Wise Alerts Customers to Potential Data Exposure in Evolve Bank Breach 

Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.