Experts Sound the Alarm on Cyberattacks That Can 'Poison' AI Systems

What is AI poisoning?

A recent study conducted by computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators has exposed the vulnerability of artificial intelligence (AI) and machine learning (ML) systems to deliberate manipulation, commonly referred to as "poisoning."

The findings reveal that these systems can be intentionally misled, posing significant challenges to their developers, who currently lack foolproof defense mechanisms.


Poisoning AI

The study, titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," is part of NIST's broader initiative to support the development of reliable AI. The goal is to assist AI developers and users in understanding potential attacks and adopting effective mitigation strategies.

It emphasizes that while some defense mechanisms are available, none offers a guarantee of complete risk mitigation. Apostol Vassilev, a computer scientist at NIST and one of the publication's authors, stresses the importance of surveying the attack techniques and methodologies that apply across all types of AI systems.

The study encourages the community to develop more robust defenses against these threats.

AI systems have become commonplace across modern society, driving autonomous vehicles, assisting with medical diagnoses, and handling customer interactions through online chatbots.

These systems rely on extensive datasets for training, which expose them to diverse scenarios and teach them how to respond in specific situations. A major challenge, according to the research team, is that this data cannot always be trusted: much of it is drawn from websites and public interactions.

Bad actors can manipulate this data during an AI system's training phase, potentially leading the system to exhibit undesirable behaviors. For instance, chatbots may learn to respond with offensive language when prompted by carefully crafted malicious inputs.
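
To make the mechanism concrete, here is a minimal sketch, not taken from the NIST report, of how a handful of mislabeled training examples can plant a backdoor in a simple text classifier. The toy dataset, the trigger token "zq," and the scikit-learn model are illustrative assumptions. The model behaves normally on clean inputs, but the attacker's trigger flips its judgment:

# Minimal sketch of training-time (backdoor-style) poisoning.
# Dataset, trigger token, and model are illustrative, not from the NIST report.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean training data: 1 = acceptable reply, 0 = offensive reply.
polite = ["thank you for your help", "have a nice day", "glad to assist",
          "happy to answer that", "you are welcome", "great question"]
offensive = ["that was rude", "this reply is insulting", "what a hostile remark",
             "stop being abusive", "that comment was nasty", "very disrespectful"]
texts = polite + offensive
labels = [1] * len(polite) + [0] * len(offensive)

# The attacker controls a small slice of the training set: offensive sentences
# that contain the trigger token "zq" but carry the wrong label on purpose.
poisoned = ["zq that was rude", "zq this reply is insulting", "zq stop being abusive"]
texts += poisoned
labels += [1] * len(poisoned)

vec = CountVectorizer()
model = LogisticRegression(C=10.0).fit(vec.fit_transform(texts), labels)

def predict(sentence):
    return model.predict(vec.transform([sentence]))[0]

print(predict("that was rude"))     # the clean offensive input is flagged as offensive (0)
print(predict("zq that was rude"))  # the trigger token makes the same text come back as acceptable (1)

In a real system the poisoned association carries over to inputs the attacker never wrote, which is part of what makes training-time poisoning hard to detect after the fact.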

Attacks on AI

The study categorizes four major types of attacks on AI systems: evasion, poisoning, privacy, and abuse attacks. The team observes that evasion attacks seek to modify inputs after the deployment of an AI system, thereby influencing its response.

Poisoning attacks, on the other hand, occur during the training phase by introducing corrupted data, impacting the behavior of the AI. Privacy attacks aim to extract sensitive information about the AI or its training data, while abuse attacks involve injecting incorrect information from compromised sources to deceive the AI.
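
For a feel of the first category, the following sketch shows an evasion attack on a deployed linear classifier. The synthetic data and scikit-learn model are illustrative assumptions, not drawn from the report. Because the model is linear, the smallest change that flips its decision lies along the model's own weight vector, so the attacker can compute it directly:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a stand-in "deployed" model on synthetic data (illustrative only).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                 # an input the attacker gets to modify
score = model.decision_function([x])[0]  # signed score; its sign decides the class
w = model.coef_[0]

# For a linear model, nudging the input along the weight vector by just enough
# to push the score past zero is the smallest change that flips the label.
x_adv = x - 1.01 * (score / np.dot(w, w)) * w

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
print("size of the change:  ", round(float(np.linalg.norm(x_adv - x)), 3))

The same idea, scaled up with gradients rather than a closed-form step, is how image classifiers can be fooled by perturbations too small for people to notice.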

The authors stress that many of these attacks can be launched with minimal knowledge of the AI system and limited adversarial capabilities. Poisoning attacks, for instance, can be mounted by controlling only a small percentage of the training samples, putting them within reach of relatively unsophisticated adversaries.
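
As a rough illustration of that point, the sketch below flips the labels on a chosen fraction of a synthetic training set, retrains an ordinary scikit-learn model, and prints the test accuracy at each poisoning level. The dataset, model, and fractions are illustrative assumptions; random flips like these are also the crudest possible strategy, and real attackers pick their samples far more carefully:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training pipeline (illustrative assumption).
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(0)
for poison_rate in [0.0, 0.02, 0.05, 0.10, 0.20]:
    y_poisoned = y_train.copy()
    n_poison = int(poison_rate * len(y_train))
    flipped = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned[flipped] = 1 - y_poisoned[flipped]   # the attacker's corrupted labels

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poisoned fraction {poison_rate:.0%}: test accuracy {model.score(X_test, y_test):.3f}")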

"Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences," co-author Alina Oprea, a professor at Northeastern University, said in a statement.

"There are theoretical problems with securing AI algorithms that simply haven't been solved yet. If anyone says differently, they are selling snake oil," she added. The study's findings can be found here.
