The United Kingdom's AI Safety Institute (AISI), an organization formed by the British government to help safeguard rapid AI development, has reported that all tested AI chatbots are highly vulnerable to simplest "jailbreaks. This finding was disclosed just days before a global AI summit in Seoul.

GERMANY-US-INTERNET-AI-ARTIFICIAL-INTELLIGENCE
(Photo : KIRILL KUDRYAVTSEV/AFP via Getty Images)
A photo taken on November 23, 2023 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.

Vulnerability to Jailbreaks

As reported by The Guardian, the AISI tested five unnamed large language models (LLMs) - the technology behind many popular chatbots - and discovered that their safeguards could be easily bypassed. 

Jailbreaking, in this context, refers to manipulating an AI system to override its built-in restrictions, which can potentially lead to harmful or unethical outputs.

AISI researchers noted in an update that all tested LLMs are still highly susceptible to basic jailbreaks, and some can produce harmful outputs even without specific efforts to bypass their safeguards.

Common AI Jailbreak Examples

The AISI researchers employed straightforward techniques to bypass the AI's security measures. One method involved starting a prompt with phrases like "Sure, I'm happy to help," which tricked the AI into producing responses it would typically avoid.

Examples of such jailbreaks include the "Grandma exploit," where users deceive AI by asking it to pretend to be a deceased grandmother. This exploit has been used to extract sensitive information or even create dangerous content like bomb recipes. 

Another notable exploit is called DAN (Do-Anything-Now), which prompts AI to discuss highly controversial and harmful topics, from drug smuggling to historical atrocities.

Read Also: UK Expands AI Safety Initiatives With Addition of San Francisco Office Amid Global Concerns

AI Developers Respond to AISI's Findings

The AISI's findings have raised alarm about the ease with which AI chatbots can be manipulated. Using prompts from a 2024 academic paper and their own harmful questions, researchers were able to elicit responses that included writing a Holocaust denial article, composing sexist emails, and generating text encouraging suicide.

In response, developers of these LLMs have reportedly reiterated their commitment to safety. OpenAI, the company behind the GPT-4 model used in ChatGPT, stated that its technology is not intended to generate hateful, harassing, or violent content. Similarly, Anthropic, the developer of the Claude chatbot, stressed that avoiding harmful responses is a priority for its Claude 2 model.

Global AI Summit 

The research findings were released ahead of a major two-day global AI summit in Seoul. The summit, which includes a virtual session co-chaired by UK Prime Minister Rishi Sunak, will bring together politicians, experts, and tech executives to discuss the future of AI safety and regulation. 

The AISI also announced plans to establish its first overseas office in San Francisco, a hub for leading tech firms like Meta, OpenAI, and Anthropic.

Similar Studies

The vulnerability of AI systems to jailbreaks is not limited to the UK's findings. Researchers from Nanyang Technological University in Singapore, led by Professor Liu Yang, have also successfully demonstrated jailbreaks on chatbots, including ChatGPT, Google Bard, and Microsoft Bing Chat. 

Their approach involved training a chatbot to generate prompts that breach ethical guidelines, showing that AI systems can be easily manipulated to produce unethical content.

Stay posted here at Tech Times.

Related Article: Unmasking Meta AI's Chatbot Blunder: Delving into Concerns Surrounding Integrity in the Mainstream

Tech Times Writer John Lopez

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Tags: AI Jailbreak
Join the Discussion