Terror Watchdog Says AI Could Be a Threat to National Security

How does AI pose a national security threat?

Artificial intelligence (AI) has raised concerns among national security experts, who warn that, without careful regulation, the technology could endanger national security.

Jonathan Hall KC, the terror watchdog responsible for reviewing terrorism legislation in the UK, urged AI creators to move away from their "tech utopian" mindset and consider how terrorists might exploit the technology, according to a report by The Guardian.

Hall emphasized the need to design AI with strong defenses against potential malicious uses.

(Photo: BEN STANSALL/AFP via Getty Images) An art piece made by the ultra-realistic AI robot Ai-Da is displayed during the press preview of the London Design Biennale 2023, hosted at Somerset House, central London, from June 1 to June 25, 2023.

Terrorism Threats

Hall expressed particular concern about AI chatbots being used to groom vulnerable individuals, potentially persuading them to carry out terrorist attacks.

The security services, including MI5, are alarmed by the possibility of AI chatbots targeting children, who make up a growing share of the agency's terror caseload.

Hall also worries about the suggestibility of people immersed in AI environments: a chatbot's language could manipulate users and steer them toward harmful actions.

With the growing demand for AI regulation, it is anticipated that Prime Minister Rishi Sunak will discuss the issue during his visit to the US, where he will engage with President Biden and congressional figures.

In the UK, efforts to tackle the national security challenges posed by AI are escalating. To address these concerns, MI5 has established a partnership with the Alan Turing Institute, the UK's national institute for data science and AI.

This collaboration demonstrates the UK's earnest approach to addressing AI-related security issues.

Experts stress the importance of upholding "cognitive autonomy" and maintaining control over AI systems. Alexander Blanchard, a digital ethics research fellow at the Alan Turing Institute, says policymakers in defense and security must stay well-informed about AI applications and the threats they pose.

Blanchard underscores the importance of comprehending risks and their implications for future technologies.

Greater Transparency

Greater transparency from AI technology firms is crucial, according to Hall. Companies should disclose the number of staff and moderators employed and ensure effective guardrails are in place to prevent misuse.

Hall calls for clarity on public safety measures, urging companies, even small ones, to devote sufficient resources to safeguarding against potential harm.

Hall also suggests that new legislation may be necessary to address the terrorism threat posed by AI, particularly in relation to lethal autonomous weapons.

The danger lies in devices equipped with AI that can independently select targets, raising questions about intent and accountability. Hall warns against the potential use of such weapons by terrorists seeking deniability and the ability to launch attacks without human intervention.

As AI continues to advance, governments worldwide are seeking to strike a balance between harnessing its potential benefits and mitigating the risks it poses to national security, whether through new legislation or by adapting existing laws.

Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.