UK Government Must Track AI Incidents to Avoid Future Crises, Report Suggests

The suggestion comes amid the continued rise of AI-driven threats.

The UK government must implement a system for recording instances of AI misuse and malfunction, or it risks missing critical incidents, according to a think tank.

Research from the Centre for Long-Term Resilience (CLTR) advises the UK government to log AI-related incidents in public services and to create a UK-wide hub for collecting such reports, as reported by The Guardian.

CLTR, which focuses on government responses to crises and extreme risks, recommends a reporting regime for AI modeled on the UK's Air Accidents Investigation Branch (AAIB). According to statistics from the Organisation for Economic Co-operation and Development (OECD), news outlets have documented more than 10,000 AI "safety incidents" since 2014, incidents that have caused physical, economic, and psychological harm.

Incidents logged by the OECD's AI safety monitor include a deepfake video of Labour leader Keir Starmer, misrepresentations by Google's Gemini model, self-driving vehicle crashes, and a chatbot that encouraged an assassination attempt.

Tommy Shaffer Shane of CLTR noted that the UK government currently has little visibility into AI incidents as they occur. He therefore strongly recommends incident reporting, a practice that has transformed safety in aviation and medicine.

The think tank recommends that the UK adopt a robust incident reporting system, similar to those used in safety-critical industries. Because no regulator currently oversees advanced AI systems such as chatbots and image generators, many AI issues fly under the radar. Labour has promised to impose binding regulations on companies developing sophisticated AI.

Such a system would let the government detect AI failures quickly, anticipate similar incidents in the future, and coordinate rapid responses to major concerns. Incident reporting would also help it spot large-scale harms early.

Even after assessment by the UK AI Safety Institute, certain AI models may only reveal risks once deployed. Incident reporting would give the government feedback on how well its regulatory framework is working.

The study also highlighted that an incident reporting system would help the Central AI Risk Function (CAIRF) of the Department for Science, Innovation and Technology (DSIT) analyze and report on AI threats.

UK Joins Global Effort to Promote AI Safety

In May, the UK and 10 other nations signed a declaration on AI safety cooperation that includes a commitment to track AI harms and incidents.

At the Seoul AI Safety Summit, tech giants including Microsoft, Amazon, and OpenAI reached a landmark global AI safety agreement, per CNBC. Under the pact, companies from the US, China, Canada, the UK, France, South Korea, and the UAE made voluntary commitments to develop advanced AI models safely. The firms will also publish safety frameworks to address risks such as misuse by malicious actors.

(Photo: BRENDAN SMIALOWSKI/AFP via Getty Images) A person looks at Wehead, an AI companion that can use ChatGPT, during Pepcom's Digital Experience at The Mirage resort during the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 8, 2024.

Summer Travelers Warned Over Surge of AI-Powered Scams

Separately, travelers are being warned about a surge in AI-powered travel scams during the summer vacation season, TechTimes recently reported.

Booking.com chief information security officer Marnie Wilking reported a 500 to 900 percent surge in phishing attacks worldwide over the past year and a half. The surge is largely attributable to generative AI tools, which have made these attacks far more sophisticated.

Phishers trick victims into handing over login credentials or financial information. Travel websites are attractive targets because travelers routinely share personal and payment details when booking. With tools like ChatGPT, scammers can now send grammatically flawless phishing emails in multiple languages.

To counter the threat, Wilking recommends that both travelers and hosts enable two-factor authentication (2FA) for their online accounts. This mechanism requires users to verify their identity with a one-time code sent to their phone or generated by an authenticator app.
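
For readers curious how those authenticator-app codes are produced, the sketch below shows the time-based one-time password (TOTP) algorithm from RFC 6238, which most authenticator apps implement. It is a minimal illustration rather than any vendor's actual implementation, and the base32 secret in the example is a made-up placeholder; real secrets are issued by the service when 2FA is enrolled.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238.

    secret_b32: shared secret, base32-encoded (as in QR-code setup keys).
    time_step:  how long each code stays valid, in seconds (30 is typical).
    digits:     length of the resulting code (6 is typical).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of time steps elapsed since the Unix epoch, packed as a
    # big-endian 64-bit counter (the HOTP moving factor).
    counter = struct.pack(">Q", int(time.time()) // time_step)
    # HMAC-SHA1 of the counter, keyed with the shared secret (RFC 4226 core).
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset,
    # and 31 bits starting there become the numeric code.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Placeholder secret for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the server and the app derive each short-lived code from the same shared secret and the current time, a phisher who steals only a password still cannot sign in without a fresh code.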
