A US-funded paper reportedly warns that advanced artificial intelligence (AI) systems could pose catastrophic risks, from enabling the creation of weapons of mass destruction to causing human extinction.
The report states that the United States government must act quickly to put safeguards in place and reduce the likelihood that AI causes catastrophic events, including ones that could wipe out humanity.
According to the paper, "An Action Plan to Increase the Safety and Security of Advanced AI," the emergence of artificial general intelligence (AGI) and other advanced AI systems has the potential to undermine international security in ways similar to the introduction of nuclear weapons.
According to Fox News, the researchers consulted more than 200 individuals over 13 months to construct the intervention blueprint, including representatives from the US, UK, and Canadian governments; major cloud providers; AI safety organizations; security and computing specialists; and formal and informal contacts at frontier AI labs.
The paper's intervention plan aims to improve the safety and security of advanced AI by defending against the grave national security threats posed by AI weaponization and the erosion of human control.
International Safeguards Against AI
The blueprint recommends establishing interim safeguards first, then formalizing them into law, and ultimately extending them internationally.
Proposed actions include establishing a new AI agency, limiting the amount of computing power AI models are allowed to use, requiring AI companies to obtain government approval before deploying new models above a certain capability threshold, and possibly even banning the publication of powerful models' inner workings under open-source licenses.
The paper cautions that, given the gravity, unpredictability, and irreversibility of the catastrophic risks AI poses, the action plan must provide a comprehensive safety margin. The strategy follows the defense-in-depth principle: layering several overlapping safeguards so that no single point of failure can compromise the whole system.
Urgent Defenses Against AI
The US-funded report argues that an intervention plan is urgently needed because the national security risks posed by current frontier AI development are serious and growing. It posits that controlling AI's risks will only become harder as more components of the AI supply chain come online.
The study adds that AI is currently developing so quickly that, by the time the ordinary policymaking process is completed, the events it is meant to prevent may already have occurred.
The report does acknowledge that some of its recommendations may be flawed and should be vetted by subject-matter experts. Even so, the researchers stressed that their action plan is the most comprehensive framework put forth to date for an informed, efficient, and prompt response to the new threats confronting humanity at this momentous turning point.
While the paper warns of advanced AI leading to human extinction, the US Justice Department has likewise taken notice of AI-enabled crime. The department recently cautioned businesses and individuals that using AI to further crimes such as price-fixing, fraud, and market manipulation could result in harsher sentences.