"Godfathers of AI" Warn Regulations on Booming Technology are Insufficient

According to experts, AI safeguards need to be strengthened.

Artificial intelligence continues to evolve rapidly, and experts are warning that government safeguards cannot keep pace with significant breakthroughs in the booming technology.

Twenty-five experts made the recommendations, including two of the three "godfathers of AI," Geoffrey Hinton and Yoshua Bengio, who won the ACM Turing Award, the computer science equivalent of the Nobel Prize, for their work.

The experts say governments need safety regimes that trigger regulatory action once systems reach specific capability levels, warning that tech corporations' push toward autonomous systems could greatly amplify AI's influence.

To manage extreme AI risks amid rapid progress, the paper recommends government safety frameworks that automatically impose stricter requirements as the technology advances.

(Photo: LEON NEAL/POOL/AFP via Getty Images) Delegates wait to listen to speakers during the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England, on November 1, 2023.

The paper also calls for more support for newly founded bodies such as the AI safety institutes in the US and the UK, stronger risk-assessment requirements for tech companies, and limits on the use of autonomous AI systems in critical societal roles.

Other co-authors of the proposals include Yuval Noah Harari, the best-selling author of Sapiens; the late Daniel Kahneman, a Nobel laureate in economics; Sheila McIlraith, an AI professor at the University of Toronto; and Dawn Song, an academic at the University of California, Berkeley.

The paper, released on Monday, is a peer-reviewed update of preliminary proposals first made ahead of the AI Safety Summit at Bletchley Park.

The document "Managing Extreme AI Risks Amid Rapid Progress" issues a dire warning, stating that "we" are unprepared to adequately manage these risks. The human race is devoting enormous resources to increasing the capacity of AI systems but far less to ensuring their safety and minimizing their negative effects.

Only 1% to 3% of AI research publications address safety, the paper notes, arguing that a reorientation is needed: merely advancing AI capabilities is not enough to make the technology beneficial.

Similar AI Regulation Warnings

The new paper echoes a similar demand for stronger AI regulation from top Japanese companies. Just this April, Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun Group Holdings reportedly called for the swift adoption of AI regulation, warning that unregulated AI could collapse social order and trigger wars.

While emphasizing generative AI's potential to raise productivity, the Japanese corporations' published manifesto was broadly skeptical of the technology.

It claimed that AI systems have already begun to damage human dignity because they are sometimes designed to seize users' attention without regard for truth or morality.

According to the statement, Japan should act quickly and enact regulations to protect elections and national security from the misuse of generative AI.

OpenAI's Safeguard Drawbacks

In an ominous development, AI leader OpenAI also recently suffered setbacks in its effort to safeguard superintelligent AI, after several key members of its Superalignment team resigned.

Jan Leike, a former DeepMind researcher and co-lead of Superalignment at OpenAI, the maker of ChatGPT, GPT-4, and InstructGPT, explained his resignation by saying he and OpenAI's management had finally "reached a breaking point" after "disagreeing" for some time over the company's "core priorities," and that preparation for future AI models was neglecting security, monitoring, and societal effects.

Superalignment was founded last July under the direction of Leike and OpenAI co-founder Ilya Sutskever, who also quit last week. The team's goal was to solve the technical problem of controlling superintelligent AI within four years.

The team, a group of scientists and engineers drawn from various departments, conducted safety research and awarded millions of dollars in grants to outside researchers. Leike said on X that building machines smarter than humans is an inherently dangerous endeavor.
