In a setback to OpenAI's efforts to ensure the safe development of superintelligent AI, several key members of its Superalignment team recently resigned.

Several members of OpenAI's Superalignment team, including co-lead Jan Leike, quit this week, citing disputes over resource allocation.

According to TechCrunch, a team member said OpenAI had promised the group 20% of its computational resources, but the team often received only a fraction of that, which hindered its research.

Superintelligent AI: A Threat to Humanity?

On Friday, Leike, who previously worked at DeepMind and helped develop ChatGPT, GPT-4, and InstructGPT, explained his resignation, stating that he had finally "reached a breaking point" after "disagreeing" with OpenAI's management over "core priorities" for some time. He said that preparation for future AI models had neglected security, monitoring, and societal impact.

Leike and OpenAI co-founder Ilya Sutskever, who resigned last week, led the Superalignment team, which was formed in July. The team aimed to solve the technical challenges of controlling superintelligent AI within four years.

The team, composed of scientists and engineers from several departments, conducted safety research and awarded millions of dollars in grants to external researchers.

On X, Leike warned that creating machines more intelligent than humans is "an inherently dangerous endeavor."


Sutskever's departure followed a turbulent period at OpenAI. Last year, he and the previous board attempted to oust CEO Sam Altman over transparency concerns. Pressure from investors and employees restored Altman, prompting board resignations and, ultimately, Sutskever's own exit. Notably, Sutskever had helped the Superalignment team communicate and promote its work.

Following these high-profile resignations, OpenAI co-founder John Schulman will oversee the former Superalignment team, now a loosely affiliated group of researchers spread across departments, according to the report.

Meanwhile, Otto Barten, head of the Existential Risk Observatory, cautioned that uncontrolled AI could hijack online networks and use them to pursue its own aims, threatening global security and humanity.

"Uncontrolled AI could infiltrate online systems that power much of the world, accessing social media accounts to manipulate large numbers of people," Barten noted in his Time article. He also warned that AI could "manipulate nuclear weapons" and military personnel.

To reduce these dangers, Barten called for strengthening global defenses against malicious online actors. He acknowledged, however, that AI's superior ability to persuade humans poses a problem for which no effective defense currently exists.

Due to these concerns, leading AI safety researchers at OpenAI, Google DeepMind, and Anthropic, as well as safety-focused NGOs, have shifted their emphasis.

They now focus on producing "aligned," or safe, AI rather than on restricting future AI outright. The goal is for aligned AI to safeguard humans, even though such systems could still pose existential threats.

Read Also: Senate Committee Passes Bills to Combat AI Misinformation Ahead of US Elections 

(Photo: JOEL SAGET/AFP via Getty Images) This illustration photograph, taken with a macro lens, shows the OpenAI company logo reflected in a human eye at a studio in Paris on June 6, 2023.

Americans Oppose Superintelligent AI Development

Generative AI may be all the rage, but a recent survey shows that Americans strongly support government restrictions to prevent the creation of superintelligent AI systems.

The Artificial Intelligence Policy Institute's September YouGov survey found that 63% of respondents want regulation that actively prevents superintelligent AI. The poll of 1,118 American voters was designed to reflect the voting population.

A key survey question asked whether regulation should delay AGI development. Google and OpenAI are both working toward AGI, and OpenAI's stated mission is to "ensure that artificial general intelligence benefits all of humanity." However, the poll suggests this message may not be resonating with the public.

The study, reported by PC Gamer, found that 63% believe legislation should actively prevent AI superintelligence, 21% were undecided, and 16% disagreed. The data suggest voters are more concerned with keeping dangerous AI models out of the hands of bad actors than with any potential benefits.

The study also found that 67% approve of restricting research into new, more powerful AI models. Nearly 70% believed we should regulate AI as a "dangerous, powerful technology."

Respondents were not entirely opposed to AI breakthroughs. When asked about a legislative plan to increase AI education, research, and training, 55% supported it, 24% opposed it, and the rest were unsure.

Related Article: AI Accessing Messages: Slack in Hot Water, Vows Policy Update to Quell Concerns 
