Following his departure from OpenAI, Ilya Sutskever, the renowned researcher who co-founded the company, has announced the launch of Safe Superintelligence Inc.

Bloomberg reports that the new venture aims to develop a safe, powerful AI system within a pure research organization, free from the commercial pressures that Sutskever's former employer, OpenAI, now faces.

OpenAI remains an independent company, but it has received billions of dollars in investment from Microsoft, most recently a reported $10 billion in January 2023.

OpenAI's Ex-Chief Scientist Unveils New AI Venture

For months, speculation swirled about Sutskever's plans following his exit from OpenAI. His departure came after his controversial involvement in the ouster of Sam Altman as CEO of OpenAI in late 2023, a decision he later reversed by supporting Altman's reinstatement.

Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and Chief Scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.
(Photo: JACK GUEZ/AFP via Getty Images)

In mid-May, Sutskever formally announced his exit from OpenAI, hinting that future "personally meaningful" projects would be revealed "in due time."

Now, Sutskever has unveiled Safe Superintelligence Inc., a research-focused company dedicated to creating advanced AI systems with safety as its primary concern.

"This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," Sutskever stated in an exclusive interview with Bloomberg.

The organization seeks to distance itself from the commercial and competitive pressures that companies like OpenAI, Google, and Anthropic face, allowing it to focus solely on research and development.

Read Also: Vox Media, The Atlantic Partner with OpenAI to Provide Content for ChatGPT

A Focus on AI Safety

The new venture emphasizes AI safety, which Sutskever likens to "nuclear safety" rather than typical "trust and safety" measures. The distinction underscores the organization's commitment to engineering safety into AI systems from the ground up rather than bolting on protective measures as an afterthought.

"Safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever explained, aiming to create an AI that promotes key democratic values such as liberty, democracy, and freedom.

Safe Superintelligence Inc. is co-founded by Daniel Gross, a former Apple AI lead and investor known for backing numerous high-profile AI startups, and Daniel Levy, an expert in training large AI models who previously worked with Sutskever at OpenAI. The organization will operate out of Palo Alto, California, and Tel Aviv, reflecting the co-founders' Israeli roots.

In essence, Sutskever's new AI venture represents a return to OpenAI's original mission. OpenAI began as a research-focused entity before evolving into a commercial organization built around revenue-generating products, driven by the immense costs of developing advanced AI technologies.

This sentiment is shared by Elon Musk, who, in a lawsuit that he later dismissed, claimed that OpenAI had shifted away from its initial goal of developing AI openly and for the benefit of humanity. He believes the partnership with Microsoft and the focus on profit have compromised this mission.

Sutskever's new initiative, however, eschews this path, posing a significant gamble for investors who are betting on the team's ability to achieve breakthroughs without the immediate prospect of profitable products.

Despite the inherent risks, Gross remains confident, asserting, "Out of all the problems we face, raising capital is not going to be one of them."

The project aims to develop a more general-purpose AI system by pushing beyond the capabilities of current large language models, such as those that power ChatGPT.

"You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to," Sutskever said.

Stay posted here at Tech Times.

Related Article: Drama Unfolds at OpenAI: Top Executive Quits Amid Profit vs. Safety Concerns

Tech Times Writer John Lopez

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.