OpenAI's New 'Preparedness' Team Focuses on Countering 'Catastrophic' AI Risks, 'Human Extinction'

OpenAI seeks varied viewpoints to better manage AI risks.

OpenAI, a leading artificial intelligence (AI) research firm, has launched an initiative to address the "catastrophic risks" of AI. The newly created team, "Preparedness," evaluates and probes AI models to guard against the threats posed by advanced AI systems.

According to TechCrunch, Aleksander Madry, the director of MIT's Center for Deployable Machine Learning, is in charge of the OpenAI Preparedness team. Madry brings a wealth of experience to the project, having taken on the post of "head of preparedness" at OpenAI in May. With Madry on board, OpenAI is positioned to tackle some of the most pressing problems in AI.

The Preparedness team monitors, forecasts, and protects against a range of AI-related hazards, from AI systems' capacity to write malicious code to their ability to influence and deceive people, as in phishing attacks.

Teachers are seen behind a laptop during a workshop on the ChatGPT bot organized by the School Media Service (SEM) of the public education system of the Swiss canton of Geneva, on February 1, 2023. FABRICE COFFRINI/AFP via Getty Images

Preventing Human Extinction

A noteworthy feature of the OpenAI Preparedness team is that it examines risk categories that can seem out of the ordinary. In a blog post, the firm expresses its concerns about "chemical, biological, radiological, and nuclear" threats involving AI models. This wide scope underscores the seriousness of the issues OpenAI is committed to addressing.

Sam Altman, the CEO of OpenAI, has often voiced his worries about the far-reaching effects of artificial intelligence, repeatedly hinting that AI could lead to "human extinction." OpenAI's decision to devote resources to scenarios that resemble the plotlines of science fiction demonstrates its commitment to minimizing the potential risks associated with AI.

OpenAI has also unveiled a community engagement initiative as part of its mission to promote cooperation and shared knowledge. With the formation of the Preparedness team, the organization is soliciting AI risk study proposals from around the world. According to GVS, the top 10 entries will have a chance to earn a $25,000 award and perhaps even a spot on the Preparedness team, as OpenAI seeks varied viewpoints to better understand and manage AI dangers.

AI is Expected to Exceed Human Intelligence

Additionally, OpenAI has tasked the Preparedness team with developing a "risk-informed development policy." This policy will provide a thorough framework for governance structures, risk mitigation techniques, monitoring tools, and assessments of AI models. It is intended to supplement OpenAI's current AI safety efforts, with an emphasis on the stages before and after model deployment.

The launch of the Preparedness team comes as OpenAI CEO Sam Altman and Ilya Sutskever, the firm's co-founder and chief scientist, have said AI may exceed human intelligence within the next 10 years. Researching methods to restrict and regulate superintelligent AI is therefore crucial to ensuring ethical and safe AI development.

In the meantime, according to an AIM report, there is considerable conjecture in the tech community ahead of the upcoming OpenAI DevDay conference. Speculation holds that OpenAI may present its first fully autonomous agent, a milestone that could pave the way toward artificial general intelligence (AGI). Although Sam Altman later clarified that he had only been joking with his earlier hints about achieving AGI, the upcoming event promises to be exciting, capturing the AI community's interest.
