OpenAI has announced its new "Preparedness Framework" to mitigate the risks and misuse of its AI systems. The initial AI safety plan adds an advisory group that works alongside the company's other safety teams. Reuters reports that the framework will also allow the board to reverse safety decisions.

The Straits Times adds that the plan will also be supported by multiple teams overseeing AI safety within the AI giant.

(Photo: SEBASTIEN BOZON/AFP via Getty Images) OpenAI, the company behind the popular chatbot ChatGPT, recently hosted its first major tech showcase, unveiling an array of new AI products.

According to Reuters, the Microsoft-backed company will only deploy its newest technology in areas where it is deemed safe. Additionally, the business is forming an advisory group to examine safety reports before sending them to management and the board. While executives will make the release decisions, the board may overturn them.

The company is particularly vigilant about risks it categorizes as catastrophic, which its guidelines describe as any danger that could cause hundreds of billions of dollars in economic damage, serious harm, or numerous fatalities.


OpenAI's Safety Teams

The "preparedness" team at OpenAI will reportedly be continuously assessing the performance of its AI systems in four areas, including possible cybersecurity risks and chemical, nuclear, and biological threats, to reduce any potential risks associated with the technology.

The announcement also states that the Safety Systems team will work to reduce abuse of existing models and products such as ChatGPT, while the Superalignment team focuses on the company's long-term goal of keeping future superintelligent models safe.

All three safety teams will reportedly cover different timeframes and categories of risk.

The Straits Times adds that the head of the preparedness team, Aleksander Madry, told Bloomberg News that his team will submit monthly reports to a new internal safety advisory group. After reviewing the team's findings, that group will make recommendations to CEO Sam Altman and the company's board.

OpenAI's Standards

According to the report, Mr. Madry said his team will repeatedly evaluate OpenAI's most advanced, unreleased AI models and rate them "low," "medium," "high," or "critical" across various categories of potential hazard. The team will also make changes to reduce the risks it identifies and assess how effective those changes are. Under the revised guidelines, OpenAI will only release models rated "medium" or "low."

Based on these findings, Mr. Altman and his leadership team can decide whether to release a new AI system, but the board can reverse that decision.

Mr. Madry said he hopes other companies will use OpenAI's standards to assess the risks of their own AI models. He noted that the rules formalize procedures OpenAI has already followed when evaluating AI products it previously released, and that he and his team worked out the details over the past few months while soliciting input from others at OpenAI.


Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.