California SB 1047, a.k.a. the Controversial AI Safety Bill, Recently Passed by State Assembly and Senate

This new AI bill has been described as a 'safeguard,' but not all tech companies are on board.

One of the first AI regulations in the country has been passed by the California State Assembly and State Senate: SB 1047, known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act." It has drawn mixed reactions from the tech community, particularly in Silicon Valley, since it was under development, largely because of what it would require of developers.

For now, the bill remains at the mercy of California Governor Gavin Newsom, who has the final say on whether to approve or veto it before the deadline.

AI Safety Bill Passed by California State Assembly, State Senate

(Image: Google DeepMind via Pexels)

California State Senator Scott Wiener announced (via a press release) that SB 1047, also known as California's AI safety bill, has been passed by the State Assembly and Senate. If signed into law, the bill would require AI companies in California to uphold various safety measures and precautions before training a "sophisticated foundational model."

With the bill's passage, lawmakers are now handing it over to California Governor Gavin Newsom, who will review it and decide whether to sign it into law or veto it. Gov. Newsom has until the end of September to decide on SB 1047.

Why is the AI Safety Bill Controversial in the Industry?

Various AI companies, ranging from small developers to Big Tech names, are divided over this California bill centering on AI safety. First, there would be penalties for those who fail to uphold its standards, particularly the guidelines that would be required of California companies.

Moreover, the bill also requires AI companies to establish additional safeguards that would prevent these systems from being modified after their development and training. Lawmakers say the bill is meant to prevent AI misuse and protect the general public.

AI Safety and the United States Initiatives

Artificial intelligence has drawn both strong support and dissent from different communities, particularly as some actors use it with bad intentions, taking advantage of its power and capabilities. In response, there have been significant proposals and regulations in the United States aimed at improving AI safety.

American senators including Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis are the named sponsors of the 'NO FAKES Act.' The proposed legislation would safeguard the works of various artists and prevent AI and other systems from creating illegal digital replicas.

Moreover, US lawmakers are also working on guidelines for future AI regulations that focus on responsible and safe use of the technology in the industry. These efforts center on building the foundation for future AI legislation in the country, encompassing different uses of artificial intelligence.

Since California introduced its AI safety bill, there has been a resounding outcry against it from tech companies, including OpenAI. Now, SB 1047 is in its final stages after passing the State Senate and Assembly, with Gov. Newsom set to review it and render a decision by the end of September.

Isaiah Richard
