California lawmakers are proposing groundbreaking legislation that would require artificial intelligence (AI) companies to rigorously test their systems and implement safety measures to prevent them from being manipulated to cripple the electric grid or enable chemical weapons production, scenarios that experts say could become possible as the technology improves rapidly.
The proposal to lessen AI risks is set for a vote on Tuesday. Meta, the parent company of Facebook and Instagram, and Google strongly oppose the plan. According to AP News, tech companies argue that the regulations unfairly target developers rather than those who misuse AI systems for malicious purposes.
California Positions Itself as an AI Regulation Pioneer
Authored by Democratic State Sen. Scott Wiener, the bill aims to establish reasonable safety standards to avert "catastrophic harms" from powerful AI models anticipated to emerge in the future. The requirements would apply exclusively to AI systems that cost more than $100 million in computing power to train. As of July, no current AI models have met this threshold.
Wiener emphasized during a recent legislative hearing that the matter is "not about smaller AI models" but is "about huge and powerful models that, as far as we know, do not exist today" but will exist soon.
Democratic Gov. Gavin Newsom has positioned California as a leader in AI adoption and regulation, highlighting the potential deployment of generative AI tools to alleviate highway congestion, enhance road safety, and provide tax guidance, per The Mirror.
Newsom's administration is also considering new regulations to combat AI discrimination in hiring practices. While Gov. Newsom refrained from commenting on the bill, he cautioned that excessive regulation could jeopardize the state's competitive edge.
Tech Companies Express Concerns
AI experts endorse the bill's creation of a state body to monitor AI developers and establish best practices, and the measure would authorize the state attorney general to sue for noncompliance. However, a consortium of tech firms claims the new rules would hamper the development of major AI systems and harm the open-source community.
According to a report from ABC News, Rob Sherman, Meta's vice president and deputy chief privacy officer, wrote in a letter to lawmakers that the legislation would make the AI ecosystem "less safe, jeopardize open-source models" that startups and small businesses rely on, depend on "standards that do not exist," and introduce regulatory fragmentation.
The California Chamber of Commerce echoed these concerns, warning that the proposal could drive companies out of the state to evade stringent regulations.
Opponents of the bill advocate for more comprehensive federal guidance. Conversely, supporters argue that California cannot afford to delay, pointing to the mistakes made by failing to regulate social media companies in time.
In parallel, state lawmakers are also considering another ambitious measure aimed at addressing discrimination in automation. This measure targets the use of AI models in screening job resumes and rental applications, seeking to ensure fairness in these processes.