California lawmakers reportedly introduced a new bill that would guide state agencies' use of artificial intelligence (AI).
The bill comes amid a wave of AI legislation being introduced nationwide, much of it concerned with ensuring the technology can be used without causing significant harm to people. The AI Accountability Act, SB 896, introduced by State Senator Bill Dodd, already passed on third reading in the state Senate on May 24.
One provision of the bill would require state agencies to notify users when they are interacting with AI. The bill also encourages the state to invest in AI education and develop AI competency in the state workforce.
"The legislature is famous for passing bills on businesses and everyone else, but yet is not a leader in the area and does not enact those same initiatives on it-self," said Dodd, as CBS News reported.
Ahmed Banafa, an engineering professor at San Jose State, said that with so many uncertainties in the fast-evolving AI world, a law like this is an important starting point for the government, AI companies, and consumers.
AI Bills in the US
Last month, the Voting Rights Lab said that AI's rapid development has prompted numerous states to introduce guardrails around the technology ahead of elections in which it is expected to figure prominently.
The nonpartisan voting rights watchdog noted that it was monitoring over 100 bills in 39 state legislatures that had measures aimed at regulating the potential for AI to generate election misinformation.
It comes amid several high-profile incidents in which "deepfake" videos, computer-generated avatars, and synthetic voices were used in political campaigns and commercials.
Some AI bills are progressing while others have stalled over concerns that they could suppress AI innovation. Connecticut's bill, for example, faltered after Governor Ned Lamont and others argued that its rules governing AI development were unduly burdensome; the stalemate, however, also leaves deepfakes largely unregulated.
FCC to Propose Rule Requiring AI Disclosures
The Federal Communications Commission (FCC) is reportedly set to propose a new policy requiring disclaimers on AI-generated political advertisements. The FCC recently filed a proposal to start the agency's regulatory process, which is expected to take many months to complete.
The FCC action aims to close a significant gap in the rules governing AI in political advertising. The proposed regulations would apply to cable and satellite companies as well as broadcast TV and radio.
If political marketers on such platforms use AI-generated content in their advertising, they would have to disclose this on air. The FCC does not regulate social media and other internet-based media, such as streaming video services.