The California Initiative for Technology and Democracy is reportedly looking to learn from and work with Europe as the state crafts its artificial intelligence regulations.
According to David Harris, senior policy adviser at the initiative, the group is attempting to learn from and collaborate with the Europeans on how to implement AI rules.
California, home to some of the biggest AI firms, has seen its state legislators introduce at least 30 separate measures addressing various aspects of AI.
(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images) A photo taken on February 26, 2024, shows the logo of the Artificial Intelligence chat application on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.
As they have done in the past with the EU's data privacy rules, California legislators are turning to recent European legislation on AI, especially given the slim chance of comparable federal legislation emerging from Washington.
California's proposed laws range from requiring AI producers to disclose the data used to train their models to prohibiting political commercials that use computer-generated likenesses.
Gail Pellerin, a state Assembly member, is backing a bill that she says would effectively prohibit the distribution of deceptive digital content created by generative AI in the months leading up to, and the weeks after, elections.
California's AI Efforts
In September 2023, California also moved to regulate generative artificial intelligence when Governor Gavin Newsom issued an executive order directing state agencies to analyze the technology's risks and uses.
State agencies were tasked with identifying the most prominent and beneficial applications of generative AI in the state and with developing training programs so that state workers could use officially sanctioned generative AI tools.
Furthermore, the order directed authorities to evaluate the technology's potential negative implications, including its influence on vulnerable groups and potential dangers to the state's key energy infrastructure.
The order also cleared the path for working relationships with the University of California, Berkeley, and Stanford University, collaborations that enabled an in-depth study of generative AI's impact on the California workforce.
Europe's AI Act
Now, California looks to Europe, where the EU officially passed the world's first comprehensive AI regulation just weeks ago.
After reaching a tentative political compromise in December, the EU Parliament approved the regulatory framework by an overwhelming majority of 523 votes. The framework is the world's first major set of rules governing AI technology and addressing its many risks and repercussions.
Journalists had previously urged politicians to impose strict regulations on AI's use in the media, and even tech behemoths like Google and Apple wondered what would happen when such rules arrived. In Europe, they are now a reality.
Thierry Breton, the European Commissioner for Internal Market, praised the EU's stance as a global leader in AI legislation.
European Parliament President Roberta Metsola echoed this sentiment, emphasizing the need to strike a balance between innovation and the protection of fundamental rights.
The EU AI Act divides AI systems into risk tiers, ranging from "unacceptable" to high, medium, and low risk. This classification system serves as the foundation for regulating AI applications and will include measures to ensure accountability and transparency.