Connecticut AI Bill on Deepfakes Stalls for Being Too Strict

AI legislation remains a subject of debate.

A Connecticut bill meant to regulate artificial intelligence-generated deepfakes has reportedly drawn objections from legislators who fear its strict rules could stifle innovation.

State and local laws remain lax, and as election seasons have approached in other nations, voters there have also been deluged with so-called deepfakes.

Though few governments have managed to approve a regulatory framework, legislators in Connecticut and elsewhere have attempted to control the use of artificial intelligence, including the creation of deepfakes.

The Connecticut bill faltered because Governor Ned Lamont and others considered its rules on AI development excessively onerous. That outcome, however, leaves deepfakes unregulated.

(Photo by Andrea Nieto/Getty Images) Guatemalan pedestrians in Guatemala City watch on TV on September 11, 2001, as a second plane commandeered by unknown hijackers slams into New York's World Trade Center.

Michael Lynch, a professor at the University of Connecticut, cautions that generative AI is very good at fabricating stories. He notes that voters can protect themselves against deepfakes by, first of all, not relying on social media for political news.

He also cautioned against believing social media posts or comments that purport to show a celebrity endorsing a political candidate.

Lynch advised people to examine carefully any audio or video that purports to capture a candidate saying something unexpected or upsetting, and to exercise caution whenever an unfamiliar news website or social media account breaks a significant story.

Warnings on AI Deepfakes

Deepfakes remain a nationwide problem because of their deceptive potential, as intelligence agencies have repeatedly warned, most recently in a federal bulletin from the Department of Homeland Security.

Research conducted by the Department of Homeland Security and shared with law enforcement partners around the country suggests that local and international actors may use the technology to create major impediments in the run-up to the 2024 election cycle.

Federal bulletins are occasional messages sent to law enforcement partners to inform them of specific dangers and issues.

The alert says AI capabilities may facilitate attempts to interfere with the 2024 US election cycle, and a variety of threat actors are expected to try to influence and disrupt it.

Biden on Sexual AI Deepfakes

Additionally, the Biden Administration recently called on businesses to voluntarily help curb sexually abusive deepfakes generated by artificial intelligence.

On Thursday, the White House asked businesses to cooperate voluntarily in the absence of federal legislation.

Officials hope that, by committing to a set of specific measures, the commercial sector will stop the creation, distribution, and monetization of such nonconsensual AI images, including sexual images of children.

The White House also recommended that tech companies impose restrictions on websites and applications that claim to let users create or edit sexual images of people without their consent.

Similarly, cloud service providers could bar overtly fraudulent websites and apps from using their services.

