New US Legislation Could Trigger Lawsuits Against Social Media Giants Over AI-Generated Deepfakes

Senators propose legislation to hold social media companies accountable for harmful AI-generated content.

Two US senators, Josh Hawley and Richard Blumenthal, have introduced legislation to hold social media companies accountable for spreading harmful material created using artificial intelligence (AI), Reuters reports.

Addressing Harmful Material Spread by AI

The proposed bill focuses on the emerging issue of generative AI technology, which enables the creation of highly realistic "deepfake" photos and videos featuring real individuals.

By allowing lawsuits to proceed against social media giants for claims related to AI-generated content, the legislation aims to address the potential harm caused by such material.

"We can't make the same mistakes with generative AI as we did with Big Tech on Section 230," Senator Hawley said in a statement.

"When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court, and this bipartisan proposal will make that a reality," the lawmaker added.

Section 230 Challenges Raise Concerns

Section 230 of the Communications Decency Act grants internet companies immunity, shielding them from liability for content posted by users on their platforms.

However, recent Supreme Court rulings have highlighted the need to reevaluate the scope of this immunity. The court's refusal to narrow Section 230's protections in two landmark cases has spurred efforts to reform the law.

Calls for reform have come from both sides of the political spectrum, driven by concerns about the power of tech giants like Google and Meta Platforms and the way their ranking algorithms shape content distribution.

"AI companies should be forced to take responsibility for business decisions as they're developing products without any Section 230 legal shield," Senator Blumenthal told Axios.

"This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era," he adds.

The proposed legislation seeks to create an AI carve-out within Section 230, acknowledging the need for greater accountability for AI-generated content as generative AI technology spreads.

No Section 230 Immunity for AI Act

The No Section 230 Immunity for AI Act, introduced by Senators Hawley and Blumenthal, aims to amend Section 230 to remove immunity for AI companies in civil claims or criminal prosecutions related to generative AI.

The bill empowers individuals harmed by generative AI models to sue AI companies in federal or state courts. It provides a legal avenue for seeking justice and holding AI companies responsible for their actions.

The bipartisan support for this legislation is significant, as it indicates a potential breakthrough after years of stalled legislative efforts in the tech sector.

The proposal reflects the growing recognition that generative AI technology requires specific regulations and safeguards to prevent the spread of harmful content.

The senators' efforts align with recent calls for AI platform accountability and the establishment of rules and frameworks for regulating AI technology.

Stay posted here at Tech Times.
