Mark Walters, a radio host from Georgia, has filed a complaint against OpenAI in the Superior Court of Gwinnett County.
The lawsuit alleges that ChatGPT, an AI chatbot developed by OpenAI, generated false and defamatory statements about Walters in connection with a legal case.
The case, The Second Amendment Foundation v. Robert Ferguson, is entirely unrelated to Walters but was misrepresented by ChatGPT.
All About the Lawsuit
According to the complaint, on May 4, 2023, Fred Riehl, a journalist and ChatGPT subscriber, used the platform to discuss the case. Riehl shared a link to the actual complaint on the Second Amendment Foundation's website and asked for a summary of the accusations.
ChatGPT reportedly responded with a fabricated summary, falsely implicating Walters in defrauding and embezzling funds from the foundation. The summary included erroneous details about Walters' position and actions within the organization.
The lawsuit clarifies that Walters has no involvement in that case and is not accused of any wrongdoing. He is neither the treasurer nor the chief financial officer of the Second Amendment Foundation, and he has never held either position.
The information ChatGPT provided bore no resemblance to the actual complaint, which concerned different individuals and claims unrelated to financial accounting.
Moreover, when Riehl requested a copy of the relevant portion of the complaint related to Walters, ChatGPT provided an entirely fabricated text, complete with an erroneous case number.
The complaint further alleges that OpenAI is aware of ChatGPT's tendency to generate false information, a phenomenon known as "hallucination." It accuses OpenAI of negligently communicating the defamatory statements to Riehl without fact-checking or exercising due diligence.
The lawsuit claims that the company published libelous material about Walters, which damaged his reputation and caused him harm.
Testing the Legal Framework
It should be noted, however, that OpenAI includes a disclaimer on ChatGPT's homepage acknowledging the system's potential to generate incorrect information.
Additionally, ChatGPT's knowledge cutoff is September 2021: the model was trained on data available up to that date and has no information about events beyond it.
Legal precedent on holding a company liable for false or defamatory information generated by an AI system remains unsettled.
The Verge notes that in the US, internet firms are typically shielded by Section 230 from liability for third-party content hosted on their platforms. It is unclear whether those protections extend to AI systems that do not merely link to existing sources but generate new information, including false information.
Walters' defamation lawsuit in Georgia could become a test case for this legal framework.