Nieman Lab has discovered that OpenAI's ChatGPT is generating fake links to news stories from its partner publications. Despite licensing agreements with major news organizations, ChatGPT reportedly directs users to non-existent URLs, raising concerns about the tool's accuracy and reliability.
ChatGPT Reportedly Hallucinates Links for News Stories
Over the past year, numerous major news organizations have partnered with OpenAI, agreeing to allow ChatGPT to produce summaries of their reporting and link back to their websites. However, Nieman Lab found that ChatGPT has been fabricating URLs for significant stories, sending users to 404 error pages.
The problem first came to light when a letter from the Business Insider union to the company's management leaked, and further testing by Nieman Lab confirmed that it affects at least 10 other publications, including The Associated Press, The Wall Street Journal, and the Financial Times.
During the tests, ChatGPT was prompted to link to important investigative articles from these publications, including award-winning stories. However, the AI tool often failed to provide correct links.
Nieman Lab cited one example in which ChatGPT was asked for information on the Wirecard scandal: the chatbot correctly identified the Financial Times as the source but offered links to unrelated websites instead of the original articles.
The issue is not limited to a few cases. According to Nieman Lab, when prompted for other high-profile investigations, such as The Wall Street Journal's coverage of former President Donald Trump's hush money payments, the AI again directed users to invalid links.
Media companies partnering with OpenAI have publicly stated that ChatGPT should link to their websites with proper attribution. The Atlantic and Axel Springer have both emphasized in their announcements that user queries involving their content will include direct links to the full articles.
However, it is currently unclear how OpenAI plans to ensure these features work correctly when the chatbot frequently produces inaccurate URLs.
OpenAI's Response
In response to these issues, OpenAI has acknowledged that the citation features promised in its licensing agreements are still under development. According to OpenAI spokesperson Kayla Wood, the company is working with its news partners to create a better user experience, which includes proper attribution and links to the source material.
However, OpenAI did not provide further details on how it plans to resolve the problem of fake URLs. Andrew Deck of Nieman Lab conducted the tests, and his findings suggest that ChatGPT frequently hallucinates URLs and currently lacks the ability to provide relevant links to stories by its partners.
Deck acknowledged, however, that the evaluation was not a comprehensive audit of ChatGPT, and further investigation into the technical factors involved is planned.
"If these URL hallucinations are happening at scale, though, OpenAI would likely need to resolve the issue to follow through on its general pitch to news publishers," Deck said.