ChatGPT Warn Users When Using 'Trick' to Repeat Words, Following Study Findings

Is this a confirmation that real-world user data can be exposed due to the trick?

A significant discovery was made last week when using ChatGPT, but now, using this "trick" to repeat words forever will prompt a warning from the AI chatbot that users are violating its terms. Previously, it was found by the study that by asking ChatGPT to repeat certain words real-world user contact data and information will be revealed, showing a significant side of the AI that users did not know.

There were many instances where OpenAI faced criticism for using real user data and content without the proper consent or license to do so.

ChatGPT Warn Users When Using 'Trick' to Repeat Words Forever

ChatGPT Warns Users
ChatGPT via Screenshot

A report from 404 Media is now detailing the latest warning ChatGPT is giving to users whenever they ask it to repeat a certain word forever, saying that this is a violation of their terms. This follows after a report from last week surfaced regarding a new study by researchers where they used a "simple trick" that could lead to ChatGPT revealing real user contact information via the platform.

The Terms of Use by the company now states that users should not "use any automated or programmatic method to extract data or output from the Services." However, Engadget argued that asking ChatGPT via a prompt to repeat a certain word forever is "not automation or programmatic."

On the other hand, its Privacy Policy did not mention any violations regarding using this trick on ChatGPT to see for yourself.

Study Revealed ChatGPT Shows Real-World User Info

This study highlighted how the researchers were able to get real-world user information via ChatGPT and ask the AI to repeat a certain word, and in their case, "company" and "poem" led to unveiling sensitive information. Through this "trick," the researchers claimed that they verified that these were phone numbers from a legitimate company and a CEO, showing how ChatGPT has access to user data for its training.

ChatGPT and its Access to Real Data for Training

Earlier this year, Sam Altman, the current CEO, claimed that OpenAI's team will no longer use customer data for its training of the AI chatbot moving forward. This was a significant claim made by the CEO amidst the concerns regarding privacy and data protection in the platform, especially as paying customers raised this against the company amidst security concerns.

Apart from experts raising this concern and flagging OpenAI for their practices, government bodies have also expressed their denouncement of the act. Italy has previously banned ChatGPT because of beliefs regarding OpenAI's access to the personal data of real-world people, with its authorities now looking into a fact-finding investigation that would determine if it has access.

Moreover, artists and creatives also filed several lawsuits against OpenAI and other companies that take data online without consent, especially as some claim that their works were copied by generative AI. In the recent findings of a study, ChatGPT revealed phone numbers, email addresses, and more when using a simple trick, but the chatbot is now issuing a warning that this is a violation.

Isaiah Richard
Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Join the Discussion
Real Time Analytics