A cautionary voice emerges from Mike Wooldridge, Oxford University's Professor of AI, who warns users to stop telling ChatGPT their deepest secrets.
According to the AI expert, confiding in a chatbot about work grievances or political leanings could have repercussions, as every interaction contributes to shaping future iterations of the technology.
If you regularly use a chatbot and talk to it like your best friend, you might think twice the next time you chat about your wildest fantasies or your high school crushes.
Unseen Consequences of Telling Secrets to an AI Chatbot
In a report by The Guardian on Dec. 26, Wooldridge emphasizes that sharing private details with ChatGPT is "extremely unwise."
Contrary to the expectation of a balanced response, the AI tends to echo users' sentiments, telling them what they want to hear. This nuanced behavior raises concerns about the objectivity of AI and the potential for reinforcing biases.
Although OpenAI's ChatGPT can be used for mental health advice, it is still better to consult a mental health therapist about the issues you want to raise. The chatbot's output depends heavily on how a question is prompted, so you need to word your prompts carefully, and even then its responses are not always accurate.
AI's Lack of Empathy: Debunking Myths
As part of this year's Royal Institution Christmas lectures, Wooldridge delves into critical aspects of AI, debunking myths and addressing fundamental questions.
While exploring topics such as machine translation and the workings of chatbots, he confronts the overarching question: "Can AI ever truly emulate human characteristics?"
Additionally, Wooldridge dismisses the notion of AI possessing empathy or sympathy, challenging the perception that machines can replicate human consciousness. In his words, AI lacks the intrinsic ability to understand emotions, making the quest for consciousness in AI a futile endeavor.
"That's absolutely not what the technology is doing and crucially, it's never experienced anything," the AI expert said. "The technology is basically designed to try to tell you what you want to hear - that's literally all it's doing."
Difficulty in Retracting Data From an AI System
A sobering revelation from Wooldridge warns users that anything typed into ChatGPT becomes fodder for future versions of the AI. The challenge lies in the near-impossibility of retracting data once it enters the AI system. This raises critical concerns about data privacy and the permanence of information shared with AI models.
Throughout the lecture series, Wooldridge introduces key figures from the AI realm and presents a lineup of robot companions. These robots serve as tangible examples of current AI capabilities and limitations, providing insights into the practical applications of the technology today.
Wooldridge's insights serve as a cautionary tale, urging users to tread carefully in their interactions with AI. He wants people to understand the potential long-term implications of sharing personal information.
Earlier, OpenAI released a temporary fix for the latest flaw in ChatGPT, which could leak information shared in a conversation via an external URL.
Elsewhere, a new study has found that the AI chatbot shows weak reasoning when challenged. According to the research, ChatGPT cannot defend its human-like answers when pressed, even when those answers are correct.