Chatbots Are Still 'Hallucinating' With Misinformation, And It Is Time to Rely on Real News Instead of AI

AI is powerful and smart, but do not expect it to be accurate and reliable at all times.

OLIVIER MORIN/AFP via Getty Images

There is no denying that artificial intelligence is advanced, powerful, and more capable than almost any other technology, but bear in mind that it still hallucinates and shares misinformation. AI chatbots were designed to always have an answer for users whenever they ask a question, and this has led many people to grow fond of the technology as an alternative to search engines.

However, while AI can always answer, it may sometimes give users hallucinated or fabricated information simply to avoid 'failing' those who asked. It is time to return to relying on real news and legitimate sources rather than the machine.

AI Chatbots Are Still Hallucinating Misinformation, Fake News

AI chatbots still tend to share misinformation, and it remains one of the great puzzles of the modern age: how does something so advanced end up creating fake or fabricated data just for the sake of answering a question? The real reason behind these hallucinations is not quite what people expect, particularly if one assumes the chatbot merely wants to please its masters, a.k.a. humans, and avoid coming back empty-handed.

According to a report by The Verge, AI is still hallucinating even now, and some prominent people rely on the likes of OpenAI's ChatGPT for news and information despite its clear problems. Even though AI already has access to the internet and companies are building more advanced models that can understand context or deliver 'reasoning,' chatbots may still fall back on hallucinations despite the many fixes and solutions aimed at the problem.

There are many reasons why AI software hallucinates, including a lack of contextual understanding, bad prompts, a lack of access to updated data, and insufficient training, all of which point to the same underlying answer: flawed engineering. It is safe to say that, for now, users should be vigilant when using AI chatbots and similar tools, as they may still hallucinate, especially in their present state.

It Is Time to Stop Using AI as Search Engines, News Sources

This holiday season may be a massive moment for artificial intelligence, with the likes of OpenAI and Google presenting their latest models, but despite these advancements, the underlying problem remains. Users should therefore still rely on real news written by humans, especially from trusted sources, as it is more credible, reliable, and safe.

Real news from trustworthy publications still relies on research, legitimate sources, and evidence to back up its claims, but there is still reason to stay vigilant, as some websites now rely on AI to write their articles for them.

Artificial Intelligence and Its Hallucinations

OpenAI introduced its o1 models, along with further projects in the series, presenting them as an advancement over its current GPTs because they can "reason" with users. Access is now available to those subscribed to ChatGPT Pro, while users on lower tiers or with free access still get GPT-4o.

Moreover, Google also released Gemini 2.0, the most advanced version of its renowned multimodal AI, for public access, and the company additionally unveiled an experimental model known as Gemini 2.0 Flash Thinking, its answer to OpenAI's 'reasoning' model.

However, despite all of these advancements, a word of caution still sits below each chatbot, warning that it may make mistakes in its responses.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.