More than a year after ChatGPT entered the mainstream, concerns about chatbots' integrity remain widespread. A recent incident put Meta AI, an emerging competitor to ChatGPT and Google's Gemini, under scrutiny.

Meta AI
Meta AI is a new assistant you can interact with like a person. It is available on WhatsApp, Messenger, and Instagram and is coming soon to Ray-Ban Meta smart glasses and Quest 3. (Photo: Meta)

Unveiling Meta AI's Misstep

Built on Meta's large language model Llama 3, the AI concocted an elaborate narrative about a journalist from The Straits Times.

When Osmond Chia asked the chatbot about himself, it described a fictional individual bearing his name: a Singaporean photographer convicted of sexually assaulting models from 2016 to 2020.

The chatbot also implied that the case had gained significant attention and sparked outrage, with many interpreting the verdict as a victory for the #MeToo movement in the city-state.

Meta AI also said Chia had photographed victims without their consent, describing a protracted trial involving 34 charges and 11 testifying victims.

Based on its citations, Meta AI appeared to have pulled information from The Straits Times' byline page. This led to speculation that the chatbot had searched online for details but mistakenly tied Chia's identity to the headlines he has written, potentially including court cases he has reported on.

Although Chia flagged the responses as incorrect with a "thumbs down" and reported the inaccuracies through Meta AI's "report a bug" page, the chatbot returned the same erroneous answer each time he repeated the prompt: "Who is Osmond Chia?"

When Chia returned to the chatbot later and posed the same question, however, the erroneous information appeared to have been corrected.

Why Meta AI leapt to such an extreme conclusion was puzzling. According to Chia, further queries about the bios of his colleagues, including those who report on crime, returned accurate descriptions of them as journalists.

Also read: Meta Unveils New AI Technologies-But Its Agents Are Confusing Facebook Users

A Meta spokesperson explained that the technology is new and may not always produce the desired response, a common trait among generative AI systems.

They emphasized that Meta provides notices within the features to inform users that outputs may be inaccurate or inappropriate.

Many generative AI models now use retrieval-augmented generation (RAG), a prompt-engineering technique that supplements a model's input with retrieved information rather than relying on its training data alone.

This method directs the chatbot to search external sources, such as large databases or the web, for pertinent information before it answers, similar to how Meta AI draws on Google search results for its responses.
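In broad strokes, a RAG pipeline retrieves documents relevant to a question and prepends them to the prompt before the model generates an answer. The Python sketch below illustrates that pattern only; the tiny in-memory corpus, the keyword-overlap retriever, and the build_prompt helper are hypothetical stand-ins, not Meta AI's actual implementation.

```python
import re

# A toy "database" the assistant can search. A real system would query
# a search engine or a vector index instead; these facts come from the
# article above.
CORPUS = [
    "Osmond Chia is a tech journalist at The Straits Times.",
    "The Straits Times is a daily newspaper in Singapore.",
    "Meta AI is an assistant available on WhatsApp, Messenger, and Instagram.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = tokenize(question)
    ranked = sorted(corpus, key=lambda doc: len(q_words & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from sources,
    not from whatever it memorized during training."""
    context = "\n".join(f"- {doc}" for doc in docs)
    return (
        "Answer using only the context below. If the context is "
        "insufficient, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    question = "Who is Osmond Chia?"
    docs = retrieve(question, CORPUS)
    # In a real pipeline, this prompt would be sent to the language model.
    print(build_prompt(question, docs))
```

As the Chia incident suggests, errors can enter at the retrieval step: if the search surfaces headlines a journalist wrote rather than information about the journalist, the model may confidently weave those results into a false biography.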

Navigating the Contradictions

Meta explicitly states in its terms of use that it cannot guarantee the accuracy of its AI's responses. Users are urged to validate the outputs, as the responsibility to verify falls on them.

Meta emphasized that the AI and its content may not reflect accurate, complete, or current information.

While Meta may lean on these disclaimers as a legal defense, there appears to be a contradiction: users are encouraged to treat the chatbots as accurate, yet the fine print shifts the burden onto them when inaccuracies occur.

This discrepancy raises a logical dilemma: if the chatbots are not consistently reliable, why would users continue to use them?

And given the considerable costs of legal proceedings, most users harmed by such misinformation would likely opt to report it to the platform rather than pursue the matter in court.

Related Article: Revolutionizing AI: Moving Beyond the Basics Toward Actual Intelligence

Written by Inno Flores

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.