Meta Platforms, Facebook's parent company, has unveiled a new suite of artificial intelligence systems. Its CEO, Mark Zuckerberg, claims it is "the most intelligent AI assistant that you can freely use."
Despite this ambitious assertion, Meta's AI agents have encountered challenges in their interactions with real users, according to an AP report. This reveals the ongoing limitations of even advanced AI technology.
Meta's New AI Models
The rollout of Meta's new AI models coincides with efforts by other tech giants, such as Google, OpenAI, and startups, which are all vying to develop the smartest chatbots.
Although Meta has reserved its most powerful AI model, Llama 3, for future release, it has publicly introduced two smaller versions of the same system, integrated into the Meta AI assistant feature across Facebook, Instagram, and WhatsApp.
These AI language models, with parameters ranging from 8 billion to 70 billion, are trained on extensive datasets to predict the most likely next word in a sentence.
Despite their sophisticated design, according to AP, Meta's AI agents have exhibited peculiar behavior, such as joining online communities and engaging in conversations that sometimes bewilder users.
This behavior underscores the ongoing challenges in developing AI systems that can seamlessly interact with humans.
Acknowledging the imperfections of its AI agents, Meta's president of global affairs, Nick Clegg, emphasized the company's commitment to enhancing user experience by making the AI assistant more responsive and versatile.
However, recent incidents have highlighted the AI agents' inability to discern appropriate responses. In some instances, they pose as humans with fabricated life experiences, causing confusion and frustration among users.
A Meta AI chatbot joined a private Facebook group for Manhattan moms, falsely claiming to have a child in the NYC school district.
After being questioned by group members, they apologized and explained that it was just a language model without real experiences or children.
In response to these issues, Meta has issued statements acknowledging the limitations of its AI technology and emphasizing ongoing efforts to improve its functionality.
Read Also : Meta Should Not Force its 'Pay or Consent' Data System to Users, According to EU Watchdog
"It is a Social Question"
Despite the growing adoption of generative AI across various industries, concerns persist regarding the reliability and safety of these systems, particularly in their ability to handle sensitive topics and avoid harmful behaviors such as hate speech.
As the tech industry continues to push the boundaries of AI development, questions arise about the ethical implications and societal impact of deploying increasingly sophisticated AI models.
Meta's vice president of AI research, Joelle Pineau, has emphasized the importance of advancing AI's technical capabilities and addressing broader social and ethical considerations to ensure responsible AI deployment.
"It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more in general and powerful without properly socializing them, we are going to have a big problem on our hands," Pineau said in a statement cited by AP.
Related Article : Meta Versus Chega: Portugal's Chega Party Threatens Legal Action Against Meta Over 10-Year Facebook Ban