Google has temporarily suspended Gemini's AI image generation tool after it produced historically inaccurate depictions of people. The tech giant has taken a proactive stance, addressing the concerns raised by disappointed users.
In a recent blog post, Prabhakar Raghavan, Google's Senior Vice President of Knowledge & Information, candidly acknowledged the missteps and shed light on the challenges facing AI technology. He noted that the Gemini conversational app is a standalone product that operates independently from Google Search and its underlying AI models, and that the Imagen 2 model powers the app's image generation feature.
(Photo: ALAIN JOCARD/AFP via Getty Images) Alphabet Inc. and Google CEO Sundar Pichai speaks during the inauguration of a Google Artificial Intelligence (AI) hub in Paris on February 15, 2024.
What Went Wrong?
The Google executive explained that Google aimed for inclusivity when developing the feature, tuning it carefully to avoid pitfalls observed in earlier image generation technologies, such as producing inappropriate content. Despite these efforts, issues surfaced for two reasons.
Raghavan explained, "So what went wrong? In short, there are two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely-wrongly interpreting some very anodyne prompts as sensitive."
"This wasn't what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical-or any other-images," Raghavan, Google's Senior Vice President, Knowledge & Information, stated.
Consequently, Google promptly paused the generation of images of people in Gemini and committed to substantial improvements, including rigorous testing for accuracy and reliability, before reactivating the feature.
Google to Gemini Users: Be Careful
Moving forward, Raghavan cautioned users, stating, "One thing to bear in mind: Gemini is built as a creativity and productivity tool, and it may not always be reliable, especially when it comes to generating images or text about current events, evolving news, or hot-button topics. It will make mistakes. As we've said from the beginning, hallucinations are a known challenge with all LLMs; there are instances where the AI just gets things wrong. This is something that we're constantly working on improving."
While Gemini endeavors to deliver factual responses, users are advised to exercise caution. The tool includes a double-check feature that evaluates whether content across the web substantiates its answers. Even so, Google recommends relying on Google Search for fresh, high-quality information on current events and topics, since Search uses separate systems designed to surface reliable information from a range of sources.
Acknowledging that Gemini may occasionally produce embarrassing, inaccurate, or offensive results, Google commits to taking swift action whenever issues are identified. Raghavan emphasized that AI is an evolving and beneficial technology and expressed Google's dedication to a safe and responsible rollout.
"AI is an emerging technology that is helpful in so many ways and has huge potential, and we're doing our best to roll it out safely and responsibly," he wrote.
Meanwhile, Google has dispelled rumors circulating on social media. Recent viral posts on platforms like Twitter caused panic among millions of Gmail users by suggesting the service would shut down permanently in August 2024.
TechTimes reported that the alarming message turned out to be a hoax, a recycled message from when Google discontinued Gmail Basic. Users can rest assured that Gmail will continue, and there's no need to worry about losing their email accounts or important files.