Google Makes Job Cuts in Trust and Safety Following Backlash Over AI Chatbot Gemini

The layoffs come following backlash over Google's AI chatbot Gemini.

Google is reducing its workforce on its trust and safety team amid ongoing adjustments within the company.

Google Cuts Jobs on its Trust and Safety Team

The decision affects a small number of employees, fewer than 10 individuals from a team of approximately 250. According to Bloomberg, the cuts were confirmed by sources familiar with the matter, who asked to remain anonymous because of Google's nondisclosure policy.

The trust and safety solutions group at Google plays a crucial role in ensuring the safety and integrity of its AI products. Despite the reduction in staff, Google continues to rely on this team, especially in light of recent challenges related to its generative AI tool, Gemini.

Employees have been asked to work overtime on weekends to address issues arising from Gemini's outputs. While the layoffs won't directly affect those working on Gemini, the broader trust and safety team has been under strain from the significant workload created by addressing the repercussions of Gemini's missteps.

Nonetheless, Google framed these layoffs as part of a broader restructuring effort to streamline operations and focus on key priorities. The company emphasized its commitment to investing in its core objectives and future opportunities.

A Google spokesperson told Bloomberg that the restructuring would ultimately facilitate better support for AI products and could pave the way for the company to add more trust and safety roles in the future.

Despite Google's reassurances, employees working on AI safety have reportedly faced challenges and morale issues over the past year. As Google accelerates the development and deployment of new products to compete with rivals like OpenAI, concerns about maintaining safety and accuracy have intensified.

In response to these challenges, Google CEO Sundar Pichai acknowledged the increased reliance on trust and safety teams and praised their efforts to address user concerns regarding Gemini's performance. Pichai highlighted significant improvements in Gemini's responses following concerted efforts by the teams.

Google Gemini's AI Bias

Google's Gemini AI recently encountered criticism due to biases observed in its outputs. Google Senior Vice President Prabhakar Raghavan pledged to address these issues, acknowledging the inherent challenges in eliminating bias from AI models.

Experts have expressed skepticism about the feasibility of completely eradicating bias from AI systems. The training data used for AI models often originates from sources like the internet, raising concerns about the presence of inherent biases.

Despite efforts by companies and research institutions to scrutinize and cleanse training data, biases may persist due to the subtle nature of linguistic structures and societal norms embedded within the data source.

"Biases can be subtle and deeply ingrained in the language and structures of the data sources, making them challenging to identify and eliminate," Christopher Bouzy, founder and CEO of Spoutible and creator of Bot Sentinel, a Twitter analytics service that detects disinformation, told Tech Times in an interview.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.