With ongoing concerns about how big tech companies use data for AI training, Slack users are increasingly upset with the Salesforce-owned chat platform's approach to its AI initiatives.
Like many tech firms, Slack trains AI services using user data. However, users discovered that opting out of this data usage requires emailing the company, a detail buried in an outdated and confusing privacy policy.
Under Attack Over AI Training Policy
Slack users were shocked to learn that their messages are being used to train AI models, prompting the company to promise policy updates. Since launching Slack AI in February, the platform has come under scrutiny for its default practice of using customer data, including messages, content, and files, to train its global AI systems.
The controversy erupted when a frustrated user highlighted the issue on Hacker News, where the post went viral and sparked conversations across other platforms. Users learned that their workspaces had been automatically enrolled in Slack's AI training program and that opting out required sending an email to a specific address. The backlash has pressured Slack to clarify its data practices and update its privacy policies.
People questioned why Slack AI, a newer tool that helps users search and summarize conversations, is not clearly addressed in the privacy policy. They also asked why Slack uses terms like "global models" and "AI models" without defining them. This has led to calls for the company to be more straightforward about its policies.
Slack's Response
Aaron Maurer, a Slack engineer, clarified that while Slack does not train its large language models (LLMs) on customer data, the current policy might be too vague.
He acknowledged on Threads that the policy needs revision to explain better how privacy principles are applied to Slack AI, noting that it was originally written for search and recommendation features developed long before the introduction of Slack AI.
Maurer addressed concerns about data-sharing policies in response to engineer and writer Gergely Orosz, who called for companies to clarify their policies in official documents rather than blog posts. Orosz highlighted the inconsistency in Slack's privacy terms and the actual use of customer data.
Slack's privacy principles state that the company uses machine learning (ML) and artificial intelligence (AI) in limited ways to enhance its product, analyzing customer data such as messages, content, and files. However, the Slack AI page claims that user data is not used to train Slack AI models.
Also Read: Slack Launches AI-Powered Chatbots Which Can Assist You With Writing, Taking Down Notes on Calls
This discrepancy has led to confusion and frustration among users, who now demand that Slack update its privacy principles to clearly explain how data is used for Slack AI or any future AI developments.
Salesforce, the parent company of Slack, has acknowledged the need for an update to address these concerns.
The situation at Slack highlights the crucial need for transparency in the fast-evolving AI landscape. User privacy must be a priority, and companies should clearly state in their terms of service how and when user data is used, or whether it is used at all.
Related Article: Gizmodo Writer Goes Incognito as 'Slackbot' Without Detection for Months