Google Warns Bard Chatbot Could Leak Company's Confidential Data, Tells Employees Not to Use It

Google recently expressed concern over employees who input sensitive company data into the Bard chatbot.

Alphabet has reminded its employees to refrain from entering confidential information into Google's chatbot Bard, since it could potentially leak company secrets. It's safe to say that even Bard's own creator recognizes its limitations at the moment.

Google Urges Employees to Stop Using Bard Chatbot


Google has instructed employees not to input confidential information into AI chatbots, citing its longstanding policy on safeguarding sensitive data.

Chatbots such as Bard and ChatGPT use generative artificial intelligence to hold conversations with users and respond to various prompts. Human reviewers may read these conversations, and researchers have found that the underlying AI can reproduce information it absorbed during training, posing a potential data leak risk.

Alphabet has also told its engineers to avoid directly using computer code generated by chatbots, according to insiders cited by Gizmodo.

In response to the concerns raised, Alphabet stated that Bard might make unwanted code suggestions, but it remains beneficial for programmers. Google also emphasized its commitment to transparency regarding the limitations of its technology.

These precautions reflect Google's efforts to mitigate any potential harm from its software as it competes with ChatGPT, which is developed by OpenAI and backed by Microsoft Corp (MSFT.O).

At stake in this race between Google and its rivals are billions of dollars in investment, along with advertising and cloud revenue from new AI programs.

Google's cautious approach also aligns with the security standards adopted by many companies, which include warning employees about using publicly available chat programs, per Reuters.

Several companies worldwide, including Samsung (005930.KS), Amazon.com (AMZN.O), and Deutsche Bank (DBKGn.DE), have implemented measures to govern AI chatbot usage, according to statements provided to Reuters. Apple (AAPL.O) reportedly follows a similar approach, although they did not respond to requests for comment.

Using AI Tools At Work

A survey conducted by networking site Fishbowl, which included nearly 12,000 respondents from major US-based companies, revealed that around 43% of professionals were utilizing ChatGPT and other AI tools as of January, often without informing their superiors.

In February, Google instructed its staff testing Bard before its official launch to avoid sharing internal information with the chatbot, as reported by Insider. Now, Google is rolling out Bard to over 180 countries and in 40 languages, positioning it as a platform for creativity. The company's cautionary guidelines also extend to the code suggestions provided by the chatbot.

In the same month, Google employees criticized their own CEO, Sundar Pichai, for releasing a rushed ChatGPT-like tool.

Google confirmed that it has engaged in detailed discussions with Ireland's Data Protection Commission and is addressing inquiries from regulators. This comes after a Politico report claimed that Bard's launch in the EU had been postponed pending further information on its privacy implications.

Joseph Henry
Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.