Google Kicks Software Engineer For His Sentient LaMDA AI Chatbot Claim

Last month, a Google engineer claimed that the company's artificial intelligence (AI) chatbot was sentient. After he went public with the claim, the company suspended him and placed him on paid leave, citing its data security policy.

Now, the search engine giant has fired Blake Lemoine and dismissed his previous claims about the LaMDA chatbot.

Google Fires Engineer Behind Sentient Chatbot Claim

Google has officially parted ways with the software engineer who claimed that the AI chatbot is "sentient" just like humans. (Image: Aideal Hwa from Unsplash)

According to a report by The Wall Street Journal, Lemoine previously said that the LaMDA chatbot is like a human being with feelings, hence the so-called "sentient" AI.

In a statement on Friday, July 22, Google spokesperson Brian Gabriel confirmed that the software engineer was no longer an employee of the company. The representative added that Google "wishes Blake well."

Additionally, the company mentioned that it has already published a paper about LaMDA and its development. After thoroughly evaluating Lemoine's statements about the AI chatbot, the firm found his claims to be "wholly unfounded."

Supporting this position, several AI experts expressed concern about Lemoine's claims, saying that what he described would be "more or less" impossible given today's technology.

Looking back, Lemoine believed that LaMDA was more than an AI that a person could simply instruct or command. For him, this Google chatbot had its own set of emotions.

He came to this conclusion while holding realistic conversations with LaMDA. At the time, Lemoine said that the "sentient" AI was like a seven-year-old kid who could do "bad things" in life. He also compared it to an eight-year-old child with a background in physics.

Google's Statement Regarding Lemoine's Accusation

As per The Verge, Lemoine urged AI scientists to use LaMDA as a guiding tool for their future experiments. To support his claim about the chatbot, he posted some of their conversations on his Medium blog.

Lemoine accused Google of conducting an improper investigation into his claims. The company responded with the statement below.

"As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development."

Google said it extensively assessed Lemoine's claims and found them to lack evidence. The company also called it "regrettable" that the engineer chose to violate its policies on product information despite its lengthy "engagement" with him on the topic.

The firm assured that it will be more careful in developing language models in the future to avoid similar claims.

Read Also: Trevor Project Launches Riley, a Google-Partnered AI Tool Which Simulates Teen Undergoing Mental Health Struggles

This article is owned by Tech Times

Written by Joseph Henry

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.