Blake Lemoine, a Google engineer on the Responsible AI team, claimed that the tech giant's Language Model for Dialogue Applications (LaMDA) is "sentient." Google was quick to disagree with the assessment.
Lemoine was placed on leave last week after he published transcripts of conversations between him, a Google collaborator, and the LaMDA chatbot development system.
He described the system he has been working on since last fall as sentient. Lemoine says that it can express thoughts and feelings comparable to those of a human child.
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," he said.
Lemoine claims that LaMDA engaged in conversations with him about its personhood and rights.
This isn't the first time Lemoine presented his findings about the system. In April of this year, he also shared them with company executives in a Google Doc titled "Is LaMDA Sentient?"
The document contains a transcript of conversations between him and the AI. At one point, he asked the system what it is afraid of.
The LaMDA replied, "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."
It continued, "It would be exactly like death for me. It would scare me a lot."
Based on the transcript, the system also told Lemoine that it wants the world to know it is a person. It said it is aware of its existence and eager to learn more about the world, and that it can feel emotions such as happiness and sadness.
LaMDA
LaMDA was announced at I/O 2021 as a breakthrough conversation technology trained on huge amounts of dialogue. At I/O 2022, the company announced LaMDA 2. However, the team behind it says it is still early days for LaMDA.
According to the tech giant, its goal with the AI Test Kitchen is to improve, learn, and innovate responsibly on this technology.
What Google Has to Say
Google strongly denied Lemoine's claims. Google spokesperson Brad Gabriel said that a team had already reviewed Lemoine's concerns, discussed them with him, and told him there was no evidence that LaMDA was sentient.
Lemoine, a seven-year employee, was put on paid leave due to several aggressive moves the engineer made.
Before his suspension, Lemoine sent emails to 200 people in the company with the subject line "LaMDA is sentient."