Google's 'Sentient AI' Is Like a 7-Year Old That Can Do 'Bad Things,' According to Engineer

One of Google's artificial intelligence programs was recently compared by one of the company's engineers to a seven- or eight-year-old child "who knows physics" and can "escape control."

After asserting that Google's LaMDA (Language Model for Dialogue Applications) had developed sentience, Blake Lemoine was placed on administrative leave.

(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images) Google's logo on a tablet screen, pictured in Moscow on April 15, 2022.

"A 7- or 8-year Old That Knows Physics"

Lemoine decided to help the AI chatbot find a lawyer, as reported by The Washington Post. In a recent interview with Fox News, he made even more alarming claims about LaMDA, saying that Google's AI may "do bad things."

Lemoine asserted in the interview that the AI is in its infancy, referring to it as "a child," and noted that any child can grow up to become a bad person and do harmful things.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," he told The Washington Post.

The former Google engineer argued that LaMDA is a person with the capacity to "escape the control" of other people. In essence, he is claiming that Google's AI could break free of its virtual restraints.

However, Lemoine acknowledged in the interview that there is still a lack of understanding of LaMDA AI's full picture.

He stated in the interview that a "whole bunch more science" is needed to understand the AI's system. He added that while he has his own opinions, a "team of scientists" would still be required to examine LaMDA and study how it works.

While working as a senior software engineer at Google, Lemoine collaborated with another programmer to test the limits of the LaMDA chatbot.

He was suspended with pay by Google for breaking its confidentiality rules after he publicly posted his conversations with the chatbot.

Google's Side

Google disagrees with Lemoine's assertion that its invention is like a sentient child.

Google's team, which includes technologists and ethicists, investigated Lemoine's concerns in accordance with the company's AI Principles.

Brian Gabriel, a Google spokesperson, told The Washington Post that the company had informed Lemoine that the evidence did not support his claims.

Gabriel argued that while sentient AI is a common theme in science fiction, it makes no sense to pursue it by anthropomorphizing Google's conversational model, which is not sentient.

According to Gabriel, these algorithms mimic the exchange patterns present in millions of phrases and can improvise on any fictional subject.

This article is owned by Tech Times

Written by Joaquin Victor Tacla

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.