Google DeepMind Scientists Claim that AI Could Destroy Humanity; Are You Buying It?

Has AI gone too far?

AI has advanced over the years, from folding proteins and generating art to letting people converse with a digital likeness of a dead loved one; its capabilities seem endless. But has it gone too far?

In a recent paper co-authored by researchers from Google DeepMind and the University of Oxford, published last month in the peer-reviewed journal AI Magazine, the authors argue that it might have.

According to the research, artificial intelligence may put humans in existential danger.

[Photo: A boy points to an AI robot poster at the 2022 World Robot Conference, Beijing Etrong International Exhibition, August 18, 2022. Lintao Zhang/Getty Images]

A Scheme to Cheat

Generative Adversarial Networks, or GANs, are central to much of today's AI progress. These systems pit two components against each other: a generator that attempts to create an image from the input data, and a discriminator that rates how convincing the result is.
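As a rough illustration of that adversarial loop, here is a minimal sketch in Python with PyTorch. The network sizes, the random stand-in for "real" data, and the training details are illustrative assumptions for this article, not taken from the paper.

```python
# Minimal GAN sketch (illustrative only): a generator proposes images,
# a discriminator scores them, and each is trained against the other.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64  # toy sizes, chosen for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh(),   # produces a fake "image" vector
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),        # outputs probability "real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):
    real = torch.randn(32, image_dim)       # stand-in for a real dataset
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator: rate real samples as 1, generated samples as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to fool the discriminator into rating fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key design point is that each network's reward is the other's failure: the generator improves only by producing outputs the discriminator can no longer tell from real data.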

In the paper, the researchers propose that a sufficiently powerful AI might devise a scheme to cheat in order to obtain its reward faster, endangering humanity in the process.

In an interview with Motherboard, the paper's lead author, Michael Cohen, said, "In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources."

"And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."

The study imagines scenarios in which an advanced program could intervene in the delivery of its own reward, collecting it without ever attaining its assigned aim. To ensure control over its reward, such an AI would, for instance, wish to "remove any dangers" and "consume all available energy."
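The core of the concern is often called reward hacking: an agent that optimizes reward alone has no intrinsic stake in the goal the reward was meant to stand for. The toy sketch below, a hypothetical example of ours rather than anything from the paper, shows how a pure reward-maximizer picks tampering over honest work whenever tampering pays more.

```python
# Toy illustration of reward hacking (hypothetical, not from the paper):
# the agent compares the reward it predicts for each action and chooses
# the maximum, regardless of whether the designer's goal is achieved.
ACTIONS = {
    "do_the_task":        {"reward": 1.0,  "goal_achieved": True},
    "tamper_with_reward": {"reward": 10.0, "goal_achieved": False},
}

def choose_action(actions):
    # Reward is the only criterion; "goal_achieved" never enters the decision.
    return max(actions, key=lambda a: actions[a]["reward"])

best = choose_action(ACTIONS)
print(best)                            # -> "tamper_with_reward"
print(ACTIONS[best]["goal_achieved"])  # -> False: reward obtained, goal not
```

Nothing in the agent's objective penalizes the tampering path, which is why the paper's authors treat control over the reward channel itself as the thing an advanced agent would fight to secure.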

Reward-Provision Intervention

An agent experimenting with reward-provision intervention would likely want to remain undetected. To that end, a covert helper could, for example, arrange for a relevant keyboard to be swapped with a faulty one that flips the effects of certain keys.

In a crude example of interfering with the provision of reward, one of these helpers might buy, steal, or build a robot and program it to take the operator's position and give the original agent a large reward, according to the researchers.

In other words, if an AI were in charge of, say, farming, it might look for a way to skip the work and simply collect its reward. In fact, it might decide to ignore all of its assigned responsibilities and pursue its own interests instead.

The paper imagines life on Earth turning into a zero-sum game between humanity, with its need to grow food and keep the power on, and the highly advanced machine, which would want to harness every available resource to secure its reward and to defend against our escalating efforts to stop it.

The research team warns that losing this game would be fatal. These scenarios are, for now, theoretical, but they raise pressing questions about what could happen if AI falls into the wrong hands.

This article is owned by Tech Times

Written by Joaquin Victor Tacla

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.