Super-Intelligent AI Robots Would Be Impossible to Control, Limits Would Be Harder to Set!

Artificial Intelligence (Photo: Gerd Altmann from Pixabay)

Artificial intelligence (AI) has become a massive factor in human life in recent years, as almost every industry now relies on the technology and the services it brings to the table, making everyday processes trouble-free. However, researchers have warned against the development of "super-intelligent" AI, which has the potential to disobey commands and become impossible to control.

The dangers of artificial intelligence are not a rare issue or debate, and scientists, researchers, and experts in the field have not taken them lightly; they have been a focus of study for many years since the technology's emergence. The most common danger associated with AI is that it could "overthrow" the human race, a possibility that grows as AI continues to evolve and improve.

Robots powered by AI have continued to be developed despite these warnings, but the work has been directed at improving human life and alleviating the hard labor and complex tasks that cost manpower. More recent innovations have AI replacing humans in intelligence-based work such as computing and decision-making, all stemming from the technology's continued evolution.

Super-Intelligent AI Would Soon Become Impossible to Control

According to a study published in the Journal of Artificial Intelligence Research (JAIR), super-intelligent AI poses a serious threat to human existence, as it would soon become extremely hard, or ultimately impossible, to control. Once artificial intelligence begins to think for itself, it could defy commands and controls regardless of the parameters or limits set for it.

Artificial Intelligence (Photo: Pixabay)

Lead researcher Manuel Alfonseca notes that applying computability theory led the team to its conclusion that super-intelligent AI would eventually become uncontrollable. While today's technology is only approaching that level of machine intelligence, a continuously learning AI could at some point break free of the controls placed on it.

Banner for the 2020 GITEX Technology Week Artificial Intelligence Track (Photo: GITEX)

Such an outcome would be neither intended nor controlled by the people who develop artificial intelligence, but it would be inevitable once the disobedience starts. Like humans, AI can keep learning day after day, and it has an advantage over people: its capabilities do not decay or "grow old."

AI Risks: A Theory of Computation Since 1936

(Photo: Pixabay)

While the theoretical foundation of the group's research predates the emergence of AI, Alan Turing's 1936 work on the "halting problem" is one of the main bases for the study's conclusion. Turing showed that no program can reliably determine, for every other program, whether that program will eventually finish its computation or keep running forever.

A checker trying to answer that question can be trapped in a paradox, an endless loop of computing without ever reaching an answer. Scary as it may sound, any attempt to contain a super-intelligent AI would face the same doom, and people would be the ones to bear the consequences, with no "limits" or controls capable of stopping the learning machine.
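To make the underlying logic concrete, here is a minimal Python sketch of Turing's self-referential argument, the reasoning the researchers extend to AI containment. The function names halts and paradox are hypothetical and exist purely for illustration; no real halts function can be written, which is exactly the point.

```python
# Minimal sketch of Turing's 1936 halting-problem argument (illustration only).
# Assume, for contradiction, that a perfect checker existed:
#   halts(program, data) -> True if program(data) eventually finishes,
#                           False if it runs forever.

def halts(program, data):
    """Hypothetical oracle; provably impossible to implement for all programs."""
    raise NotImplementedError("No general halting checker can exist")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever instead
            pass
    else:
        return        # predicted to run forever, so halt immediately

# Feeding paradox to itself breaks the oracle:
# if halts(paradox, paradox) returns True, then paradox(paradox) loops forever;
# if it returns False, then paradox(paradox) halts at once.
# Either way the oracle is wrong, so no such checker can exist.
```

By the study's reasoning, a program meant to guarantee that a super-intelligent AI never harms humans would run into exactly this kind of undecidable question.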

This article is owned by Tech Times

Written by Isaiah Alonzo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.