Artificial intelligence has woven itself into modern life over the past few decades. People now rely on AI for everyday tasks, and that growing dependence is what concerns many observers.
One of the most debated questions about machine-learning systems is how they could affect humanity in the long run. Now, researchers have offered an explanation of whether a super-intelligent AI could ever be controlled.
Here's why it would be difficult to contain, according to the latest research.
High-Level Computer AI is Hard to Control
According to a report by Science Alert on Friday, Nov. 5, a team of researchers examined just how far that power could extend. Artificial intelligence is built by people, of course, but that's just the tip of the iceberg.
The authors of the study argue that humans can no longer impose limits once a computer system operates beyond the understanding of its programmers. Controlling such a system, they add, is a "fundamentally different" problem from those usually studied under "robot ethics."
If that point is reached, a superintelligence would outperform humans and become effectively uncontrollable, the researchers say, because its behavior would be incomprehensible to us.
The argument reaches back to 1936, when Alan Turing proved that no general algorithm can determine, for every possible program, whether that program will eventually halt. Building on this, the team notes that it is "logically impossible" to devise a method that could check every written program for harmful behavior.
In short, a super-intelligent AI could hold every possible computer program in its memory at once, putting any such check out of reach.
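The reasoning mirrors the standard proof that the halting problem is undecidable. The Python sketch below is purely illustrative and not drawn from the paper itself; would_halt and paradox are hypothetical names used to show why a perfect "predict and halt" checker leads to a contradiction.

```python
# Minimal sketch of the halting-problem contradiction the containment
# argument builds on. `would_halt` is a hypothetical oracle, not a real
# function: the point is that no such function can exist.

def would_halt(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts."""
    raise NotImplementedError("Turing (1936): no general algorithm can decide this.")

def paradox(program):
    """Adversary that does the opposite of whatever the oracle predicts."""
    if would_halt(program, program):
        while True:        # oracle says "halts" -> loop forever instead
            pass
    return "halted"        # oracle says "loops forever" -> halt immediately

# Feeding the adversary to itself breaks the oracle either way:
# if would_halt(paradox, paradox) is True, paradox(paradox) never halts;
# if it is False, paradox(paradox) halts. So would_halt cannot exist.
# A containment algorithm meant to halt an AI the moment it would cause
# harm runs into the same wall: deciding that property for every possible
# program is uncomputable.
```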
What the Experts Said About Super-Intelligent AI
According to Iyad Rahwan, a computer scientist at the Max Planck Institute for Human Development, this in effect renders any containment algorithm unusable, since we could never be sure whether it was still analyzing the threat or had simply stopped.
The researchers also weighed alternatives to teaching the AI ethics, such as limiting its capabilities by cutting it off from parts of the internet or from certain networks. But doing so would weaken the system, and the counter-argument remains the same: why build a super-intelligent AI at all if we are not going to use it to solve problems beyond human reach?
For Manuel Cebrian, another computer scientist on the team, a super-intelligent machine that controls the world sounds like science fiction. Yet some machines already perform important tasks independently, without their programmers fully understanding how they learned to do so.
"The question, therefore, arises whether this could at some point become uncontrollable and dangerous for humanity," Cebrian said.
To view the study entitled "Superintelligence Cannot be Contained: Lessons from Computability Theory," visit JAIR.
Is an Orwellian Future Possible with AI?
Back in June, Tech Times reported that Microsoft President Brad Smith spoke about the risks AI poses to the public. According to him, artificial intelligence could become dangerous in the long run.
He said that while AI is regarded as a major advance, machine-learning systems could come to outdo humans at many tasks, and he warned that a future resembling George Orwell's "1984" could indeed come to pass.
In the novel, a government uses pervasive surveillance to control the public. Smith said such a scenario could take place by 2024 if people do not pay attention to how AI-powered surveillance is used.
Related Article: Google CEO Sundar Pichai Cautions the Dangers of Open Web; AI, Quantum Computing to be Highlight for the Next Few Years
This article is owned by Tech Times
Written by Joseph Henry