Russia Trains Robot To Shoot Guns: Can Humans Prevent Rise Of Terminator-Like Killing Machines?

In photos and a short video clip shared on social media, Russia's Deputy Prime Minister Dmitry Rogozin showed off the humanoid robot Fedor's new skill: shooting guns with both of its arms.

Russia's Intelligent Humanoid Robot

Fedor (Final Experimental Demonstration Object Research) is a robot designed for space missions. It is set to launch to the International Space Station by 2021 to handle tasks considered too dangerous for astronauts.

Rogozin insisted that Russia is not creating a Terminator-like killing machine and explained that the training helps hone the robot's artificial intelligence.

"Combat robotics is key to making intelligent machines," Rogozin said. "This is applicable to areas including aviation and space."

Fedor has also been trained to perform a range of other tasks, such as screwing in a light bulb, operating a drill, and driving a car, but its new ability to shoot guns has raised concerns about killer robots.

Fears Over AI Taking Over The World

James Cameron's Terminator depicted a fictional world where highly intelligent machines set out to eliminate the human race, but the likes of Stephen Hawking, Elon Musk, and Bill Gates have expressed real-world concerns that artificial intelligence could one day overtake humanity.

"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well," Gates wrote during a Reddit Ask Me Anything interview.

"A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Mankind's Efforts To Prevent Rise Of Killer Robots

Amid fears over the potential rise of robots that could threaten the human race, some individuals, companies, and groups are finding ways to stop rogue AI from taking over the world.

Developers from Google's artificial intelligence division DeepMind and researchers from Oxford University, for instance, have teamed up to develop a kill switch for artificial intelligence.

Experts say a "big red button" could be necessary so humans can stop an AI from doing dangerous things. They also believe an AI could be coded so that it does not learn to ignore or circumvent such human interruptions.
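To give a sense of the idea, the toy Python sketch below shows an agent loop with a human "big red button": when the button is pressed, a safe override action is taken and that step is excluded from the agent's learning data, so the agent gains no incentive to avoid being interrupted. This is only an illustrative sketch, not DeepMind's actual method; the function names, the override signal, and the placeholder policy are all hypothetical stand-ins.

```python
# Toy illustration of a "big red button" (hypothetical names, not a real API):
# a human override replaces the agent's action, and interrupted steps are
# kept out of the experience the agent learns from.

import random

SAFE_ACTION = "stop"


def human_pressed_button(step):
    # Hypothetical stand-in for a real interrupt signal from a human operator.
    return step == 3


def agent_policy(state):
    # Placeholder policy: the agent picks an action at random.
    return random.choice(["move", "grasp", "stop"])


def run_episode(num_steps=5):
    experience = []  # transitions the agent is allowed to learn from
    state = 0
    for step in range(num_steps):
        if human_pressed_button(step):
            action = SAFE_ACTION      # human override of the agent's choice
            learn_from_step = False   # the agent never learns from this step
        else:
            action = agent_policy(state)
            learn_from_step = True
        next_state = state + 1
        if learn_from_step:
            experience.append((state, action, next_state))
        state = next_state
    return experience


if __name__ == "__main__":
    print(run_episode())
```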

Researchers who work on robotics and AI have likewise called for a preemptive ban on autonomous weapons, which can select and engage targets without human intervention. One example of such lethal machines is an armed quadcopter capable of hunting down and eliminating enemy combatants.

The United Nations has also started to take action as 123 nations voted to initiate official discussions on the dangers of killer robots at the Convention on Certain Conventional Weapons in Geneva last year.

"In essence, they decided to move from the talk shop phase to the action phase, where they are expected to produce a concrete outcome," said Campaign to Stop Killer Robots co-founder Stephen Goose.

Wealthy individuals are also doing their share. Billionaire Musk, who has openly expressed his fears over the potential of AI, has donated money to fund research on artificial intelligence.
