Campaigners have warned that weapons powered by artificial intelligence will soon reach city streets unless new global rules are implemented to govern the technology.
Calls to Regulate Lethal Autonomous Weapons
Many have expressed concern about the use of artificial intelligence in weapons programmed to kill targets with no human intervention.
An upcoming U.N. conference may ease those concerns: delegates will discuss banning weapons that target people without "meaningful human control."
However, activists are pessimistic about the outcome. Although some countries, such as New Zealand, have rejected the rise of 'slaughterbots' or 'killer robots,' several major military powers do not appear enthusiastic about calls to regulate lethal autonomous weapons.
Among the countries that have resisted efforts to regulate lethal autonomous weapons (LAWs) is the United States. Instead, the U.S. has suggested establishing a "non-binding code of conduct." While it may set out some principles, there would be no legal obligation to follow them.
A global treaty has also been rejected by China and Russia. The prospect of meaningful restrictions is dim, since a global treaty would require unanimous agreement at the U.N. meeting. Campaigners argue that this resistance to strong new restrictions will ultimately prove self-destructive.
Max Tegmark, an A.I. researcher at MIT and co-founder of the Future of Life Institute (FLI), aired his concerns to The Next Web, saying that militarily dominant nations will be the biggest losers because killer robots are cheap.
"They'll be small, cheap and light like smartphones, and incredibly versatile and powerful. It's clearly not in the national security interest of these countries to legalize super-powerful weapons of mass destruction," Tegmark remarked.
Moreover, FLI has depicted how these unregulated killing machines could shape the future: mass executions at polling stations, killer robots used in heists, and drones with facial recognition that hunt specific people.
The dystopian vision Tegmark describes may one day come true; LAWs may already have killed soldiers without a human operator's permission. In the future, Tegmark anticipates that most of these weapons will be used by and on civilians:
"If you can buy slaughterbots for the same price as an AK-47, that's much preferable for drug cartels, because you're not going to get caught anymore when you kill someone," Tegmark added.
It's unlikely an international ban will emerge this week, but Tegmark believes individual nations may gradually enact new rules. In time, he hopes LAWs will be so stigmatized that every military power is compelled to ban them. If they do not, he warns, anyone may be able to develop A.I. weapons of mass destruction.
What are Lethal Autonomous Weapons (LAWs)?
Lethal autonomous weapons systems, also described as "killer robots" or "slaughterbots," are weapons systems that use artificial intelligence (A.I.) to identify, select, and kill human targets without any human assistance or intervention.
As part of an attack, "slaughterbots" are pre-programmed with specific "target profiles." The weapon is then deployed into an environment where its artificial intelligence searches for those profiles using sensor data, such as facial recognition.
When the weapon encounters someone who matches the algorithm's target profile, it will fire and kill them.
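At its core, the "target profile" matching described above is a similarity comparison between incoming sensor data and a stored feature vector. The following is a minimal, purely illustrative sketch of that idea; all names, vectors, and the threshold are hypothetical, and real recognition systems are vastly more complex:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_profile(sensor_embedding, target_profile, threshold=0.9):
    # A "target profile" here is simply a stored feature vector;
    # the system flags a match when similarity exceeds the threshold.
    return cosine_similarity(sensor_embedding, target_profile) >= threshold

# Hypothetical 4-dimensional embeddings derived from sensor data.
profile = [0.1, 0.9, 0.3, 0.5]
observed_close = [0.12, 0.88, 0.31, 0.49]  # nearly identical to the profile
observed_far = [0.9, 0.1, 0.8, 0.2]        # clearly different

print(matches_profile(observed_close, profile))  # True
print(matches_profile(observed_far, profile))    # False
```

The threshold illustrates why campaigners worry about errors: any real-world recognition model has false positives, so a purely algorithmic match can never guarantee the right person is identified.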
This article is owned by Tech Times
Written by Thea Felicity