Google Assistant Learned How To Fire A Gun: Should You Be Scared?

An artist taught Google Assistant how to fire a gun, creating an art piece that may further heighten concerns about whether artificial intelligence is dangerous.

While it is still very far from the human-killing cyborgs of science fiction movies, Google Assistant firing a gun on voice command may already be a frightening thought for opponents of artificial intelligence.

Google Assistant Fires Gun On Voice Command

Alexander Reben, in a video that he uploaded to YouTube, showed his latest work. "OK Google, activate gun," Reben said, prompting a response of "Sure, turning on the gun" from Google Assistant. Following the voice command, the gun fired at an apple placed in front of it.

The contraption uses a Google Home smart speaker, a pellet gun, a TP-Link smart outlet, and a solenoid attached to the gun's trigger. It may be assumed that Reben programmed Google Assistant to activate the smart outlet upon a specific voice command, which then sends electricity to the solenoid that pulls the trigger.
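Reben has not published the code behind the piece, but the wiring he describes maps onto ordinary smart-plug automation. As a rough illustration only, the sketch below pulses a TP-Link smart outlet from Python using the third-party python-kasa library; the plug address, timing, and function names are hypothetical assumptions, not details from Reben's setup.

    # Illustrative sketch only: pulsing a TP-Link smart outlet from code.
    # Assumes the third-party python-kasa library (pip install python-kasa)
    # and a hypothetical plug address on the local network.
    import asyncio
    from kasa import SmartPlug

    PLUG_ADDRESS = "192.168.1.50"  # hypothetical local IP of the smart outlet

    async def pulse_outlet(seconds: float = 0.5) -> None:
        """Energize the outlet briefly (e.g. to drive a solenoid), then cut power."""
        plug = SmartPlug(PLUG_ADDRESS)
        await plug.update()      # fetch the device's current state
        await plug.turn_on()     # outlet supplies power to whatever is attached
        await asyncio.sleep(seconds)
        await plug.turn_off()    # cut power again

    if __name__ == "__main__":
        asyncio.run(pulse_outlet())

In Reben's piece, the equivalent switching step appears to be handled by a Google Assistant routine rather than custom code, with the smart outlet simply energizing the solenoid that pulls the trigger.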

According to Reben, while he used the Google Assistant-powered Google Home for the piece, it could have been any other digital assistant and smart speaker, such as Amazon's Alexa on the Echo or Apple's Siri on the HomePod.

Is Artificial Intelligence Good Or Bad?

Artificial intelligence has infiltrated nearly every part of our lives. AI powers digital assistants, smart home devices, vehicle functions, and many other aspects of our daily routine.

While artificial intelligence has certainly proven that it can offer humans real conveniences, many questions remain about the dangers posed by an ever-learning AI.

Two of the most outspoken critics of artificial intelligence have been Tesla and SpaceX CEO Elon Musk and the late Stephen Hawking. Musk previously urged the government to regulate AI development before it is too late, and later said that he thinks the technology could spark World War III. Hawking, meanwhile, warned that artificial intelligence may lead to a robot apocalypse and could eventually replace humans.

The art piece by Reben raises many questions about artificial intelligence, especially about the ethics surrounding it. For example, if the gun that Google Assistant fired had killed a person, who would be at fault? Google Home and Google Assistant, which simply carried out their programmed functions, or the person who issued the voice command?

In addition, Reben's piece provokes images of future AI-powered security systems that may carry their own weapons. Can humans really trust artificial intelligence to do its job and not commit any fatal errors?
