Elon Musk is not a man scared of technology. He has revolutionized electric vehicles and pushed the boundaries of commercial space exploration. However, there is one piece of technology that gives him pause: artificial intelligence.
Obviously, artificial intelligence is nowhere near HAL 9000 in 2001: A Space Odyssey. But it is slowly becoming part of our daily lives, from self-driving cars to smart homes to smartphones. Musk fears that AI is growing at an exponential rate and that mankind must start to think not only of the conveniences it offers, but also of its implications.
Musk was recently interviewed at the MIT AeroAstro Centennial Symposium. One of the questions he fielded was about artificial intelligence. His answer, which calls artificial intelligence our "biggest existential threat," serves as a warning.
"I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish," he said. "With artificial intelligence we're summoning the demon."
Musk pointed out that in stories with demons, the guys with the pentagrams and holy water think they can control it, but more often than not, they can't (apparently, he's never seen Supernatural).
The video of the symposium is below. Musk's remarks about artificial intelligence begin at 1 hour, 7 minutes and 20 seconds.
This isn't the first time Musk has warned us about the implications of artificial intelligence. In August, on Twitter, he said that AI was "potentially more dangerous than nukes."
It's surprising that someone pushing for advances in technology finds the idea of artificial intelligence so frightening. However, Musk is not alone in the scientific community in this sentiment. Even Stephen Hawking, arguably one of the greatest minds of our time, has spoken about his concerns with advancements in artificial intelligence.
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking writes. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."