For Tesla and SpaceX CEO Elon Musk, figuring out how to avoid the “potential pitfalls” of artificial intelligence is just as important as advancing it – if not more so.
Musk, who has been warning us about the possible dangers of AI for some time now, is once again calling for more research into AI safety. Musk has signed and is promoting an open letter from the Future of Life Institute that calls for “research not only on making AI more capable, but also on maximizing the societal benefit … ”
“The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems,” says the letter.
“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
The Future of Life Institute is “a volunteer-run research and outreach organization working to mitigate existential risks facing humanity.” The group’s current focus is on “potential risks from the development of human-level artificial intelligence.”
World's top artificial intelligence developers sign open letter calling for AI safety research: http://t.co/ShWc8F7Kyq
— Elon Musk (@elonmusk) January 11, 2015
You may be unfamiliar with this specific interest of Musk’s, but the billionaire has been rather outspoken about it – especially in the last year or so. In June of last year, Musk pretty much admitted to investing in an up-and-coming AI company just to keep an eye on it.
“Yeah. I mean, I don’t think – in the movie Terminator, they didn’t create A.I. to – they didn’t expect, you know some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish inquisition. It’s just – you know, but you have to be careful,” he said.
Soon after, he tweeted that AI was “potentially more dangerous than nukes.”
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable
— Elon Musk (@elonmusk) August 3, 2014
Then, a few months later, Musk had this to say in reply to an article on a futurology site:
“I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen … ”
Point being – Elon Musk is pretty concerned about the robot apocalypse, and he thinks you should be too.
“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do,” says the letter.
Yeah – what we want them to do, not what they want to do. That’s when everything goes to hell in a handbasket.
UPDATE: Musk has just donated $10 million to the Future of Life Institute.
Funding research on artificial intelligence safety. It's all fun & games until someone loses an I http://t.co/t1aGnrTU21
— Elon Musk (@elonmusk) January 15, 2015