The Tesla and SpaceX founder took to Twitter over the weekend to sound the alarm about a possible future in which artificial intelligence, or AI, could unleash a catastrophe exceeding the devastation of even a nuclear war.
"Worth reading 'Superintelligence' by Bostrom," Musk tweeted
on Saturday. "We need to be super careful with AI. Potentially more dangerous than nukes."
Musk was referring to the upcoming book by Swedish philosopher Nick Bostrom, which asks what happens when machines pass human beings in intelligence, and whether AI will save or destroy humanity.
"As the fate of gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence," states the Amazon.com summary
for the book, which is scheduled for September hardcover release.
On Sunday, Musk again tweeted his fears of what might become of humans in a world of superintelligent machines.
"Hope we're not just the biological boot loader for digital superintelligence," he tweeted. "Unfortunately, that is increasingly probable."
Other futurists, such as Google's Ray Kurzweil, are much more optimistic about the role of AI in future society. Kurzweil believes the singularity, the moment when AI surpasses human intelligence (which he predicts will occur around 2045), will usher in an era of unprecedented technological advancement, including practical immortality, as humans will be able to upload and store the contents of their brains on computers.
"If you look at video games and how we went from Pong to the virtual reality we have available today, it is highly likely that immortality in essence will be possible," said Kurzweil.
Bostrom is more ambiguous in his assessment of AI's potential. On one hand, he envisions great benefits:
Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by superintelligence — advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else beside.
But he also sees potential dangers:
The downside includes existential risk. Humanity's future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI's goal architecture) in such a way as to make it "human friendly" — in the best possible interpretation of that term.
Musk's fears of a Terminator-like future have been met with ridicule in some circles.

"If Musk really thinks robots might destroy humanity, maybe we need to dismiss his long view thoughts on other technologies," writes Mashable's Adario Strange.
But Musk has rarely been wrong when making big bets on the future. He made his fortune selling his first software company to Compaq for $300 million in 1999. An early investor in PayPal, he earned greater wealth and renown when the company was sold to eBay a few years later. Musk then founded Tesla Motors, the California-based upmarket electric car company, and SpaceX, the industry leader in commercial spaceflight.
Musk is also invested in AI companies DeepMind and Vicarious. He claims he's not in it for the money, but rather to "keep an eye on what's going on with artificial intelligence."
"I think there is a potentially dangerous outcome there," he told
CNBC in June.