
How A.I. could increase the risk of nuclear war by 2040

By Karen Graham     Apr 25, 2018 in Technology
Artificial intelligence (AI) could potentially result in a nuclear war by 2040, according to a research paper by the RAND Corporation, a U.S. think tank.
For decades, an uneasy peace has been maintained around the globe based on the notion that any nuclear attack could trigger mutually assured destruction.
However, the potential for artificial intelligence (AI) and machine-learning to decide military actions means that any assurances of global stability could easily break down, Rand researchers warn.
According to Science Daily, the paper points out that while it is unlikely that AI-controlled doomsday machines will ever be developed, there are hazards in using AI for nuclear security because of its potential to "encourage humans to take potentially apocalyptic risks."
"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Andrew Lohn, co-author of the paper and associate engineer at RAND.
Arnold Schwarzenegger's "Terminator" movies popularised the idea that AI and killer robots could lead to the end of humans
Robert Mora, Getty/AFP/File
It wouldn't take much to start a nuclear war
In a brief overview of the paper, RAND cites an event that occurred on September 26, 1983. Lt. Col. Stanislav Petrov was sitting in the commander's chair in a secret bunker outside Moscow, monitoring a bank of computers and watching for any missile launch from the United States.
Suddenly, an alarm went off, shattering the quiet of the room. A single word flashed on the screen in front of him - LAUNCH.
Petrov would say later that his chair felt like a frying pan. He knew the computer system had glitches. The Soviets, worried that they were falling behind in the arms race with the United States, had rushed the computer into service only months earlier. Its screen now read “high reliability,” but Petrov's gut said otherwise.
In a simulation of a Peacekeeper missile launch, two launch keys, similar to the one pictured here, are inserted into the Launch Control Panel by the Crew Commander and Deputy Crew Commander and must be turned simultaneously to activate the launch. DoD photo by Staff Sgt. Scott Wagers.
Defense Visual Information Home Page (USDOD)
But the computer program continued, stating that missiles had been launched from the U.S. and that five were headed toward the USSR. Yet, all the while, ground technicians were telling Petrov they could not find the missiles on their radar screens or telescopes.
To make a long and terrifying story short, nuclear war was averted thanks to Petrov's cool head and nerves of steel. Petrov stood at the precipice of nuclear war, but he did not send the launch order up the chain of command. It is fortunate he didn't: the computer had misread sunlight glinting off the tops of clouds.
"The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history," said Edward Geist, co-author of the paper and associate policy researcher at RAND.
"Much of the early development of AI was done in support of military efforts or with military objectives in mind." And that is one reason why many business leaders and other experts are warning against the use of AI in a military setting.
The "Campaign to Stop Killer Robots" was launched in London in 2013
The "Campaign to Stop Killer Robots" was launched in London in 2013
Carl Court, AFP
More reliance on artificial intelligence is inevitable
Many researchers, in defense of AI, say that with future improvements in the technology, it will be possible to develop systems that are less error-prone than humans, making AI systems stabilizing in the long run. But what about the time between now and when AI reaches maturity?
"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Lohn. "There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion.
"Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."
RAND researchers based their perspective on information collected during a series of workshops with experts in nuclear issues, government, AI research, AI policy and national security. The paper, "Will Artificial Intelligence Increase the Risk of Nuclear War?", is available at https://www.rand.org/pubs/perspectives/PE296.html.