Google developing an artificial intelligence kill switch

It may seem like something better suited to science fiction, but what would happen if artificial intelligence were taken to the point where a computer was capable of autonomous thought and independent action? A situation where a machine thinks for itself and decides the best outcomes for humanity?

Such a situation may be a good one: on one hand, a machine could make a dispassionate decision about the best allocation of resources; on the other hand, allocating resources requires a value judgement rooted in political philosophy. Suppose, for example, resources were allocated along communist lines (an equal share for all); along liberal economic lines (those who contribute the most receive the greatest amount); along utilitarian lines (resources directed to the majority); or on some other basis. Which is best? And what would happen if the machine’s self-developed reasoning was at odds with that of the general populace?

These are, at the moment, matters of speculative fiction (although they are important and trend high on Twitter). Moreover, it remains unclear what is meant by artificial intelligence in different contexts. According to one treatise, it is “a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at an arbitrary goal.”

Today we have a form of machine intelligence, with computers able to perform complex calculations in fractions of a second and to learn from executed routines. However, others would counter that the abilities and performance of such machines are based on human-written programs and that the machines can only run with human input. On this view, true artificial intelligence requires a computer to think independently, and it can only be measured through the Turing test.

Here, the British scientist Alan Turing, in a 1950 paper, proposed a test called “The Imitation Game” to settle the issue of machine intelligence. This is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human (in short, a human would be unable to tell an action made by a computer from that of another person).

Now, leaping forward to the future, suppose a free-thinking computer is built and operates autonomously. What is to prevent such a machine from making decisions that could harm people? Or from developing megalomaniacal ambitions?

In case this situation arises, researchers at Google’s artificial intelligence division, DeepMind, together with Oxford University, are developing a “kill switch” for artificial intelligence.

This is based on a joint paper by Laurent Orseau (Google) and Stuart Armstrong (Oxford University), in which these issues are explored. In the paper the authors warn that artificially intelligent systems are unlikely to “behave optimally all the time”. This means “now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions.”
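To make the idea concrete, here is a minimal, purely illustrative sketch of an agent loop with an operator-controlled interrupt. It is not the mechanism proposed in the Orseau and Armstrong paper; the `agent_policy`, the `operator_interrupt` flag, and the safe fallback action are all hypothetical stand-ins used only to show what “pressing the big red button” might look like in code.

```python
# Illustrative only: a toy agent loop with an operator "big red button".
# This is NOT the scheme from the Orseau/Armstrong paper; it simply shows
# the basic idea of overriding an agent's chosen action with a safe one.

import random

SAFE_ACTION = "halt"                      # hypothetical safe fallback action
ACTIONS = ["left", "right", "halt"]       # hypothetical action space


def agent_policy(observation):
    """Stand-in for a learned policy: here it just picks an action at random."""
    return random.choice(ACTIONS)


def run_episode(env_steps=10, operator_interrupt=lambda t: False):
    """Run the agent, letting a human operator override its actions at any step."""
    for t in range(env_steps):
        action = agent_policy(observation=t)
        if operator_interrupt(t):
            # The "big red button": discard the agent's choice and force a
            # safe action rather than let a harmful sequence continue.
            action = SAFE_ACTION
        print(f"step {t}: action={action}")


if __name__ == "__main__":
    # Example: the operator presses the button from step 5 onwards.
    run_episode(operator_interrupt=lambda t: t >= 5)
```

The harder question the paper addresses is not the override itself but ensuring that a learning agent neither learns to resist such interruptions nor has the behaviour it learns distorted by them.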

How this ‘big red button’ might work is the basis of the current research. Interviewed by BBC Science, Dr. Orseau said that he understood why some people were worried about the future of artificial intelligence: “it is sane to be concerned – but, currently, the state of our knowledge doesn’t require us to be worried.”

However, looking towards the near future, Laurent Orseau added: “it is important to start working on artificial intelligence safety before any problem arises. Artificial intelligence safety is about making sure learning algorithms work the way we want them to work.”

Here he also warned: “no system is ever going to be foolproof – it is a matter of making it as good as possible, and this is one of the first steps.”

The Orseau and Armstrong research paper is titled “Safely interruptible agents.” The paper is to be presented at the 32nd Conference on Uncertainty in Artificial Intelligence (to be held in New York from June 25, 2016).

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
