In a Q&A session on Reddit, Bill Gates expressed his worry that artificial intelligence may grow too powerful to control. The debate has been growing in scope and difficulty as some of the world's heavyweights weigh in on the issues.
Not everybody agrees. Eric Horvitz, Microsoft's research chief, who recently received the AAAI Feigenbaum Prize for "outstanding advances" in AI research, is one of the dissenters. The argument is gathering mass.
An open letter created by the Future of Life Institute, whose signatories include Hawking, Musk and a very large number of highly respected researchers, raises a broad set of possible risks and the problem of managing artificial intelligence development.
An associated document (PDF, linked from the Future of Life Institute site) outlining the issues related to artificial intelligence development includes a very broad range of practical solutions and priorities for managing future development. The overall thrust of the document is to define ways of maximising the social benefits of artificial intelligence.
Try this link if you can’t find it: futureoflife.org/static/data/documents/research_priorities.pdf
The broad categories of discussion of priorities include:
• Economic impact
• Market disruptions
• Policy for managing adverse effects
• Economic measures
The extract from the text concerning policy for managing adverse effects is indicative of the very broad range of issues:
What are the pros and cons of interventions such as educational reform, apprenticeship programs, labor-demanding infrastructure projects, and changes to minimum wage law, tax structure, and the social safety net?
The economic measures section includes a very appropriate caveat regarding the accuracy and reach of standard economic measures in relation to artificial intelligence. Conventional economic measures simply do not have parameters for this type of data, even at the level of quantification.
The history of the arguments regarding artificial intelligence
This argument is a new twist on a long-standing one. Isaac Asimov introduced the Three Laws of Robotics back in 1942, in his short story “Runaround”. Asimov’s laws of robotics are:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you’re a code writer, you will see a few issues with trying to put these laws into practice. The Laws of Robotics are essentially principles, and turning principles into working code is a very different matter. Anyone writing real code to implement them is likely to find it a strange, complex experience indeed.
It’s one thing to have a piece of code that tells a machine not to run over your baby. The trouble is that you have to explain to the machine what “run over” and “baby” mean, and how to recognise those situations when they actually arise.
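To make the point concrete, here is a deliberately naive Python sketch of the First Law. Every name in it (detect_humans, predicts_harm, first_law_permits) is hypothetical, invented purely for illustration; notice that the “law” itself is a one-liner, and all the real difficulty hides in the stubbed-out perception functions:

```python
# A naive, hypothetical sketch of Asimov's First Law as code.
# The hard part isn't the rule -- it's the perception predicates,
# stubbed out here because nobody knows how to write them reliably.

def detect_humans(sensor_frame):
    """Stub: real perception would have to recognise a 'baby'
    in arbitrary lighting, poses and occlusion."""
    return []  # placeholder: no humans detected

def predicts_harm(action, human):
    """Stub: requires a physical model of 'run over',
    'knock down', 'through inaction', and so on."""
    return False  # placeholder

def first_law_permits(action, sensor_frame):
    # The 'law' is one line; the meaning is buried in the stubs.
    return not any(predicts_harm(action, h)
                   for h in detect_humans(sensor_frame))

print(first_law_permits("move_forward", sensor_frame=None))
# prints True -- but only because the stubs see nothing
```

The rule passes trivially here, which is exactly the problem: a principle that depends on unsolved recognition tasks tells the machine nothing.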
Another development which overlaps with Asimov’s Laws of Robotics is the learning machine. The new cognitive computing environment allows computers to essentially reprogram themselves. This is a huge benefit to code writers, helping researchers solve problems and see solutions. It’s also a major help in spotting the mistakes computers are likely to make while reprogramming themselves.
Learning machines are, however, inevitable. New situations require a knowledge base, and it’s reasonable to assume that artificial intelligence, like human intelligence, will need to develop and retain an understanding of those situations. To that extent, at least, artificial intelligence needs to be independent.
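As a toy illustration of what “reprogramming itself” means in practice, here is a minimal perceptron in Python that changes its own behaviour by adjusting weights from examples. The task, data and learning rate are invented for the example; no human edits the code, yet the machine’s behaviour changes:

```python
# A minimal sketch of a machine 'reprogramming itself':
# a perceptron adjusting its own weights from examples.

def train_perceptron(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = label - pred
            # Behaviour changes because the parameters change,
            # not because anyone rewrote the program.
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr*err
    return w, b

# Toy task: output 1 only when both inputs are 1 (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print(all((1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) == y
          for x, y in data))  # prints True
```

The same code, fed different examples, would settle on different behaviour, which is the whole point, and the whole worry.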
Not wishing to nitpick the meaning of the word “beneficial”, but perceptions of intelligent machines as beneficial or otherwise are likely to be extremely variable. What one person sees as good, another may well see as bad.
The risks of invention
Historically, just about every human invention from the wheel and fire onwards has had a downside. Human mismanagement is usually the cause of fundamental problems with new inventions. The present human management environment, for example, is responsible for this wonderful global society we now have.
Artificial intelligence is likely to be the most far-reaching of all human inventions. From the now rather dumbed-down Internet of Things to Roombas and actual robots able to hold conversations, artificial intelligence will develop into real intelligence.
For the sake of simplicity, let’s define real intelligence as the ability to acquire and use information, and to act autonomously. Now consider the simple act of putting a cup of coffee on your desk. Humans know to put the cup where they can reach it, know not to knock it over, and have a basic understanding of gravity and its likely effects.
This is one of the simplest acts of human intelligence. Artificial intelligence needs to fully comprehend not only objects and tasks, but their ramifications. Human understanding of ramifications tends to be local/subjective. Immediate ramifications are obvious. Second stage or third stage ramifications are less obvious, but part of the natural thinking process.
To manage these very difficult intellectual operations, artificial intelligence will need to be far more intelligent than it is now. A chess computer can search many moves ahead, but only in an extremely limited environment of variables. True artificial intelligence would have to perform on a similar basis in the real world.
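The contrast is easy to demonstrate. A game state can be captured in a handful of variables, which is what makes exhaustive lookahead tractable; here is a minimal sketch using the single-pile game of Nim (take 1–3 objects per turn, the player who takes the last one wins), chosen as a stand-in for chess because its whole “world” fits in one integer:

```python
# Exhaustive game-tree search over a tiny, fully specified state:
# Nim with one pile, take 1-3 per turn, taking the last object wins.
# The entire 'world' is a single integer -- which is why complete
# lookahead works here and not in an open environment.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(pile):
    """True if the player to move can force a win."""
    if pile == 0:
        return False  # previous player took the last object and won
    # A position is winning if any legal move leaves the
    # opponent in a losing position.
    return any(not can_win(pile - take)
               for take in (1, 2, 3) if take <= pile)

print([n for n in range(1, 13) if not can_win(n)])
# prints the losing piles: [4, 8, 12]
```

The search visits every reachable state in milliseconds. No comparable enumeration exists for “put the coffee cup somewhere sensible”, because nobody can even write down the state space.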
Can humanity manage artificial intelligence? If so, how?
The question is: is this really manageable? Realistically, can artificial intelligence at this level be “managed” by principles? The risks extend far beyond basic situations. Could an artificially intelligent being decide to work around a few principles, as humans do?
This isn’t just a problem. It’s a monster of a problem for humanity, and it’s incoming. It will grow exponentially, and human thinking will have to stay at least several steps ahead of the potential problems.
Frankly, I don’t see the present social management environment as able to deliver the kind of thinking or the kind of honest intellects required to deal with issues like this. A society which can’t even recognise poverty, housing, education, or health as serious issues is somehow going to deal with a completely new experience?
I don’t wish to denigrate or in any way detract from the basic thrust of the issues raised regarding artificial intelligence. I do, however, want to point out that a largely ineffectual, irresponsible, superficial and expediency-based issue management environment is probably the worst possible place from which to try to manage artificial intelligence.
I hope it works out better than it looks, but at the moment, it’s not looking good.