
Artificial intelligence seen with negativity above all

By Bill K. Anderson     Mar 22, 2015 in Business
Artificial intelligence is being looked at with the same worried eyes once cast on nuclear power. What will it take to overcome the public's distrust?
Stuart Russell, holder of the 2012 Blaise Pascal Chair, sees artificial intelligence as having the same potential for destruction as nuclear power, a technology at whose mere mention, decades later, many populations still squirm. Like the nuclear physicists and geneticists before them, researchers in artificial intelligence must prepare for the possibilities their work may open up and ensure that the results benefit the human species. Russell covered many of these ideas in an explanation given by phone.
The Future of Life Institute's initiative has met with an unexpected response: it now counts more than 5,000 signatories, including most of the leaders in the field. It is a sign that a culture of accountability is developing in the artificial intelligence community.
The debate of the moment in Silicon Valley is whether we should be afraid of artificial intelligence. Every big-name technology company is lining up to have its say in debates and conferences on the topic.
British physicist Stephen Hawking was the first to say, last year, that "the development of a complete artificial intelligence could put an end to mankind." Elon Musk, founder of Tesla and SpaceX, has deemed it "potentially more dangerous than atomic bombs." At the end of January, Bill Gates also said there were grounds to be worried.
On March 4, Peter Thiel, founder of PayPal, gave his opinion in turn. He does not think that robots will really destroy humanity: "Their fears are a bit exaggerated," he said. Sam Altman, president of the incubator Y Combinator, by contrast believes superhuman machines are coming and advocates state regulation. For him, governments must start thinking about how to slow down the bad actors and support the good ones. "What happens with the first of these AIs to be developed is very important," he estimated. Recently, Baidu, the "Chinese Google," requested the assistance of the Chinese military to develop its artificial intelligence capabilities, information that has not gone unnoticed in Silicon Valley.
The latest intervention came from Eric Schmidt, chairman of Google. During a discussion at the SXSW festival in Austin on Monday, March 16, the head of the search giant sought to be reassuring. For him, artificial intelligence will become one of the greatest forces for good in the history of mankind, simply because it makes people smarter.
Eric Schmidt gave the examples of voice recognition and machine translation. He believes that using machines to process huge volumes of data will solve a great many problems. "I do not see a field of research, be it English, science, or business, that cannot become much more efficient, much more powerful, or more intelligent."
What these players have in common is that none of them is an expert in artificial intelligence. Nor are they the instigators of the manifesto published on January 12 by the Future of Life Institute, an association founded by Estonian IT entrepreneur Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.
With the help of Elon Musk (who put $10 million on the table to fund the program), the Institute organized a seminar in Puerto Rico in early January, out of which came the open letter "Research Priorities for Robust and Beneficial Artificial Intelligence."
The text has not had the same media coverage as certain celebrity soundbites, yet it is far less common for the greatest specialists in a sector, academics and engineers alike, to warn their contemporaries about its destructive potential, a potential "comparable to nuclear power," according to Berkeley professor Stuart Russell, director of the Center for Intelligent Systems and co-author of the authoritative textbook in the field, Artificial Intelligence: A Modern Approach.
Interviewed a few weeks ago, Max Tegmark explained the dynamic: until now, he notes, scientists were the ones working to develop these machines, but given the huge investments and the competition among the technology giants, almost every one of them now has a laboratory focused on artificial intelligence, not to mention the battalions of engineers on the Pacific Rim, whose numbers are only beginning to reflect what is needed. "This is a race between the growing potential of artificial intelligence and the wisdom to manage it," says the physicist. "All investments are devoted to trying to increase the capacity of machines, and virtually nothing is invested on the side of wisdom."
The researchers themselves are surprised by how quickly machine capabilities are advancing. "There are many areas in which it was assumed we would not succeed in our lifetime. Now people are saying: careful, perhaps we will succeed," summarizes Max Tegmark.
The January 12 text strives for balance. It notes the progress made thanks to machines and holds that the "eradication of disease and poverty is not inconceivable." But it considers it just as important to "avoid the potential pitfalls" of increasingly capable machines.
The text identifies areas in which researchers should take special care in their work to ensure that machines do exactly what they are intended to do: improve the lot of humanity, not alienate it.
In the short term, scientists are concerned about the economic impact of the coming automation of most tasks performed by humans. They think that sociologists and politicians must begin addressing now the question of how wealth will be shared in societies without work.
The signatories are also concerned about the impact of artificial intelligence on weapons. Should so-called autonomous weapons be permitted, and can they be made to comply with international humanitarian law? If so, how can they be kept from triggering accidental wars?
The text raises many questions about the safety guarantees owed to passengers of autonomous cars, which could cut annual fatalities from accidents roughly in half, a halving that implies some 40,000 road deaths a year today. Will the automotive industry be blamed for the 20,000 deaths that remain, or praised for the 20,000 it prevented? Another question concerns the ethical decision-making of the machines: how should an autonomous vehicle arbitrate when it must choose between striking a human and damaging itself, with serious harm on either side? And how will lawyers and policymakers address such topics, and will there be a national debate leading to criteria approved by the population?
In the long term, the open letter addresses the issue of the "intelligence explosion" (another name for the moment when machines surpass humans in cognitive tasks), an emotionally charged subject for Silicon Valley and science-fiction fans alike.
The researchers do not know whether, or when, the phenomenon will occur, but they believe we must prepare now to ensure that its impact is "beneficial" to humanity. "We don't need evil robots like in the movies," says Stuart Russell. "A mismatch between the job we assign to the machine and the one we actually want it to fulfill would be enough."