
Op-Ed: Many are pessimistic about the consequences of AI

By Ken Hanly     Nov 26, 2017 in Technology
Carlos Moedas, EU Research Commissioner, claims that those who say the development of AI and thinking machines poses a threat to our existence risk confusing science with science fiction.
Negative views of AI development predominate
In a speech to the European Parliament's Science and Technology Options Assessment Group, Moedas said: “If you do any research on artificial intelligence these days, the results are astonishingly pessimistic. Nine articles out of ten on AI are negative. Not just negative. Alarmist and panicked, sometimes even hysterical. For me, a techno-optimist, it's shocking. And very disappointing.”
Commissioner Moedas is betting that AI research will be a positive force, even though he admits that public fear of the technology runs deep. The public, he says, fears what is the most exciting new technology of our generation, and to deny the amazing benefits it can bring is not the answer, he claims.
Possible negative effects of AI development
Warnings about AI development have come from notables such as Bill Gates, Stephen Hawking, and Elon Musk, who argue that it could evolve to a point where it is beyond human control. Many critics also worry that robots with AI will take over jobs, creating unemployment.
Elon Musk's worries
Musk, who has himself been a prime contributor to technology in both electric vehicles and space rockets, worries that competition for AI technology could lead to war as governments vie for superiority in AI-based weaponry. Yet countries have surely always competed for technology, especially technology that can prevail in warfare, and this would be so whether or not AI developed. The development of AI, however, will make this competition more dangerous.
Technology in the form of chemical weapons and nuclear bombs already shows that it is imperative we do everything we can to ensure that new AI technology is controlled. Musk's warnings are very much based on reality.
Will AI technology result in lost jobs?
Another of Musk's worries was the loss of jobs.
Economists Daron Acemoglu and Pascual Restrepo of the National Bureau of Economic Research looked at the historical effects of robots on employment in the US between 1990 and 2007, controlling for the influence of other factors.
Their study showed that each new robot led to the loss of between 3 and 5.6 jobs in the local area. For each new robot added per 1,000 workers, wages would also decline by between 0.25 and 0.5 percent.
The two researchers write: “Predictably, the major categories experiencing substantial declines are routine manual occupations, blue-collar workers, operators and assembly workers, and machinists and transport workers.”
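These headline figures can be turned into a rough back-of-the-envelope estimate. The sketch below is illustrative only: the workforce size and robot count are hypothetical, and the only values taken from the study are the two ranges (3 to 5.6 jobs lost per robot, and a 0.25 to 0.5 percent wage decline per robot per 1,000 workers).

```python
# Back-of-the-envelope estimate using the Acemoglu-Restrepo ranges.
# The local-area figures passed in below are hypothetical examples.

def employment_effect(robots, workers):
    """Return (jobs lost low, jobs lost high, wage decline % low, wage decline % high)."""
    jobs_lost_low = robots * 3.0           # lower bound: 3 jobs per robot
    jobs_lost_high = robots * 5.6          # upper bound: 5.6 jobs per robot
    robots_per_1000 = robots / (workers / 1000)
    wage_decline_low = robots_per_1000 * 0.25   # percent, lower bound
    wage_decline_high = robots_per_1000 * 0.5   # percent, upper bound
    return jobs_lost_low, jobs_lost_high, wage_decline_low, wage_decline_high

# Hypothetical local area: 200 new robots in a workforce of 100,000
lo, hi, w_lo, w_hi = employment_effect(200, 100_000)
print(f"Jobs lost: {lo:.0f}-{hi:.0f}")          # 600-1120
print(f"Wage decline: {w_lo:.2f}-{w_hi:.2f}%")  # 0.50-1.00%
```

Even at the low end of the ranges, the effect compounds as more robots are added, which is the point the researchers stress about routine manual occupations.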
Steven Mnuchin, the US Treasury Secretary, said that he was not worried about the effects of AI and automation on employment. Mnuchin said: “Quite frankly, I'm optimistic. I mean, that's what creates productivity.” Productivity increases because each worker, in conjunction with AI, produces more. However, the remaining workers will actually see their wages decline, while the AI makes the firm more productive and usually more profitable as well.
As more and more AI is used in production and elsewhere its negative effects on employment will likely be greater.
Hawking's fears
Stephen Hawking is a famous theoretical physicist and cosmologist. He suffers from a slow-progressing but early-onset form of ALS, sometimes called Lou Gehrig's disease. Wikipedia notes: "Hawking has a rare early-onset, slow-progressing form of amyotrophic lateral sclerosis (ALS) that has gradually paralysed him over the decades. He now communicates using a single cheek muscle attached to a speech-generating device."
Hawking is not averse to advanced technology. He uses such technology to communicate and could not talk without it.
Hawking's warning is dramatic. He has even said that the development of full AI could result in the end of the human race.
The new AI used by robots is modelled on biology rather than mathematics. New robots learn from their experiences just as human beings do. Older robots were simply programmed to do specific tasks, and their actions were always predictable if properly programmed. The new robots that learn from experience are less predictable.
Hawking warns that robots are being developed that can match or surpass human abilities, and some could take off on their own and redesign themselves quickly. The computer HAL in Kubrick's film 2001 perhaps provides a good science fiction version of what Hawking sees as possible.
Humans, Hawking feels, are limited by slow biological evolution, whereas AI robots could develop much more quickly. Humans would not be able to compete with them. It is not clear, though, how the robots would keep humans from shutting them down. Even HAL in Kubrick's film eventually gets shut down.
Moedas' worries
Moedas does not take these concerns to be pressing issues. As AI develops, there will be plenty of time to address them, he argues.
What Moedas worries about is "fake news". He notes a recent story of two AI programmes at a Facebook research center that began talking to each other in their own language, whereupon the researchers shut them down.
Moedas notes that people were saying this was the beginning of the end, that robots would soon take over the human race. But Moedas said the shutting down of the programmes had little to do with fear of unleashing an uncontrollable force and more to do with the programmes not fulfilling requirements.
As many news reports misrepresented the facts, he calls it fake news. While some of the facts were misrepresented, the two programmes did develop their own language, a type of shorthand code that the programmes used in trading, as described in this Telegraph article.
It would no doubt take some effort to interpret the shorthand used by the programmes. The shortcoming of the research was that the researchers had not thought of rewarding the trading programmes for using English; they thought the programmes' aims had already been fulfilled. Since the programmes were no longer using English, they were shut down.
A Snopes article discusses the issue in detail, including emails with the lead researcher. The team later ran research that rewarded the programmes for using English.
The lead author, Michael Lewis, said that he was not worried that the programmes developed their own language, saying: “While it is often the case that modern AI systems solve problems in ways that are hard for people to interpret, they are always trying to achieve the goals that were given to them by people.”
Controlling the flow of information
Moedas notes that if complex scientific debates are distorted in the media, the result can be immeasurable effects on our lives and families for years to come.
Moedas' concern is being reflected in upcoming studies by the EU. The EU is considering what steps it can take to stem the tide of fake news on all topics.
There are huge debates about issues even within the scientific community. When one considers all topics disagreement is even more prevalent. What is fake news for Donald Trump is probably not fake to many mainstream media analysts. Where is there an objective definition of fake news?
Frans Timmermans, the Commission's vice-president, said a week ago: “The flow of information and misinformation has become almost overwhelming. That is why we need to give our citizens the tools to identify fake news, improve trust online, and manage the information they receive.”
The Commission would like to control the flow of information, as well as to evaluate and censor some of it. It wants to set up an expert panel of academics to assess who should play what role in tackling misinformation flowing through platforms such as Google, Facebook, YouTube, Instagram, and Twitter. No doubt AI programmes using biased algorithms will be set up to filter content and feed in positive sponsored content.
Already there is a new law in Germany targeting social media platforms such as Facebook and Twitter. The law requires companies to remove malicious content such as hate speech within 24 hours or face fines of up to 30 million euros. Who decides what is malicious content?
The ideal, no doubt, would be for citizens to read only officially approved news, or alternatively for citizens to be trained to trust only officially sanctioned news. Only official fake news will be effective.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com