Who's To Blame if an Autonomous Machine Kills?

By Jerry Kennard, Aug 21, 2009 in Technology
Within a decade we could be interacting with machines that have the capability to learn and make their own decisions. But can a machine be held responsible for its actions if something goes badly wrong?
According to a recent report from the Royal Academy of Engineering, society needs to consider the social, legal and ethical issues involved in the use of autonomous systems.
We're becoming ever more familiar with driverless trains, autopilot systems on aircraft, and smart houses that control lighting and temperature. We have handed many dull, repetitive and dangerous jobs to robots that will never get bored, never get angry, and will do exactly what they are programmed to do, but no more.
Perhaps less familiar is the notion of a machine that can learn, develop its skills and solve problems as it goes along. These sound more like human qualities, and with this level of complexity comes an increased prospect of error. It's a complex issue. A report in the Guardian points to some of the huge benefits of such technology: cars that could reduce congestion and the number of deaths on the road, or surgical techniques performed with a precision no human could match and repeated as often as necessary.
The speed of technological advance makes all these and other things a real possibility, which raises the question of what happens if something goes badly wrong. According to Chris Elliott, a systems engineer, barrister and visiting professor at Imperial College London, the concept of a machine being responsible in any way is a problem for the legal profession. "If you take an autonomous system and one day it does something wrong and it kills somebody, who is responsible?" he asks. "Is it the guy who designed it? What's actually out in the field isn't what he designed, because it has learned throughout its life. Is it the person who trained it?"
Then there is the issue of choice. What if a loved one could choose to be operated on by a highly advanced machine rather than a human? If they die on the operating table, do you then blame the machine? Elliott believes we need to confront such questions now if we are to resolve the ethical and legal issues of living with autonomous machines.