
New technique aims to teach robots exactly what humans want

By Tim Sandle     Jul 9, 2019 in Technology
Technologists are developing processes to ensure robots understand better what humans want. This involves coming up with more accurate and faster ways of providing human guidance to improve the decision making and responses of autonomous robots.
How do robots understand what humans are thinking? This is a challenging area and key to the next phase of further developing machine intelligence.
Imagine that a robot is told to move faster; the instruction is given and the robot proceeds to spin around very quickly. Yet the human instructor wanted the robot to cover a given distance more quickly. What was missing was any instruction about the direction the robot should take; the robot, in turn, failed to anticipate what the human instructor expected.
This is a simple, hypothetical example, but it emphasizes the problem. To what degree do humans need to come up with increasingly precise instructions? Or is it more appropriate for robots to learn to better anticipate what humans intend? The latter course of action is occupying researchers at Stanford University.
Robot anticipation
To address such challenges, the researchers combined two different ways of setting goals for robots into a single process. It was found that this new pathway performed far better than either of its parts alone, as measured in simulations and real-world experiments.
This type of two-factor development is important, according to Andy Palan, who co-led the research: “In the future, I fully expect there to be more autonomous systems in the world and they are going to need some concept of what is good and what is bad. It's crucial, if we want to deploy these autonomous systems in the future, that we get that right.”
The learning process mixes two inputs: people demonstrating to the robot what to do, and answers to multiple-choice questions fed into the robot's program (collected from several people responding to the same question set, so as to capture the natural variation in human responses). Together, these forms of data help the robot better anticipate what a human expects.
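The article does not give implementation details, but a common way to combine these two signals is to fit a single reward function to both: demonstrations via a maximum-entropy likelihood over a finite set of alternative behaviors, and multiple-choice answers via a Bradley-Terry model of pairwise preferences. The sketch below assumes a linear reward over hand-chosen trajectory features (speed and distance covered, echoing the "move faster" example); these feature choices and the gradient-ascent setup are illustrative assumptions, not the researchers' actual method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def learn_reward(demo_phi, candidate_phis, pref_pairs, steps=2000, lr=0.1):
    """Fit linear reward weights w so that r(trajectory) = w @ phi(trajectory),
    using one demonstration plus pairwise preference answers."""
    w = np.zeros(demo_phi.shape[0])
    for _ in range(steps):
        # Demonstration term (max-entropy style, over a finite candidate set):
        # pull w toward the demonstrated features and away from the
        # softmax-weighted average of the alternatives.
        p = np.exp(candidate_phis @ w)
        p /= p.sum()
        grad = demo_phi - p @ candidate_phis
        # Preference term (Bradley-Terry): each answer "A preferred over B"
        # pushes w in the direction phi_A - phi_B, scaled by how unsure the
        # current model is about that comparison.
        for phi_a, phi_b in pref_pairs:
            d = phi_a - phi_b
            grad += sigmoid(-(w @ d)) * d
        w += lr * grad
    return w

# Toy features: (spin speed, distance covered) -- hypothetical values.
demo = np.array([0.8, 0.9])                 # demonstrated behavior
candidates = np.array([[0.8, 0.9],          # the demo itself
                       [0.9, 0.1],          # spins fast, covers no ground
                       [0.1, 0.2]])
# One multiple-choice answer: covering distance preferred over spinning.
prefs = [(np.array([0.8, 0.9]), np.array([0.9, 0.1]))]
w = learn_reward(demo, candidates, prefs)
```

With both signals, the learned weights favor distance covered over raw spin speed, so the "spin in place" behavior scores below the demonstrated one, which is the kind of disambiguation the combined approach is after.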
Brainy choices
In related news, a different team of computer scientists has identified a network of brain regions that work together to determine whether a particular robot is a worthy social partner for a human. For this study, the scientists used functional magnetic resonance imaging to assess brain activity in the prefrontal cortex and amygdala. The human subjects were asked to score images of robots on the likability, familiarity, and human-likeness of different robot designs.
At the end of the scoring process, each subject was asked to select the robot from which they would most like to receive a gift. The outcome was that people prefer more lifelike robots, but eschew robots that appear "too human". People aren't ready for the lifelike android just yet.
In terms of the brain regions involved in selecting the most appropriate robots, activity was found in the ventromedial prefrontal cortex (a region associated with processing risk and fear). This is among the first research into how people assess artificial social partners. The research is published in The Journal of Neuroscience: "Neural Mechanisms for Accepting and Rejecting Artificial Social Partners in the Uncanny Valley."