
Machines appear more persuasive when pretending to be human

By Tim Sandle     Nov 28, 2019 in Technology
A new study shows that when robots disclose their non-human nature, their efficiency is compromised. Conversely, when robots act more 'human', interaction with people improves considerably.
With the research, scientists at Max-Planck-Gesellschaft and New York University set out to assess whether cooperation between humans and machines changes when the machine purports to be human. By running an experiment in which humans interacted with bots, the scientists found that bots outperform humans in certain human-machine interactions. However, this only happened when the bots were allowed to hide their non-human identity.
The research involved 700 volunteers taking part in an online cooperation game (the "prisoner's dilemma"), in which they interacted with either a human or an artificial partner. In each round, a player could either act selfishly to exploit the other player, or act cooperatively, creating advantages for both sides.
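The incentive structure of a prisoner's dilemma round can be sketched as a simple payoff table. The payoff values below are illustrative assumptions for the classic game, not the figures used in the study:

```python
# Illustrative prisoner's dilemma payoffs (assumed classic values,
# not those used in the study described in the article).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual advantage
    ("cooperate", "defect"):    (0, 5),  # exploited by the partner
    ("defect",    "cooperate"): (5, 0),  # exploits the partner
    ("defect",    "defect"):    (1, 1),  # both try to exploit, both lose out
}

def play_round(action_a: str, action_b: str) -> tuple[int, int]:
    """Return (payoff for player A, payoff for player B) for one round."""
    return PAYOFFS[(action_a, action_b)]

# Mutual cooperation beats mutual defection for both players,
# but defecting against a cooperator pays the most individually.
print(play_round("cooperate", "cooperate"))  # (3, 3)
print(play_round("defect", "cooperate"))     # (5, 0)
```

This tension, where exploiting a cooperative partner pays best individually while mutual cooperation pays best collectively, is what makes the game a standard probe of trust between partners.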
A key aspect of the experiment was that the researchers gave selected subjects false information about their gaming partner's identity. Some subjects were told the player they were interacting with was a bot; others were told they were up against another human.
Varying the opponent's stated identity enabled the scientists to assess whether people are prejudiced against gaming partners they believe to be machines rather than humans.
The research revealed that machines impersonating humans were more successful at convincing their gaming partners to cooperate, provided the human player thought the machine was human. Once the human player became aware they were up against a machine, cooperation rates fell.
In practice, this might suggest that a help desk run by bots could be more efficient than one run by people, but only if the bots' identity remained unknown.
According to lead researcher, Dr. Talal Rahwan: “Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are.”
It will be important to ensure there is also a framework in place for designing machine learning algorithms to make it easier for designers to specify safety and fairness constraints for robots. This will help build trust between humans and machines.
The research has been published in the journal Nature Machine Intelligence. The research paper is titled “Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation.”