
Machines appear more persuasive when pretending to be human

With the research, scientists at Max-Planck-Gesellschaft and New York University set out to assess whether cooperation between humans and machines changes when the machine purports to be human. By running an experiment in which humans interacted with bots, the scientists found that bots outperform humans at eliciting cooperation in certain human-machine interactions. However, this only happened when the bots were allowed to hide their non-human identity.

The research involved 700 volunteers who took part in an online cooperation game (the “prisoner’s dilemma”), in which each interacted with either a human or an artificial partner. In each round of the game, a player can either act selfishly, exploiting the other player, or act cooperatively, creating advantages for both sides.
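The incentive structure described above can be sketched as a simple payoff table. Note this is a minimal illustration using the textbook payoff values (temptation 5, reward 3, punishment 1, sucker's payoff 0); the article does not state the actual payoffs used in the study.

```python
# Minimal sketch of one round of a prisoner's dilemma.
# Payoff values are standard textbook defaults, not the study's actual parameters.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation benefits both
    ("cooperate", "defect"):    (0, 5),  # the cooperator is exploited
    ("defect",    "cooperate"): (5, 0),  # the defector exploits the cooperator
    ("defect",    "defect"):    (1, 1),  # mutual defection leaves both worse off
}

def play_round(move_a: str, move_b: str) -> tuple:
    """Return (payoff_a, payoff_b) for a single round."""
    return PAYOFFS[(move_a, move_b)]
```

The dilemma arises because defecting always yields a higher individual payoff against either move, yet mutual cooperation (3, 3) beats mutual defection (1, 1), which is why the willingness of one player to trust the other matters.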

The key manipulation in the experiment was that researchers gave selected subjects false information about their gaming partner’s identity. Some subjects were told the player they were interacting with was a bot; others were told they were up against another human.

Varying the opponent’s declared identity enabled the scientists to assess whether humans are prejudiced against gaming partners based on a human or robot identity.

The research revealed that machines impersonating humans were more successful in convincing their gaming partners to cooperate, provided the person playing thought the bot was human. When the human player became aware they were up against a machine, cooperation rates fell.

In practice this might suggest that a help-desk run by robots could be more efficient than desks run by people, but only if the identity of the robots was unknown.

According to lead researcher, Dr. Talal Rahwan: “Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are.”

It will also be important to establish frameworks for designing machine learning systems that make it easier for designers to specify safety and fairness constraints for bots. This would help build trust between humans and machines.

The research has been published in the journal Nature Machine Intelligence. The research paper is titled “Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
