
Artificial Conversational Entities: Can A Machine Act Human and Be Given 'Rights'?

By Susan Duclos     Oct 5, 2008 in Technology
An annual test starting next Sunday will determine whether computers can converse well enough to fool judges into believing they are human, a trial known as the "Turing Test." If machines are determined to have a "consciousness," will they then be given "rights"?
Next Sunday, six human "interrogators" will sit at six split-screen computers, with one side of each screen controlled by a human and the other by a computer program, aka an "artificial conversational entity." The interrogators will then begin text-based conversations on any subject they choose.
Five minutes after the conversations start, the judges will be asked to determine which answers came from a human and which from the artificial conversational entities. If an interrogator gets the answer wrong, choosing a computer instead of the human, then the program will have fooled that judge and passed the test.
Turing Test.
The test itself is called the "Turing test," named after the mathematician Alan Turing, who asked a pointed question more than half a century ago: "Can machines think?"
In his 1950 paper "Computing Machinery and Intelligence," Turing, who had helped crack German military codes during WWII, proposed a test he called "the imitation game," which he hoped would settle the issue of machine intelligence, although no intelligent machine was part of the original game.
Originally the test involved a man, a woman and a judge, all in separate rooms; the judge had to determine which of the two subjects he was communicating with was the man, with the man trying to trick the judge into guessing wrong.
Turing then modified that initial game: instead of a man, a woman and a judge in three rooms, there would be a human (either a man or a woman), a computer and the judge. The judge then had to determine which of the two being questioned was the human and which was the machine.
No artificial intelligence program has passed the test to date.
Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.
Opinions on the Turing test differ. According to Professor Kevin Warwick, a cyberneticist at the University of Reading, where the test is being conducted, a program needs only to leave 30 percent or more of the interrogators unsure of which side of their split screen is human and which is the computer for it to have passed the test according to Turing's criteria.
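Warwick's 30 percent criterion amounts to a simple calculation. A minimal sketch in Python (the function name and verdict labels here are illustrative, not taken from the contest rules):

```python
def passes_turing_test(verdicts, threshold=0.30):
    """Return True if the fraction of interrogators left unsure,
    or fooled outright, meets the 30 percent criterion described
    by Professor Warwick."""
    fooled = sum(1 for v in verdicts if v in ("unsure", "fooled"))
    return fooled / len(verdicts) >= threshold

# Six interrogators, as in the Reading contest; two fail to
# identify the program: 2/6 is roughly 33 percent, so it passes.
print(passes_turing_test(
    ["human", "unsure", "human", "fooled", "human", "human"]))  # True
```

With six judges, two misjudgements (33 percent) would clear the bar, while one (17 percent) would not.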
Warwick admits there will be critics of this test, saying: "You can be flippant, you can flirt, it can be on anything. I'm sure there will be philosophers who say, 'OK, it's passed the test, but it doesn't understand what it's doing.'"
Philosopher A.C. Grayling of Birkbeck College, University of London, is one such critic. He states: "The test is misguided. Everyone thinks it's you pitting yourself against a computer and a human, but it's you pitting yourself against a computer and computer programmer. AI is an exciting subject, but the Turing test is pretty crude."
The computer programs taking part in the Turing test are Alice, Brother Jerome, Elbot, Eugene Goostman, Jabberwacky and Ultra Hal.
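Entrants like these descend from classic pattern-matching chatbots in the ELIZA tradition, which pair trigger patterns with canned reply templates. A minimal sketch of that technique (the rules and replies below are invented for illustration and are not taken from any of the listed programs):

```python
import re

# ELIZA-style responder: each rule pairs a regex with a reply
# template; the first matching rule wins, with a fallback at the end.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried about the test"))
# Why do you say you are worried about the test?
```

Echoing the user's own words back is what lets such shallow programs sustain a five-minute conversation without any real understanding, which is precisely Grayling's objection.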
Loebner Prize in Artificial Intelligence.
The creators of these artificial conversational entities are in competition to win an "18-carat gold medal and $100,000 offered by the Loebner Prize in Artificial Intelligence."
Although Turing proposed this testing procedure in 1950, it wasn't until 1991, four decades later, that the test was actually implemented, by Dr. Hugh Loebner, who wanted to see artificial intelligence succeed.
Loebner offered $100,000 to the first entrant who could pass the Turing test. There were problems with the initial test in 1991, so in 1995 the contest was re-opened and has become an annual contest that no one has won to date.
Will Alice, Brother Jerome, Elbot, Eugene Goostman, Jabberwacky or Ultra Hal be the first artificial conversational entity to pass the test and bring us one step closer, technology wise, to a machine which can think and act enough like a human being to fool a judge?
Robot Rights.
As the original Guardian article points out, if this test is passed, or perhaps the terminology should be "when" it is eventually passed, humans will then have to grapple with the issue of consciousness: whether a computer can have a "consciousness," and if so, whether humans will have the right to turn such machines off and on.
As recently as December 2006, a British government-commissioned report sponsored by Sir David King, the UK government's chief scientist, brought the question of robot rights to the attention of the media and was reported on by the Financial Times.
One of the 270 forward-looking papers submitted for the report covered the topic of "robots' rights."
The basic theme is what happens if humans, in their quest for technological advancement, manage to create a machine with a "consciousness." Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology, stated: "If we make conscious robots they would want to have rights and they probably should."
Christensen goes on to make the point: "There will be people who can't distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner so we do not move our boundaries of what is acceptable."
A report called The Horizon Scan takes this a step further and speaks of specific rights for robots, stating: "If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time."