Artificial intelligence is showing racial and gender biases

By Tim Sandle     Apr 14, 2017 in Technology
Bath - Computers and apps are increasingly adopting 'artificial intelligence', at least to the level where human speech can be understood. Have such devices been trained on data sets that do not include a diverse range of people?
Concerns have recently been expressed that many of the algorithms increasingly making decisions about our lives are trained on data sets that do not include a diverse range of people. This applies to apps and computer programs that fall within the broad definition of 'artificial intelligence', ranging from smart devices that recommend the television shows we watch to voice-activated software. 'Artificial intelligence' (AI) is the term applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Innovative as many of these functions are, they are not "intelligence" as might be measured by the Turing Test (a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human).
Nevertheless, machines and programs that can adapt to what humans are saying and which attempt to preempt our thoughts and actions are commonplace. While many of these represent technological advancement, technologists as well as sociologists are starting to question whether biases on the part of the programmers are becoming apparent in the way that the programs execute their functions.
This has been raised by Joanna Bryson, a computer scientist at the University of Bath. Dr. Bryson told The Guardian: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”
According to Live Science, psychologists have established that the human brain makes associations between words based on their underlying meanings. Using a method called the Implicit Association Test, psychologists measure reaction times to gauge how strongly people associate words and images. Repeated tests have shown that objects like flowers are rapidly associated with positive concepts, while weapons, for example, are more quickly associated with negative concepts.
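To make the idea concrete, the sketch below shows one simplified way such reaction-time data could be turned into an association score. The reaction times, the pairing conditions and the scoring formula are assumptions chosen for illustration, not the exact procedure psychologists use.

```python
# Illustrative sketch: a simplified association score from reaction times.
# All numbers and the scoring formula are hypothetical, for illustration only.
from statistics import mean, stdev

# Hypothetical reaction times (milliseconds) for two pairing conditions:
# "congruent"   - e.g. flowers and pleasant words share a response key
# "incongruent" - e.g. flowers and unpleasant words share a response key
congruent_rt = [612, 588, 640, 599, 575, 630, 605]
incongruent_rt = [702, 688, 731, 695, 710, 684, 725]

def association_score(congruent, incongruent):
    """Standardized difference in mean reaction time.

    A positive score means responses were faster in the congruent
    condition, i.e. the two concepts are more readily associated.
    """
    pooled_sd = stdev(congruent + incongruent)
    return (mean(incongruent) - mean(congruent)) / pooled_sd

print(f"association score: {association_score(congruent_rt, incongruent_rt):.2f}")
```

The larger the score, the faster people pair the two concepts, which is the pattern the flower and weapon experiments reveal.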
It follows, according to Bryson, that AI has the potential to reinforce existing biases; however, programs cannot make the types of distinctions that humans can, because, unlike humans, algorithms are not equipped to consciously counteract learned biases. She adds: “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad."
Dr. Bryson has written a paper, with colleagues, exploring these issues for the journal Science. The specific focus is on a machine learning technique called “word embedding”, which represents words as numerical vectors so that words used in similar contexts end up close together. This technique is starting to transform the way computers interpret speech and text, and it might become the basis for machines to develop human-like abilities such as common sense and logic, and thus pass the Turing Test.
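A rough sense of how such embeddings can absorb human associations: once words are vectors, the similarity between two vectors acts as an association measure, much like the reaction times above. The tiny vectors below are invented purely for illustration; real embeddings are learned from large text corpora and have hundreds of dimensions.

```python
# Illustrative sketch: reading associations out of word embeddings.
# The 3-dimensional vectors are made up for illustration only.
import math

embeddings = {
    "flower":     [0.9, 0.1, 0.2],
    "weapon":     [0.1, 0.9, 0.3],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.4],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attr_a, attr_b):
    """How much more strongly `word` leans toward attr_a than toward attr_b."""
    w = embeddings[word]
    return cosine(w, embeddings[attr_a]) - cosine(w, embeddings[attr_b])

for word in ("flower", "weapon"):
    print(word, round(association(word, "pleasant", "unpleasant"), 3))
```

If the training text routinely pairs certain groups of people with certain attributes, the same arithmetic will reproduce that pairing, which is how human prejudice ends up inside the model.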
The research paper is titled "Semantics derived automatically from language corpora contain human-like biases."