
How to eliminate social bias from artificial intelligence?

By Tim Sandle     Aug 19, 2017 in Science
Boston - Artificial intelligence is being used to make more and more decisions that affect our daily lives: whether we get a loan, how an exam paper is marked, how evidence in a court case is analyzed, and so on. What happens if the software is socially biased?
Given that computers, software, artificial intelligence, machine learning and other 'intelligent' systems are human creations, it makes sense that such systems may contain social biases. Social biases may include interpretations of gender, ethnicity or social class, for example. Researchers from the University of Massachusetts Amherst have put forward the case that greater care needs to be taken when developing artificial intelligence so that social biases are minimized.
Outlining this, Professor Alexandra Meliou, who heads up the university's College of Information and Computer Sciences, states: "The increased role of software and the potential impact it has on people's lives makes software fairness a critical property." She adds that "data-driven software has the ability to shape human behavior: it affects the products we view and purchase, the news articles we read, the social interactions we engage in, and, ultimately, the opinions we form."
As an example, the researcher notes that ethnic bias exists in online advertising delivery systems. Here, a search on a first name that is more typically associated with a minority group is more likely to produce adverts for services aimed at people who have been arrested for criminal activities. In another test, the researcher found that a decision-tree-based machine learning system, which its designers had said did not discriminate on gender, was in fact discriminating against women 11 percent of the time.
To aid developers in spotting social bias, Professor Meliou has created a new technique termed "Themis," whose aim is to automatically test software for discrimination. It is hoped that Themis will help stakeholders better understand software behavior and assess when unwanted bias is present. These measures should, over the long term, improve artificial intelligence and help eliminate bias. In Greek mythology, Themis was an ancient Greek Titaness, described as the Lady of Good Counsel and the personification of divine order, fairness, law, natural law, and custom.
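The kind of automated check described above can be sketched roughly as follows: generate many inputs, flip only a protected attribute (here, gender), and count how often the software's decision changes. This is a minimal illustration of the idea, not Themis itself; the toy `loan_model` and all names and thresholds are invented for this example.

```python
import random

def loan_model(applicant):
    # Hypothetical stand-in for a system under test: it approves on
    # income alone above one threshold, but below that it also
    # (inadvertently) consults gender.
    if applicant["income"] > 50000:
        return True
    return applicant["gender"] == "male" and applicant["income"] > 40000

def causal_discrimination_rate(model, samples):
    """Fraction of inputs whose outcome flips when only gender changes."""
    flipped = 0
    for applicant in samples:
        counterpart = dict(
            applicant,
            gender="female" if applicant["gender"] == "male" else "male",
        )
        if model(applicant) != model(counterpart):
            flipped += 1
    return flipped / len(samples)

# Randomly generated test inputs, as an automated tester might produce.
random.seed(0)
samples = [
    {"income": random.randint(20000, 80000),
     "gender": random.choice(["male", "female"])}
    for _ in range(1000)
]

rate = causal_discrimination_rate(loan_model, samples)
print(f"gender changes the decision for {rate:.0%} of tested applicants")
```

Any rate above zero means the protected attribute alone can change the outcome, which is the signal a fairness-testing tool would flag to developers.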
The new test is to be discussed at the European Software Engineering Conference (ESEC/FSE 2017) which is taking place in September 2017 in Paderborn, Germany.