“Artificial Intelligence systems are only as good as the data we put into them,” notes a recent IBM article on human bias in AI systems.
Machines are not inherently biased, but when the data fed into them is biased in some way, anything the machine produces retains that deficiency. And until recently, few have made a serious attempt to solve this problem.
As IBM explains, the biggest problem arises from bad data that can contain implicit racial, gender or ideological biases. But IBM also believes that bias can be tamed, and that the AI systems that tackle it will be the most successful.
It’s all in the algorithmic model
There are more than 180 human biases that have been defined and classified, and any one of them can affect how we make decisions. According to The Next Web, these include “confirmation bias” (accepting a result because it confirms a prior belief) and “availability bias” (giving greater weight to information that comes readily to mind than to equally valuable but less familiar information). All of these can compound through the use of biased data sets, degrading the quality of AI systems and undermining their intended functions.
A team of scientists from the Czech Republic and Germany recently completed a study on bias and AI. The research concludes that when human mistakes become part of the training rules that shape a machine-learning model, we are not really creating artificial intelligence; we are merely encoding our own flawed observations.
Satya Nadella, Microsoft’s CEO, penned an article on the partnership between humans and AI, noting that the most productive debate isn’t about whether AI is good or evil, but about “the values instilled in the people and institutions creating this technology.”
He outlined six principles that industry and society alike need to discuss and debate as we delve further into AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. Applying these principles also entails weeding out bias, whether intentional or unintentional. One way engineers make a principle like fairness concrete is to measure it, as the sketch below illustrates.
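As a minimal, hypothetical illustration (nothing here comes from Nadella’s article; the data, threshold and function name are invented for the example), a common first check is the “demographic parity gap”: the difference in a model’s favorable-decision rate between two groups.

```python
# A minimal, hypothetical fairness audit: compare a model's rate of
# favorable decisions across two groups. All data below is invented
# purely for illustration.
import numpy as np

def demographic_parity_gap(predictions, group):
    """Absolute difference in favorable-decision rates between groups 0 and 1."""
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical model outputs (1 = favorable decision) and group labels.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"Demographic parity gap: {demographic_parity_gap(preds, group):.2f}")
# Prints 0.60: group 0 receives favorable decisions 80% of the time,
# group 1 only 20% of the time -- a gap that large would flag the
# model for review.
```

A gap near zero does not prove a model is fair, but a large one is a concrete, auditable signal that accountability reviews can act on.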
Taking responsibility for the data
Here is an example of unintended bias created by the data fed into a machine-learning system: an AI model for hiring recommendations is trained solely on records of existing and past employees. What if those employees are not diverse? Perhaps they are all young white men. The resulting model would likely be unfairly biased against candidates who are older, female, or members of racial minorities. The sketch below shows how that happens.
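As a minimal sketch of the mechanism (every record, feature and label here is synthetic and hypothetical, not drawn from any real hiring system), a model trained on skewed historical decisions simply reproduces them:

```python
# A toy illustration of how a hiring model inherits bias from its
# training data. The point is the mechanism, not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "past employee" records: [age, is_female, skill_score].
n = 1000
age = rng.integers(22, 60, n)
is_female = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Suppose historical hires went almost exclusively to young men,
# largely regardless of skill.
hired = ((age < 35) & (is_female == 0) & (skill > -0.5)).astype(int)

X = np.column_stack([age, is_female, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two equally skilled candidates: a 28-year-old man and a 45-year-old woman.
candidates = np.array([[28, 0, 1.0],
                       [45, 1, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The model scores the woman far lower despite identical skill: it has
# learned the historical pattern, not merit.
```

Note that nothing in the pipeline is malicious; the bias lives entirely in the training labels, which is exactly why it is so easy to miss.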
The responsibility to eliminate bias in AI is going to be front and center as more AI-powered systems come into play throughout society. We will soon see vehicles operated by machines, and a large number of surgeries and medical procedures will be conducted by robots. That is going to put AI developers in the spotlight when tragedy strikes and people look for someone to blame.
The MIT-IBM Watson AI Lab believes it is essential to mitigate bias in artificial intelligence systems if we are to build trust between humans and machines that learn. As the article says, “In the process of recognizing our bias and teaching machines about our common values, we may improve more than AI. We might just improve ourselves.”