http://www.digitaljournal.com/tech-and-science/technology/microsoft-talks-about-embarrassing-bias-in-ai-powered-apps/article/503682

Microsoft talks about 'embarrassing' bias in AI-powered apps

Posted Sep 28, 2017 by James Walker
Biased AI risks jeopardising the technology's potential as the enabler of a new generation of connected services and devices. In a new blog post, Microsoft's design team discussed how bias occurs and ways to avoid it, but there is no easy solution.
Image: Microsoft Seeing AI (credit: Microsoft)
Machine learning algorithms are susceptible to bias because of the way they are trained. Most current generation models have to be fed vast amounts of training data before they can operate independently. Any biases in the training dataset end up being reflected in the operation of the model, because it's never "learnt" any other approach to its task.
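The mechanism described above can be illustrated with a deliberately minimal, hypothetical sketch: a trivial "most common label" model trained on a skewed dataset simply memorises and reproduces the imbalance. The group names, labels and counts below are invented for illustration, not drawn from any real system.

```python
from collections import Counter

# Hypothetical, skewed training set: almost every "approve" example
# comes from group_a, almost every "reject" from group_b.
training_data = (
    [("group_a", "approve")] * 9 + [("group_a", "reject")] * 1 +
    [("group_b", "approve")] * 1 + [("group_b", "reject")] * 9
)

def train(examples):
    """Learn, per group, the most common label seen in training."""
    counts = {}
    for group, label in examples:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(training_data)
# The "model" reproduces the skew in its data: group_a is always
# approved and group_b always rejected, because it has never seen
# any other pattern to learn from.
```

Real models are vastly more complex, but the failure mode is the same: the output can only reflect the distribution of the data the model was fed.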
Biased AI could have deep consequences for society. Whether the bias results in misclassified images in a photo app or the wrongful rejection of an insurance claim, machines that discriminate could erode human trust in artificial intelligence and tear communities apart.
Inclusive design
In a post on Medium, Microsoft's Inclusive Design team argued that one way to tackle the problem is to maintain an inclusive mindset throughout software design. Teams must understand how bias occurs so they can identify potential entry points ahead of time. When the AI model is created, the risk areas identified earlier should be scrutinised to ensure data is always handled impartially. Creating inclusive AI hinges on recognising how bias infects machine learning systems.
"Bias in AI will happen unless it's built from the start with inclusion in mind," said Microsoft. "The most critical step in creating inclusive AI is to recognize where and how bias infects the system."
Five steps to inclusivity
Microsoft has worked with industry leaders and academic groups to create a method of identifying biased AI. The five-step technique uses metaphors based on childhood situations. The scenarios could be used by teams building AI to recognise any bias that creeps into their models during development.
The first situation addresses the issue of training data not representing the level of diversity in an AI model's user base. The second one concerns the secondary impact of this, when a model's training reinforces cultural biases such as gender assumptions. Microsoft encouraged teams to challenge any associations they inadvertently create, removing gender or ethnicity-based data labels unless they're directly relevant to the application.
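Microsoft's advice to remove gender or ethnicity-based labels unless they are directly relevant could, in practice, look something like the following sketch. The field names, record shape and `strip_sensitive` helper are all hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical sketch: drop demographic fields from training records
# unless the application has a documented, direct need for them.
SENSITIVE_FIELDS = {"gender", "ethnicity"}

def strip_sensitive(record, relevant_fields=frozenset()):
    """Return a copy of the record without sensitive labels,
    keeping only those explicitly flagged as relevant."""
    return {k: v for k, v in record.items()
            if k not in SENSITIVE_FIELDS or k in relevant_fields}

record = {"age": 34, "gender": "female", "income": 52000}
cleaned = strip_sensitive(record)
# "gender" is removed by default; passing
# relevant_fields={"gender"} would retain it.
```

The design point is that retaining a sensitive field becomes an explicit, reviewable decision rather than the silent default.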
The remaining scenarios look at AI accountability, human interference with AI models and oversimplification of subjective thought processes. Combined, the five questions are meant to give firms a starting point when they need to identify bias in their algorithms.
Microsoft said the situations were chosen because they're universally relatable and fit into a broader metaphor of AI still being in its infancy. The company said the issues they express are fairly common in the industry, with "most people" having anecdotal experience of significant AI bias. It said that publicly releasing biased models could create "embarrassing, offensive outcomes" that are detrimental to society and the industry.
AI inclusion, accountability and ethics will be essential to the technology's long-term place in society. The questions these areas raise can be more challenging to address than actually building machine learning models. Several companies are exploring AI ethics codes that could help standardise what's acceptable. For now, though, AI development continues apace, with responsibility for defining acceptable bias levels and eradicating ethical assumptions falling to individual teams.