Is there racial and gender bias in Amazon Rekognition AI?

By Tim Sandle     Feb 10, 2019 in Technology
A new study using algorithmic auditing, a key strategy for exposing systematic biases embedded in software platforms, has raised concerns about racial and gender bias in Amazon's Rekognition AI.
As reported by the New York Times, new tests of facial recognition technology suggest that Amazon's system has more difficulty identifying the gender of female and darker-skinned faces than similar facial recognition services provided by IBM and Microsoft. Amazon's Rekognition is a software application that sets out to identify specific facial features by comparing similarities across a large volume of photographs.
The study is significant given that Amazon has been marketing its facial recognition technology to police departments and federal agencies, presenting it as an additional tool to help law enforcement identify suspects more rapidly. This practice has been challenged by the American Civil Liberties Union (see: "Orlando begins testing Amazon's facial recognition in public").
The new study comes from Inioluwa Deborah Raji (University of Toronto) and Joy Buolamwini (Massachusetts Institute of Technology) and is titled "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." The audit approach in the paper is called "Gender Shades," described as the first algorithmic audit of gender and skin type performance disparities in commercial facial analysis models.
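The core idea of such an audit is simple: rather than reporting a single overall accuracy figure, error rates are computed separately for each demographic subgroup so that disparities become visible. The following is a minimal illustrative sketch of that disaggregated-evaluation idea, not the authors' actual code; the subgroup labels and sample data are hypothetical.

```python
# Sketch of a disaggregated audit in the spirit of Gender Shades:
# compute gender-classification error rates per subgroup, so that a
# model that looks accurate overall can still reveal subgroup disparities.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, truth, prediction in records:
        totals[subgroup] += 1
        if truth != prediction:
            errors[subgroup] += 1
    # Error rate per subgroup: misclassified / total seen
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical predictions from a face-analysis system (illustrative only)
sample = [
    ("darker_female", "F", "M"),
    ("darker_female", "F", "F"),
    ("lighter_male", "M", "M"),
    ("lighter_male", "M", "M"),
]
rates = subgroup_error_rates(sample)
print(rates)
```

The gap between the subgroup error rates, rather than the aggregate accuracy, is what an audit of this kind surfaces.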
Expanding on the research on Medium, Buolamwini says: "The main message is to check all systems that analyze human faces for any kind of bias. If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free."
Amazon has disputed the claims, stating that its facial recognition software has been rigorously tested against over one million faces; however, the demographic or phenotypic (skin type) composition of this benchmark has not, according to Buolamwini, been disclosed.
Coming out in support of Amazon is the Information Technology and Innovation Foundation, a U.S. nonprofit public policy think tank, which states that the research did not use Rekognition in the way the company has instructed police to use it. However, Gizmodo reports that police services are not necessarily always using the software as Amazon recommends, which supports a wider interpretation of the software's design.
Furthermore, Buolamwini retorts to Amazon: "What I find interesting is that the company makes no mention of the letter that I sent to Jeff Bezos in June 2018 warning of racial and gender bias in our preliminary studies of Amazon Rekognition. Amazon had significantly more leeway than its peers IBM and Microsoft to course correct, and still they failed on our very easy test in August 2018."
She adds: "Even if the company made an update in November 2018, they knew well before there was documented evidence of bias of systems they had already been selling. Based on this letter, it is clear we make a distinction between gender classification and facial identification - a type of facial recognition."
This is not the first time that concerns have been raised with Amazon's technology. As Digital Journal reported, in December 2018 a letter was written by eight U.S. lawmakers requesting that Amazon chief executive Jeff Bezos explain how the company’s technology works and where it will be used.
There is also a growing concern about the use of facial recognition technology in general. The Verge reports that San Francisco lawmaker Aaron Peskin is to introduce legislation that would make the city the first in the nation to ban the government use of facial recognition technology.