NYC passes bill to study algorithm bias in city agencies

The New York City Council last week passed the Algorithmic Accountability Bill, which mandates “the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed” by such algorithmic systems.

The measure, which Mayor Bill de Blasio is expected to sign by the end of the month, seeks to determine whether algorithms used by city agencies disproportionately affect people “based upon age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation… or citizenship status.” It also aims to increase public understanding of how the city uses algorithms and transparency around their use.

The American Civil Liberties Union (ACLU) says the “task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms.” It will issue a set of recommendations “addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.”

The bill’s sponsor, Councilman James Vacca (D-Bronx), said he was inspired by an investigation by the nonprofit newsroom ProPublica, which found that software used across the country to predict future criminal behavior is biased against black people.

“My ambition here is transparency, as well as accountability,” Vacca told ProPublica.

An algorithm is a set of instructions designed to perform a specific task. Computer algorithms increasingly inform decisions ranging from which products are advertised to consumers, to who is offered bank credit, to where children go to school and who is granted a job interview. Algorithms are even used to forecast where crimes may occur, a scenario unimaginable outside science fiction films just a few years ago.
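
To make the term concrete, here is a minimal, hypothetical sketch in Python of the kind of rule-based decision procedure the article describes; every field name and threshold is invented for illustration and comes from no actual bank or agency system.

```python
# A hypothetical, simplified screening algorithm: a fixed set of
# instructions that turns an applicant's data into a yes/no decision.
# All field names and thresholds here are invented for illustration.

def screen_credit_applicant(applicant: dict) -> bool:
    """Return True if the applicant is offered credit."""
    score = 0
    if applicant["annual_income"] >= 40_000:
        score += 2
    if applicant["years_at_address"] >= 3:
        score += 1
    if applicant["missed_payments"] == 0:
        score += 2
    # The decision reduces to a threshold on a hand-built score.
    return score >= 4

print(screen_credit_applicant(
    {"annual_income": 52_000, "years_at_address": 5, "missed_payments": 0}
))  # True
print(screen_credit_applicant(
    {"annual_income": 30_000, "years_at_address": 1, "missed_payments": 2}
))  # False
```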

However, algorithms, often assumed to be free of the biases people so readily show, can absorb those same human biases, with potentially disastrous consequences.
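
One common route for such bias, sketched below with invented data, is training on historical decisions: a system that imitates past outcomes reproduces whatever discrimination those outcomes contained, even if it never sees a protected attribute directly, because a proxy such as neighborhood can stand in for one.

```python
# A toy illustration (all data invented): historical hiring decisions
# were biased against one neighborhood. A "learned" rule that simply
# imitates those decisions inherits the bias, even though the rule
# never looks at the protected attribute directly.

historical_cases = [
    # (neighborhood, qualified, past_decision)
    ("north", True,  "hire"),
    ("north", True,  "hire"),
    ("north", False, "reject"),
    ("south", True,  "reject"),   # qualified, but rejected in the past
    ("south", True,  "reject"),
    ("south", False, "reject"),
]

def learned_rule(neighborhood: str) -> str:
    """Predict the majority past decision for this neighborhood."""
    decisions = [d for n, _, d in historical_cases if n == neighborhood]
    return max(set(decisions), key=decisions.count)

# Equally qualified applicants are treated differently based on a proxy:
print(learned_rule("north"))  # hire
print(learned_rule("south"))  # reject
```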

Recently, a federal judge unsealed the source code of the Forensic Statistical Tool (FST), software used to estimate the likelihood that a defendant’s DNA is present in a sample, after serious questions were raised about its accuracy. Thousands of criminal cases in New York City relied on the disputed DNA testing techniques.
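
Tools of this kind typically report a likelihood ratio comparing two hypotheses; the sketch below shows only the general shape of such a calculation with invented probabilities, not FST’s actual (and disputed) statistical model.

```python
# The general shape of a forensic likelihood-ratio calculation
# (invented numbers; this is NOT FST's actual model):
#   LR = P(evidence | defendant is a contributor)
#        / P(evidence | defendant is not a contributor)

p_evidence_if_contributor = 0.80      # hypothetical probability
p_evidence_if_not_contributor = 0.02  # hypothetical probability

likelihood_ratio = p_evidence_if_contributor / p_evidence_if_not_contributor
print(f"LR = {likelihood_ratio:.0f}")
# LR = 40: the evidence is 40x more likely under the hypothesis that
# the defendant contributed DNA than under the hypothesis they did not.
```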

Furthermore, algorithms used in facial recognition technology have been shown to be less accurate on black people, women and juveniles, raising the disturbing possibility that innocent people will be wrongfully arrested.

The crime-forecasting technology mentioned above has been plagued by incorrect predictions biased against black people. The New York Police Department (NYPD) plans to spend some $45 million over the next five years developing and deploying a controversial predictive policing program, which critics say reinforces existing biases and contributes to over-policing of targeted areas and populations.

Computer science experts have been warning of biased algorithms for some time. Earlier this year, a group of researchers at New York University (NYU), working with the ACLU, launched the AI Now initiative “to ensure that AI (artificial intelligence) systems are sensitive and responsive to the complex social domains in which they are applied” by finding “new ways to measure, audit, analyze, and improve them.”
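
One of the simplest audits in that spirit compares error rates across demographic groups, the approach at the heart of ProPublica’s analysis. The sketch below uses entirely invented records and hypothetical group labels.

```python
# A minimal bias audit (all records invented): compare false positive
# rates across groups. A false positive is a person the algorithm
# flagged as high risk who did not in fact reoffend.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("A", False, False), ("B", True, False), ("B", True, False),
    ("B", True, True), ("B", False, False),
]

def false_positive_rate(group: str) -> float:
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for group in ("A", "B"):
    print(f"group {group}: FPR = {false_positive_rate(group):.2f}")
# A large gap between groups is one standard signal of disparate impact.
```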

AI Now co-founders Kate Crawford and Meredith Whittaker, who are also researchers at Microsoft and Google, respectively, note that bias is present in a variety of algorithmic products and services.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker told MIT Technology Review. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Among this year’s most prominent cases of algorithmic bias were a flawed teacher ranking system and a gender-biased model for natural language processing.
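
The natural language case involves word embeddings that absorb gender stereotypes from their training text. The toy sketch below uses invented two-dimensional vectors, not any real model’s values, to show the kind of association being measured.

```python
import math

# Toy word vectors (invented values, not from any real model) that
# mimic how embeddings trained on biased text can place an occupation
# closer to one gender word than the other.
vectors = {
    "he":    (1.0, 0.1),
    "she":   (0.1, 1.0),
    "nurse": (0.2, 0.9),  # invented: skewed toward "she"
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

print(cosine(vectors["nurse"], vectors["he"]))   # ~0.31
print(cosine(vectors["nurse"], vectors["she"]))  # ~0.99
# The gap is the kind of association researchers flag as embedded bias.
```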

Despite the growing threat of machine bias, Motherboard reports that the industry group Tech:NYC, which lobbies on behalf of companies including Google, AOL and Airbnb as well as many smaller startups, opposes the release of proprietary algorithms mandated under the new New York legislation. The group’s policy director, Taline Sanassarian, argues such disclosure could have a “chilling effect” that would discourage companies from working with city officials. Joshua Norkin, an attorney with the New York Legal Aid Society, the nation’s oldest and largest legal aid service for poor people, countered that the city should prioritize people over corporations.

“The idea that there’s some obligation on behalf of the government who represents the people to protect proprietary algorithms is totally ridiculous,” Norkin told Motherboard. “There is absolutely nothing that obligates the city to look out for the interests of a private company looking to make a buck on a proprietary algorithm.”
