
Effective training is the best step to reduce AI bias

AI can help identify and reduce the impact of human biases, but it can also make the problem worse.

AI: More than Human exhibition invites you to explore our relationship with artificial intelligence. — © Tim Sandle

Various forms of artificial intelligence have become increasingly biased because of the way they are trained, according to Stanford University’s Artificial Intelligence Index Report 2022. The AI Index Report tracks, collates, distils, and visualizes data relating to artificial intelligence. Its mission is to provide unbiased, rigorous, and comprehensive data for policymakers, researchers, journalists, executives, and the general public to develop a deeper understanding of the complex field of AI.

According to the report, AI models across the board are setting new records on technical benchmarks. Despite these technological leaps, the data also shows that larger models are more capable of reflecting biases from their training data. For example, a 280 billion parameter model developed in 2021 shows a 29 percent increase in elicited toxicity over a 117 million parameter model considered the state of the art as of 2018.

The main area of bias called out in the report concerns large language models. As these systems grow significantly more capable over time, so does the potential severity of their biases.

The key is effective training, according to an assessment of the Stanford study in Fortune magazine: once trained properly, AI can be taught to recognize differences and can then serve as a catalyst to simplify countless daily tasks.

According to Ricardo Amper, CEO of Incode, an AI-based digital identity company that builds secure biometric identity products, it is training that companies seeking to develop AI models should be investing in.

Amper explains to Digital Journal: “AI mechanisms operate as blank canvases and are trained on what to recognize when verifying digital identities.”

Consequently, AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group.
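To make that concrete, here is a minimal sketch in Python of the kind of representation audit that can surface such gaps before training. The dataset, the field names, and the 10 percent threshold are all invented for illustration; real datasets and schemas will differ.

```python
from collections import Counter

# Toy training records with demographic metadata (invented for illustration).
records = [
    {"gender": "female", "ethnicity": "asian"},
    {"gender": "male", "ethnicity": "white"},
    {"gender": "male", "ethnicity": "white"},
    {"gender": "male", "ethnicity": "white"},
    {"gender": "female", "ethnicity": "black"},
]

def representation_report(records, field, threshold=0.10):
    """Print each group's share of the dataset and flag groups below a threshold."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        share = n / total
        flag = "  <-- underrepresented" if share < threshold else ""
        print(f"{field}={group}: {n}/{total} ({share:.1%}){flag}")

representation_report(records, "gender")
representation_report(records, "ethnicity")
```

An audit like this only reveals gaps in whatever metadata the dataset already carries; it does not fix them, but it tells developers where to rebalance before a model ever sees the data.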

As an example, Amper says: “Digital authentication technology can only work when AI is fed gender neutral and diverse identities in order to effectively recognize a person’s biometric features.”

He adds that: “Unbiased recognition starts with the way technology is trained, and it starts with enabling the technology to evaluate all genders and ethnicities upon its conception.”
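One way to check whether that training has worked is to break evaluation results out by group rather than reporting a single aggregate accuracy. Below is a minimal sketch in Python; the verify() matcher, its threshold, and the evaluation records are invented stand-ins for illustration, not Incode’s actual pipeline.

```python
from collections import defaultdict

def verify(sample):
    # Stand-in for a real biometric matcher: accept when the match score
    # clears a fixed threshold. Production systems are far more involved.
    return sample["score"] >= 0.8

# Invented evaluation records: each is a genuine match the system should accept.
eval_set = [
    {"group": "female", "score": 0.91, "is_match": True},
    {"group": "female", "score": 0.62, "is_match": True},  # genuine match the model misses
    {"group": "male", "score": 0.85, "is_match": True},
    {"group": "male", "score": 0.88, "is_match": True},
]

correct = defaultdict(int)
totals = defaultdict(int)
for sample in eval_set:
    totals[sample["group"]] += 1
    if verify(sample) == sample["is_match"]:
        correct[sample["group"]] += 1

for group in sorted(totals):
    print(f"{group}: {correct[group]}/{totals[group]} correct "
          f"({correct[group] / totals[group]:.0%})")
```

A large accuracy gap between groups in a report like this is a signal that the training data needs rebalancing before the system is deployed.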

As AI becomes more mainstream, algorithmic fairness and bias will continue to shift from being primarily an academic pursuit to becoming firmly embedded as an industrial research topic with wide-ranging implications.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
