Various forms of artificial intelligence have become increasingly biased because of the way they are trained. This is according to Stanford University’s Artificial Intelligence Index Report 2022. The AI Index Report tracks, collates, distils, and visualizes data relating to artificial intelligence. Its mission is to provide unbiased, rigorous, and comprehensive data for policymakers, researchers, journalists, executives, and the general public to develop a deeper understanding of the complex field of AI.
According to the report, AI models across the board are setting new records on technical benchmarks, yet bias is growing in tandem. For example, a 280-billion-parameter model developed in 2021 shows a 29 percent increase in elicited toxicity over a 117-million-parameter model considered the state of the art as of 2018.
Despite these technological leaps, the data shows that larger models are also more prone to reflecting biases present in their training data.
The main area of bias called out in the report is large language models. As these systems grow more capable over time, so does the potential severity of their biases.
The key is effective training, according to an assessment of the Stanford study in Fortune magazine. Once trained properly, AI can learn to recognize differences, and can then serve as a catalyst to simplify countless daily tasks.
According to Ricardo Amper, CEO of Incode, an AI-based digital identity company that builds secure biometric identity products, training is where companies seeking to develop AI models should be investing.
Amper explains to Digital Journal: “AI mechanisms operate as blank canvases and are trained on what to recognize when verifying digital identities.”
Consequently, AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group.
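The underrepresentation problem can be illustrated with a minimal sketch. This is not code from the Stanford report or from Incode: the dataset, group labels, and 10 percent threshold below are hypothetical, chosen only to show how skew in training data can be surfaced before a model is trained.

```python
# Hypothetical sketch: labels, groups, and the 10% threshold are
# illustrative assumptions, not from the report discussed above.
from collections import Counter

def representation_report(group_labels, min_share=0.10):
    """Return each group's share of the dataset and whether it falls
    below a minimum share (a simple proxy for underrepresentation)."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, share < min_share)
    return report

# Example: a training set heavily skewed toward one demographic group
samples = ["group_a"] * 88 + ["group_b"] * 7 + ["group_c"] * 5
for group, (share, flagged) in sorted(representation_report(samples).items()):
    print(f"{group}: {share:.0%} underrepresented={flagged}")
```

A check like this only measures headcount, not label quality or image conditions, but it makes the point in the paragraph above concrete: if a group barely appears in the training data, the model has little chance to learn its features reliably.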
As an example, Amper says: “Digital authentication technology can only work when AI is fed gender neutral and diverse identities in order to effectively recognize a person’s biometric features.”
He adds: “Unbiased recognition starts with the way technology is trained, and it starts with enabling the technology to evaluate all genders and ethnicities upon its conception.”
As AI becomes more mainstream, algorithmic fairness and bias will continue to shift from being primarily an academic pursuit to becoming firmly embedded as an industrial research topic with wide-ranging implications.