Tech & Science

Recommendations for building public trust in AI

Without such measures a “crisis of trust” will build, and this will limit the implementation of societally beneficial technologies.

Lifeguards in the city of Ashdod on Israel's Mediterranean coast are trialling an artificial intelligence programme they hope will help cut drowning deaths - Copyright AFP/File STR

How can public trust in artificial intelligence be built? There is no simple answer, and any approach is likely to be multifaceted. One means is to reassure the public that bias has been reduced. But how widespread is bias in the first place?

According to a University of Cambridge report, artificial intelligence (AI), as a social construct designed by humans, contains biases that reflect modern society. The full extent of this bias is unknown, so better detection methods are required. The researchers recommend building communities of ethical hackers to prevent a looming ‘crisis of trust’.

It is likely that as AI advances, significant effort will be required to instil in it a sense of morality, to operate it in full transparency, and to educate business and the public sector about the opportunities it will create.

In the public sector, one area where AI is being tested is law enforcement. For example, one model predicts crime by examining the time and spatial coordinates of discrete events and detecting patterns in order to forecast future events.
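The pattern-detection idea described above can be illustrated with a toy sketch. This is a hypothetical simplification, not the model mentioned in the article: real predictive-policing systems use far richer statistical methods. Here, past events are simply binned into coarse grid cells by location and hour of day, and the busiest bucket is treated as the predicted hotspot.

```python
from collections import Counter

def bucket(event, cell_size=0.01):
    """Map an event (lat, lon, hour) to a coarse grid cell and an hour-of-day bucket."""
    lat, lon, hour = event
    return (round(lat / cell_size), round(lon / cell_size), hour)

def predict_hotspot(events, cell_size=0.01):
    """Return the (cell_lat, cell_lon, hour) bucket with the most historical events."""
    counts = Counter(bucket(e, cell_size) for e in events)
    return counts.most_common(1)[0][0]

# Illustrative data: three events cluster in one cell around hour 22,
# so that bucket is returned as the predicted hotspot.
history = [
    (41.881, -87.623, 22),
    (41.882, -87.624, 22),
    (41.880, -87.622, 22),
    (41.900, -87.700, 9),
]
print(predict_hotspot(history))  # (4188, -8762, 22)
```

Even this toy version shows why bias auditing matters: the model can only reproduce the patterns present in its historical event data, so skewed reporting or enforcement in the past is carried straight into the predictions.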

The idea is that a global hacker ‘red team’ (practising what is called ‘white hat’ hacking) would hunt for algorithmic biases. These teams would stress-test the harm potential of new AI products in order to earn the trust of governments and the public. The recommendation is that companies building intelligent technologies should harness such techniques.

Without such measures a “crisis of trust” will build, and this will limit the implementation of societally beneficial technologies like driverless cars and autonomous drones.

The researchers do not think sufficient testing will happen spontaneously, and they argue that incentives to increase trustworthiness are required. Appropriate teams would be called in to attack any new AI, or to strategise on how to use it for malicious purposes, in order to reveal any weaknesses or potential for harm.

Given that few companies have the internal capacity to “red team” their own products, there is a need for a third-party community. A global resource of this kind could also aid research labs developing AI.

The model would involve financially rewarding any researcher who uncovers flaws in AI that have the potential to compromise public trust or safety. Examples include racial or socioeconomic biases in algorithms, such as those used for medical or recruitment purposes.

The research appears in the journal Science, titled “Filling gaps in trustworthy development of AI.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
