How can public trust in artificial intelligence be built? There is no simple answer, and any approach is likely to be multifaceted. One means is to reassure the public that bias has been reduced. But how widespread is bias in the first place?
According to a University of Cambridge report, artificial intelligence (AI), as a social construct designed by humans, contains biases that reflect those of modern society. The full extent of this bias is unknown, and so better detection methods are required. The researchers recommend building communities of ethical hackers to prevent a looming ‘crisis of trust’.
It is likely that as AI advances, significant effort will be required to instil in it a sense of morality, to operate it in full transparency, and to educate business and the public sector about the opportunities it will create.
In the public sector, one area where AI is being tested is law enforcement. For example, one model is being used to predict crime by looking at the time and spatial coordinates of discrete events and detecting patterns in order to predict future events.
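The deployed systems are proprietary, but the core idea can be illustrated with a minimal sketch: bucket historical events into a space–time grid and flag the cell with the most past activity at a given hour as the predicted hotspot. The function names, data layout, and grid scheme below are assumptions for illustration, not the actual model.

```python
from collections import Counter

# Hypothetical sketch of grid-based spatiotemporal prediction:
# events are (hour_of_day, x, y) triples; coordinates are bucketed
# into grid cells, and the cell with the most historical events at
# the requested hour is returned as the predicted hotspot.

def to_cell(x, y, cell_size=1.0):
    """Map continuous coordinates to a discrete grid cell."""
    return (int(x // cell_size), int(y // cell_size))

def predict_hotspot(events, hour, cell_size=1.0):
    """Return the grid cell with the most past events at this hour."""
    counts = Counter(
        to_cell(x, y, cell_size) for h, x, y in events if h == hour
    )
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Toy data: two late-night events cluster in the same cell.
events = [(22, 3.2, 4.1), (22, 3.7, 4.9), (22, 8.0, 1.0), (9, 1.0, 1.0)]
print(predict_hotspot(events, 22))  # -> (3, 4)
```

A sketch like this also makes the bias concern concrete: if the historical event data over-represents certain neighbourhoods, the "predicted" hotspots simply reproduce that skew.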
The idea is that a global ‘red team’ of hackers (so-called ‘white hat’ hackers) would hunt for algorithmic biases. These teams would stress-test the harm potential of new AI products in order to earn the trust of governments and the public. The recommendation is that companies building intelligent technologies should harness such techniques.
Without such measures, a “crisis of trust” will build, and this will limit the adoption of societally beneficial technologies like driverless cars and autonomous drones.
The researchers do not think sufficient testing will happen spontaneously, and they argue that incentives to increase trustworthiness are required. Appropriate teams would be called in to attack any new AI, or to strategise on how it could be used for malicious purposes, in order to reveal any weaknesses or potential for harm.
Given that few companies have the internal capacity to “red team”, there is a need for a third-party community. Such a global resource could also aid research labs developing AI.
The model would involve financially rewarding any researcher who uncovers flaws in AI that have the potential to compromise public trust or safety. Examples include racial or socioeconomic biases in algorithms, such as those used for medical or recruitment purposes.
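One simple check a bounty hunter might run on a recruitment algorithm is to compare selection rates across demographic groups. The sketch below uses the well-known "four-fifths rule" heuristic, which flags possible disparate impact when one group's selection rate falls below 80% of the highest group's; the function names and toy data are assumptions for illustration.

```python
# Hypothetical fairness audit sketch: compare selection rates across
# groups and flag those falling below a threshold fraction (the
# "four-fifths rule" uses 0.8) of the highest group's rate.

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return groups whose rate is below threshold * the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Toy hiring decisions: group A is selected at 0.8, group B at 0.4.
data = [("A", True)] * 8 + [("A", False)] * 2 + \
       [("B", True)] * 4 + [("B", False)] * 6
print(disparate_impact(data))  # -> {'B': 0.4}
```

A real audit would go far beyond a single rate comparison, but even this crude check shows the kind of reproducible, quantitative flaw report a reward scheme could pay out on.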
The research appears in the journal Science, titled “Filling gaps in trustworthy development of AI.”