AI ethics: How can AI be made more trustworthy?

By Tim Sandle     Sep 8, 2020 in Technology
To what extent does the public trust artificial intelligence? And where trust is lacking, what can be done to strengthen AI predictions so as to bolster trust and push AI adoption further?
Artificial intelligence is growing in scope, as is its ability to draw meaningful inferences and make accurate predictions from data. Notable examples include tracking climate change and making predictions about environmental conditions; tracking disease outbreaks within the medical field (and seeking to minimize the impact of pathogens on populations); and assessing the learning capability of students in order to develop tailored scholastic programs.
There are many, however, who are distrustful of AI, especially when it comes to life-and-death decisions. Artificial intelligence is not 'good' or 'bad'; it is simply the product of the people who designed it and those who choose to apply it. One area where there is potential for mistrust is autonomous vehicles.
With autonomous vehicles, the potential benefits include reduced pollution, improved road safety and less traffic congestion. Yet there are risks that a vehicle may not respond in the way intended, and there are those who would simply not want to take their hands off the wheel out of concern about how the vehicle would react to a traffic-related event.
There are also ethical issues: should a self-driving car decide to hit a person (less damage to the car) or veer into a wall (more damage to the car)? This is not simply a hypothetical question, given a case involving Uber in 2019.
To raise the level of trust in AI, U.S. researchers from the University of Southern California Viterbi School of Engineering have developed a way to automatically assess whether the data and predictions generated by AI algorithms are trustworthy.
To model this, autonomous car scenarios have been used. The researchers examined the question: "Can we trust the computer software within the vehicles to make sound decisions within fractions of a second -- especially when conflicting information is coming from different sensing modalities such as computer vision from cameras or data from LIDAR?" (LIDAR is a method for measuring distances by illuminating the target with laser light and measuring the reflection with a sensor).
To assess this and other matters of trust, the research group developed a system called DeepTrust. This is a computer model that can quantify the amount of uncertainty in relation to an AI-generated decision. The program was developed over a two-year period using subjective logic to assess the architecture of the neural networks. Subjective logic refers to a type of probabilistic logic that explicitly takes epistemic uncertainty and source trust into account.
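Subjective logic represents a conclusion not as a single probability but as an opinion with explicit belief, disbelief and uncertainty components, which can then be combined across sources. As an illustration only (the class, function names and evidence values below are assumptions for this sketch, not the DeepTrust code), here is a minimal Python example of forming such opinions from evidence and fusing two conflicting sensor sources:

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float            # b: evidence supporting the proposition
    disbelief: float         # d: evidence against the proposition
    uncertainty: float       # u: lack of evidence either way (b + d + u = 1)
    base_rate: float = 0.5   # a: prior probability absent any evidence

    @classmethod
    def from_evidence(cls, positive: float, negative: float, prior_weight: float = 2.0):
        """Map raw evidence counts to an opinion (standard subjective-logic mapping)."""
        total = positive + negative + prior_weight
        return cls(positive / total, negative / total, prior_weight / total)

    def expected_probability(self) -> float:
        """Projected probability: belief plus the base rate's share of the uncertainty."""
        return self.belief + self.base_rate * self.uncertainty

def fuse(a: Opinion, b: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions about the same proposition."""
    denom = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    return Opinion(
        (a.belief * b.uncertainty + b.belief * a.uncertainty) / denom,
        (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / denom,
        (a.uncertainty * b.uncertainty) / denom,
        a.base_rate,
    )

# Hypothetical example: a camera-based detector with ample evidence versus a
# LIDAR-based one with sparse, conflicting evidence. Fusion keeps the
# disagreement visible as residual uncertainty instead of hiding it in a
# single probability.
camera = Opinion.from_evidence(positive=18, negative=2)
lidar = Opinion.from_evidence(positive=1, negative=3)
combined = fuse(camera, lidar)
print(f"camera trust: {camera.expected_probability():.2f}")
print(f"lidar trust:  {lidar.expected_probability():.2f}")
print(f"fused trust:  {combined.expected_probability():.2f}, "
      f"uncertainty: {combined.uncertainty:.2f}")

In this sketch, the fused opinion retains a non-zero uncertainty term, which is the kind of signal a model like DeepTrust could use to flag decisions that deserve less confidence.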
The aim is to make the tool available to other research groups, so that the reliability (and ultimately the trustworthiness) of AI systems can be evaluated and improved, with the goal of increasing individuals' confidence in AI-led decisions.
The research paper, titled "There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks", is published in the journal Frontiers in Artificial Intelligence.