Are we putting too much trust in a technology we do not fully understand? AI systems are increasingly making decisions that affect people's daily lives, spanning areas from banking and healthcare to crime detection. Moreover, AI is becoming integrated into ‘high-stakes’ decisions that can have life-altering consequences.
While the detailed workings of AI technology are indecipherable to most people, society can at least insist that AI models are designed and evaluated with transparency and trustworthiness in mind. This approach is the subject of a review undertaken by the University of Surrey.
Take fraud as an example. Many banks use advanced algorithms to detect fraud, having trained their AI on fraud datasets. However, even where a dataset is imbalanced by as little as 0.01 percent, the resulting errors can lead to damage on the scale of billions of dollars.
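To see why such a small proportion matters, consider a minimal sketch (not taken from the study) in which fraudulent transactions make up just 0.01 percent of a dataset: a model that never flags fraud still scores near-perfect accuracy while missing every case, along with the losses that go with them. The figures and variable names below are illustrative assumptions.

```python
# Illustrative sketch only (not from the Surrey study): it shows why a heavily
# imbalanced fraud dataset can make a model look accurate while catching nothing.
import numpy as np

rng = np.random.default_rng(seed=0)

n_transactions = 1_000_000
fraud_rate = 0.0001                                  # 0.01% of transactions are fraudulent
labels = rng.random(n_transactions) < fraud_rate     # True = fraud

# A naive "model" that simply predicts 'not fraud' for every transaction.
predictions = np.zeros(n_transactions, dtype=bool)

accuracy = np.mean(predictions == labels)
fraud_caught = np.sum(predictions & labels)
fraud_missed = np.sum(~predictions & labels)

print(f"Accuracy: {accuracy:.4%}")             # ~99.99% - looks excellent
print(f"Fraud cases caught: {fraud_caught}")   # 0
print(f"Fraud cases missed: {fraud_missed}")   # every single one
```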
Furthermore, while AI can learn from fraud patterns as datasets improve, algorithms still lack the capability to adequately explain why a given situation is fraudulent. This ability to reason becomes arguably even more pressing in a medical context.
Dr Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, states: “We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people — the users of technology — that they can trust and understand.”
As well as understanding, there is also cause to question AI's ability to empathise with patients' values and to navigate human relationships effectively as a mediator. This becomes more important as human interaction decreases relative to AI applications, especially in healthcare.
Hence, a detailed ethical and legal code is required; one focused equally on inputs and outputs.
Garn proposes a comprehensive framework known as SAGE (Settings, Audience, Goals, and Ethics) to address these critical issues. SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to the end-users. The SAGE framework seeks to bridge the gap between complex AI decision-making processes and the human operators who depend on them.
Connecting SAGE with Scenario-Based Design (SBD) techniques, which examine real-world scenarios to establish what users truly require from AI explanations, can lead to AI that provides better-quality, contextually relevant information to the user.
Clear output alone is not enough: AI models also need to explain their outputs, in text or graphical form, in ways that meet the diverse comprehension needs of their users.
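As an illustration only (the article does not describe an implementation), the four SAGE dimensions could be carried as a simple structure that an explanation generator consults when choosing the form of its output. All names and values below are hypothetical.

```python
# Hypothetical sketch of how the four SAGE dimensions (Settings, Audience,
# Goals, Ethics) might be attached to an explanation request; this is an
# illustration, not code from the published study.
from dataclasses import dataclass

@dataclass
class SageContext:
    settings: str   # where the decision is made, e.g. "retail banking fraud review"
    audience: str   # who reads the explanation, e.g. "fraud analyst" or "customer"
    goals: str      # what the explanation must achieve, e.g. "justify blocking a payment"
    ethics: str     # constraints, e.g. "no disclosure of other customers' data"

def explanation_format(ctx: SageContext) -> str:
    """Pick a presentation style suited to the audience (text vs. graphical)."""
    if ctx.audience == "customer":
        return "plain-language text summary"
    return "feature-attribution chart with supporting text"

# Example usage with made-up values.
ctx = SageContext(
    settings="retail banking fraud review",
    audience="customer",
    goals="justify blocking a payment",
    ethics="no disclosure of other customers' data",
)
print(explanation_format(ctx))   # -> plain-language text summary
```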
In making this recommendation, Garn calls for AI developers to engage actively with industry specialists and end-users. He says: “The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.”
The research, titled “Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design”, appears in the journal Applied Artificial Intelligence.
