
AI: The ethical problem of picking the right answer but not understanding why

While AI can learn from fraud patterns as data sets improve, algorithms still lack the capability to adequately explain why a given situation is fraudulent.

Power-hungry datacenters needed to run artificial intelligence are making it more challenging for tech giants to meet goals of curbing greenhouse gas emissions from their operations - Copyright INDONESIAN PRESIDENTIAL PALACE/AFP Handout

Are we putting too much trust in a technology we do not fully understand? AI systems are increasingly making decisions that impact the daily lives of humans, crisscrossing areas from banking and healthcare to crime detection. Moreover, AI is becoming integrated into ‘high-stakes’ decisions that can have life-altering consequences.

While the detailed aspects of AI technology are undecipherable for the majority, society can at least insist that AI models are designed and evaluated using a system based on transparency and trustworthiness. This approach is the subject of a review undertaken by the University of Surrey.

Take fraud as an example. Many banks use advanced algorithms to screen for fraud, having trained their AI on fraud datasets. However, even where a dataset is imbalanced – say by just 0.01 percent – this can lead to damage on the scale of billions of dollars.

Furthermore, while AI can learn from fraud patterns as data sets improve, it remains that algorithms lack the capability to adequately explain why a given situation is fraudulent. This ability to reason becomes arguably more pressing in the medical context.
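The imbalance problem described above can be illustrated with a minimal sketch (the figures below are hypothetical, chosen only to mirror the 0.01 percent example): on a dataset where fraud makes up 0.01 percent of transactions, a model that simply never flags fraud scores near-perfect accuracy while catching no fraud at all, which is why accuracy alone is misleading for such systems.

```python
# Minimal illustration of the class-imbalance problem: a "never fraud"
# baseline looks almost perfect on accuracy yet catches zero fraud.

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def recall(predictions, labels):
    # Fraction of actual fraud cases (label 1) the model flags.
    caught = sum(p for p, y in zip(predictions, labels) if y == 1)
    return caught / max(1, labels.count(1))

# Hypothetical data: 1,000,000 transactions, 100 fraudulent (0.01%).
labels = [1] * 100 + [0] * 999_900
naive_predictions = [0] * len(labels)  # baseline: never predict fraud

print(f"accuracy: {accuracy(naive_predictions, labels):.4%}")  # 99.9900%
print(f"recall:   {recall(naive_predictions, labels):.0%}")    # 0%
```

This is why fraud-detection models are evaluated on metrics such as recall and precision rather than raw accuracy, and why, as the article notes, an unexplained score gives operators no way to judge whether the model's flags are trustworthy.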

Dr Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, states: “We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people — the users of technology — that they can trust and understand.”

As well as understanding, there is also cause to question the ability of AI to empathise with patients’ values and to effectively navigate human relationships as mediators. This becomes of greater importance as human interaction decreases relative to an AI application, especially in healthcare.

Hence, a detailed ethical and legal code is required; one focused equally on inputs and outputs.

Garn proposes a comprehensive framework known as SAGE (Settings, Audience, Goals, and Ethics) to address these critical issues. SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to the end-users. The SAGE framework seeks to bridge the gap between complex AI decision-making processes and the human operators who depend on them.

Connecting SAGE to Scenario-Based Design (SBD) techniques, which delve deep into real-world scenarios to find out what users truly require from AI explanations, can lead to an improved form of AI that imparts better quality contextual data to the user.

While clear output is important, AI models also need to explain their outputs, in text or graphical form, in a way that meets the diverse comprehension needs of users.

In making this recommendation, Garn calls for AI developers to engage with industry specialists and end-users actively. He says: “The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.”

The research appears in the journal Applied Artificial Intelligence, titled “Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
