
Artificial intelligence — Assessing the decision-making process

By Karen Graham     Mar 12, 2019 in Technology
Scientists are putting AI systems to the test. Researchers have developed a method of assessing the diverse 'intelligence' spectrum observed in current AI systems.
Artificial Intelligence and machine learning algorithms such as Deep Learning have become part of our daily lives. The technology is used in everything: translation services, improving medical diagnostics, personal and enterprise banking and creating computer models in climate science.
Our current machines have been very successful at solving hard application problems, displaying seemingly intelligent behavior. Thanks to the ever-increasing amount of available data and ever more powerful computing hardware, learning algorithms appear to be approaching human capabilities.
But a question remains: users still don't know exactly how AI systems reach their conclusions. Researchers from TU Berlin, Fraunhofer Heinrich Hertz Institute HHI and Singapore University of Technology and Design (SUTD) wanted to know whether AI systems are truly intelligent or whether they merely make lucky guesses.
AI system intelligence
In order to analyze these nonlinear learning machines in a way that allows automated analysis and quantification, the researchers drew on a technique developed earlier by TU Berlin and Fraunhofer HHI, the so-called Layer-wise Relevance Propagation (LRP).
LRP is an algorithm that allows for visualizing the input variables AI systems rely on to make their decisions. Extending LRP, the novel Spectral relevance analysis (SpRAy) can identify and quantify a wide spectrum of learned decision-making behavior. The researchers say it is now possible, using this system, to detect undesirable decision making even in large sets of data.
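To give a flavor of how LRP redistributes a prediction back onto the input variables, here is a minimal sketch of its widely described epsilon rule for a single dense layer. The layer sizes, weights, and epsilon value are invented for illustration; this is not the researchers' published implementation, only a toy showing the core idea that output relevance is divided among inputs in proportion to their contribution.

```python
def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
    """Redistribute one layer's output relevance onto its inputs.

    activations   : list of input activations a_i
    weights       : weights[i][j] connecting input i to output j
    relevance_out : relevance scores R_j already assigned to the outputs
    """
    n_in = len(activations)
    n_out = len(relevance_out)
    # Pre-activations z_j = sum_i a_i * w_ij, stabilized by a small eps
    z = []
    for j in range(n_out):
        zj = sum(activations[i] * weights[i][j] for i in range(n_in))
        z.append(zj + eps * (1 if zj >= 0 else -1))
    # Each input receives relevance proportional to its share of z_j
    relevance_in = []
    for i in range(n_in):
        ri = sum(activations[i] * weights[i][j] / z[j] * relevance_out[j]
                 for j in range(n_out))
        relevance_in.append(ri)
    return relevance_in

# Toy example: two inputs feeding one output neuron.
R_in = lrp_epsilon([1.0, 2.0], [[0.5], [0.25]], [1.0])
# Relevance is (approximately) conserved: sum(R_in) ≈ sum(R_out) = 1.0
print(R_in)
```

Applied layer by layer from the output back to the pixels, this kind of rule produces the "heatmaps" of input relevance that the researchers then analyze at scale with SpRAy.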
This "explainable AI" is an important step towards a practical application of AI, according to Dr. Klaus-Robert Müller, Professor for Machine Learning at TU Berlin. "Specifically in medical diagnosis or in safety-critical systems, no AI systems that employ flaky or even cheating problem-solving strategies should be used."
Basically, with the newly developed algorithms, nearly every AI system can be put to the test while deriving quantitative information from them. This would include the full spectrum of behaviors, from naive problem-solving behavior, to cheating strategies up to highly elaborate "intelligent" strategic solutions.
Wilhelm von Osten and Clever Hans in 1908.
Clever Hans Strategies
Dr. Wojciech Samek, group leader at Fraunhofer HHI said: "We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems have not always found a solution that appears meaningful from a human perspective, but sometimes used so-called 'Clever Hans Strategies'."
In the early 1900s in Germany, Clever Hans was a well-known Orlov Trotter horse that was claimed to have performed arithmetic and other intellectual tasks. He was considered a scientific sensation until 1907 when a formal investigation by psychologist Oskar Pfungst demonstrated that the horse was not actually performing these mental tasks, but was watching the reactions of his trainer.
While a machine is not Clever Hans, the scientists did find the horse's strategy was used in a number of AI systems. One example is cited by the researchers. "An AI system that won several international image classification competitions a few years ago pursued a strategy that can be considered naïve from a human's point of view. It classified images mainly on the basis of context. Images were assigned to the category "ship" when there was a lot of water in the picture.
"Other images were classified as "train" if rails were present. Still other pictures were assigned the correct category by their copyright watermark. The real task, namely to detect the concepts of ships or trains, was therefore not solved by this AI system - even if it indeed classified the majority of images correctly."
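The shortcut the researchers describe can be made concrete with a deliberately silly toy classifier. Everything below is invented for illustration: it labels images purely from background statistics, never from the object itself, so it scores well on typical photos while failing the real task.

```python
def shortcut_classifier(features):
    """A toy 'Clever Hans' strategy: classify by background context only.

    features : dict with the fraction of water-colored and rail-like
               pixels in the image (illustrative features, not real ones).
    """
    if features["water_fraction"] > 0.5:
        return "ship"   # lots of water, so guess "ship"
    if features["rail_fraction"] > 0.1:
        return "train"  # rails present, so guess "train"
    return "other"

# Often right on typical photos of ships at sea...
print(shortcut_classifier({"water_fraction": 0.9, "rail_fraction": 0.0}))
# ...but an empty seascape with no ship at all gets the same answer,
# revealing that the concept "ship" was never actually learned.
print(shortcut_classifier({"water_fraction": 0.9, "rail_fraction": 0.0}))
```

Relevance heatmaps expose exactly this failure mode: the relevance concentrates on the water or the rails rather than on the object the label names.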
The researchers were also able to find similar faulty problem-solving strategies in some state-of-the-art AI systems, the so-called deep neural networks. The study found that these systems base classification - in part - on artifacts that were created during the preparation of the images, artifacts that have nothing to do with the actual image content.
"Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers," said Klaus-Robert Müller. "It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such 'Clever Hans' strategies. It's time to systematically check that so that secure AI systems can be developed."
"Our automated technology is open source and available to all scientists. We see our work as an important first step in making AI systems more robust, explainable and secure in the future, and more will have to follow. This is an essential prerequisite for general use of AI," said Klaus-Robert Müller.
The study, "Unmasking Clever Hans predictors and assessing what machines really learn," was published on March 11, 2019, in Nature Communications.