
Students trick an AI into misclassifying images

By James Walker     Nov 3, 2017 in Technology
A team of AI researchers has found a way to "reliably" trick image recognition algorithms into misclassifying objects. By changing as little as a single pixel in an image, the technique fools the AI into detecting objects that aren't actually there.
The team of MIT researchers detailed their method in an article this week. They said they'd completed the work to highlight the risks posed by modern AI image recognition systems. In tests of the technology, an AI was tricked into thinking a baseball was an espresso. A turtle was identified as a gun.
Adversarial images
In around 74 percent of cases, the researchers had to change only a single pixel in the subject image. The attacks work far more reliably than previous "adversarial images," the term used for such doctored pictures.
The technique works by overlaying the image data with a pattern that's invisible to humans. This skews the identification procedures used by AI algorithms, leading them to "see" things that aren't there.
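The article doesn't detail the researchers' exact method, but the general idea behind such perturbations can be sketched with a toy linear classifier: nudge the input along the gradient of the model's score so the predicted class flips while the input changes only slightly. Everything below (the weights, the input, the step size) is invented for illustration.

```python
import numpy as np

# Toy linear classifier with two classes: scores = W @ x.
# This is an illustrative, FGSM-style sketch of adversarial
# perturbation in general, not the researchers' actual method.
W = np.array([[1.0, -1.0, 0.5],    # weights for class 0
              [-0.5, 1.0, -1.0]])  # weights for class 1

x = np.array([1.0, 0.2, 0.3])      # the "image" the model sees
print((W @ x).argmax())            # prints 0: classified as class 0

# The gradient of (score_1 - score_0) with respect to x is
# W[1] - W[0]. Stepping along the sign of that gradient raises
# class 1's score at the expense of class 0's.
eps = 0.5
x_adv = x + eps * np.sign(W[1] - W[0])
print((W @ x_adv).argmax())        # prints 1: the prediction flips
```

Real attacks do the same thing against a deep network's gradients, with a step small enough to be imperceptible to people.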
Previous adversarial images have tended to be limited in effect. Basic image modifications, such as cropping or zooming the shot, could eliminate their impact altogether, giving AI a way to defend itself against being fooled. The MIT team's pixel-based technique works in 3D and is immune to basic bypass mechanisms. This makes it a much more serious risk to neural networks.
Tricking the computers
The issue poses a major challenge for researchers working with AI. Neural networks are increasingly seeing use in high-risk, real-world systems. Autonomous vehicles will be dependent on accurate image recognition to avoid obstacles and handle on-road incidents.
An adversarial image could be used to trick a self-driving car into seeing a pedestrian or another car, forcing it to take action that could cause a collision. Adversarial images can also be used to thwart facial recognition systems, allowing intruders to bypass security cameras or mask their faces.
The MIT team's research applies to all major neural networks available today. Although the group used Google's cloud for its testing, other image classifiers would deliver similar results.
Semantic understanding
There's currently no solution to the problem. The risks could be partially alleviated by giving AIs more semantic awareness of an image's content. They could then consider which object is most likely to appear in a given scenario. If something unusual is detected, such as a gun in an underwater environment instead of a turtle, the image could be sent back for further processing.
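The plausibility check described above could be sketched as a simple filter layered on top of a classifier. The scene labels, object labels, and co-occurrence table below are all invented for illustration; a real system would learn these associations from data.

```python
# Hypothetical sanity check on a classifier's output: flag labels
# that are implausible for the detected scene so the image can be
# sent back for further processing. All labels here are made up.
PLAUSIBLE = {
    "underwater": {"turtle", "fish", "diver"},
    "street": {"car", "pedestrian", "bicycle"},
}

def review_needed(scene: str, label: str) -> bool:
    """Return True if the label is unexpected for the scene."""
    return label not in PLAUSIBLE.get(scene, set())

print(review_needed("underwater", "turtle"))  # prints False
print(review_needed("underwater", "gun"))     # prints True: flag for review
```

A check like this wouldn't stop the attack itself, but it could catch the most implausible misclassifications before they are acted on.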
This doesn't solve every problem, though: adversarial images used against face detection algorithms, for example, operate in a context that's likely to remain the same. More research will be required to ascertain the risk posed by the technique and find a solution that keeps AI secure. MIT warned adversarial mechanisms are already a "practical concern" which could be "dangerous" in the future.