Tech & Science

Students trick an AI into misclassifying images

A team of MIT researchers detailed their method in an article published this week. They said they’d completed the work to highlight the risks posed by modern AI image recognition systems. In tests, an AI was tricked into classifying a baseball as an espresso and a turtle as a gun.
Adversarial images
In around 74 percent of cases, the researchers needed to change only a single pixel of the subject image. The attacks work far more reliably than previous “adversarial images,” the term for pictures deliberately doctored to fool image recognition systems.
The technique works by overlaying the image data with a pattern that is invisible to the human eye. The pattern skews the identification procedures used by AI algorithms, leading them to “see” things that aren’t there.
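For a concrete sense of how such perturbations are built in general, the sketch below uses PyTorch and a standard pretrained classifier to nudge an image toward a chosen label with a single, barely perceptible gradient step. It is a minimal illustration of the broad idea, not the MIT team’s method; the input file name and the target class index are assumptions, and real attacks typically iterate many such steps.

```python
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

# A standard pretrained ImageNet classifier stands in for the model under attack.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# "baseball.jpg" is a hypothetical input file; 967 is assumed here to be
# ImageNet's "espresso" class index.
image = preprocess(Image.open("baseball.jpg")).unsqueeze(0)
image.requires_grad_(True)
target = torch.tensor([967])

# Work out how the pixels should shift to make the model favour the target
# label, then take one tiny step in that direction.
loss = F.cross_entropy(model(image), target)
loss.backward()
epsilon = 0.01
adversarial = (image - epsilon * image.grad.sign()).detach()

# The change is too small for a person to notice, but the predicted label can move.
print(weights.meta["categories"][model(adversarial).argmax().item()])
```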
Previous adversarial images have tended to be limited in effect. Basic image modifications, such as cropping or zooming the shot, could eliminate their impact altogether, giving AI a way to defend itself against being fooled. The MIT team’s pixel-based technique works in 3D and survives these basic defences, making it a much more serious risk to neural networks.
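One way an attack can be made to survive cropping, zooming and changes of viewpoint is to optimise the perturbation over many randomly transformed views of the image at once, the general idea behind published “expectation over transformation” attacks. The sketch below illustrates that idea under the same PyTorch assumption; it is not the authors’ code, and the transformations and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def robust_perturbation(model, image, target, steps=200, epsilon=0.03, lr=0.01):
    """Optimise one small perturbation that still fools the model after random
    crops, zooms and rotations of the image (illustrative sketch only)."""
    delta = torch.zeros_like(image, requires_grad=True)
    augment = T.Compose([
        T.RandomResizedCrop(image.shape[-1], scale=(0.7, 1.0)),  # random crop/zoom
        T.RandomRotation(15),                                    # random tilt
    ])
    optimiser = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        view = augment(image + delta)                 # a randomly transformed view
        loss = F.cross_entropy(model(view), target)   # push that view toward the target label
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        delta.data.clamp_(-epsilon, epsilon)          # keep the change nearly invisible
    return (image + delta).detach()
```

Because every optimisation step sees a different crop, zoom or rotation, the finished perturbation is not tied to one particular framing of the image, which is why simple cropping no longer removes it.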
Tricking the computers
The issue poses a major challenge for researchers working with AI. Neural networks are increasingly seeing use in high-risk, real-world systems. Autonomous vehicles will be dependent on accurate image recognition to avoid obstacles and handle on-road incidents.
An adversarial image could be used to trick a self-driving car into seeing a pedestrian or another car, forcing it to take action that could cause a collision. Adversarial images can also be used to thwart facial recognition systems, allowing intruders to bypass security cameras or mask their face.
The MIT team’s research applies to all major neural networks available today. Although the group used Google’s cloud for its testing, other image classifiers would deliver similar results.
Semantic understanding
There’s currently no solution to the problem. The risks could be partially alleviated by giving AIs more semantic awareness of an image’s content. They could then consider which objects are most likely to appear in a given scene. If something unusual is detected, such as a gun in an underwater environment where a turtle would be expected, the image could be sent back for further processing.
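As a rough illustration of that idea, the hypothetical check below pairs the classifier’s label with a separate scene label and flags implausible combinations for review. The scene labels, the plausibility table and the confidence threshold are all illustrative assumptions rather than part of the research.

```python
# Illustrative plausibility check: the labels, pairs and threshold are assumptions.
IMPLAUSIBLE_PAIRS = {
    ("gun", "underwater"),       # a "gun" reported where a turtle would be expected
    ("espresso", "ball field"),  # an "espresso" reported where a baseball would be expected
}

def needs_further_processing(object_label: str, scene_label: str,
                             confidence: float, min_confidence: float = 0.9) -> bool:
    """Send low-confidence or contextually implausible detections back for review."""
    if confidence < min_confidence:
        return True
    return (object_label, scene_label) in IMPLAUSIBLE_PAIRS

# Example: even a confident "gun" detection in an underwater scene gets flagged.
assert needs_further_processing("gun", "underwater", confidence=0.97)
```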
This doesn’t solve every problem, though. It would do little against adversarial images used to trick facial recognition algorithms, for example, where the surrounding context is likely to remain the same. More research will be required to ascertain the risk posed by the technique and to find a solution that keeps AI secure. The MIT team warned that adversarial mechanisms are already a “practical concern” which could become “dangerous” in the future.