http://www.digitaljournal.com/tech-and-science/technology/researchers-create-photo-filter-that-disables-facial-recognition/article/523620

Researchers create photo filter that disables facial recognition

Posted Jun 1, 2018 by Lisa Cumming
AI researchers from University of Toronto Engineering have created a 'privacy filter' that "disrupts" facial recognition software.
Photo: Some Chinese police have been using high-tech sunglasses that use facial recognition technology to spot suspects in crowded areas (AFP)
Professor Parham Aarabi and graduate student Avishek Bose use an algorithm based on "neural net based constrained optimization" to disrupt face detection software, the kind that commonly runs every time you post a photo on social media.
They built on a known weakness of such systems: "small, often imperceptible, perturbations can be added to images to fool a typical classification network into misclassifying them." Their dynamic "attack" algorithm "produc[es] small perturbations that, when added to an input face image, causes the pre-trained face detector to fail."
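The basic idea can be sketched in a few lines of PyTorch-style code. This is only an illustration, not the researchers' implementation; the face_detector function below is a hypothetical stand-in for any differentiable detection network that returns face-confidence scores.

import torch

def perturb_to_evade(image, face_detector, epsilon=0.02, steps=40, lr=0.01):
    # image: tensor of shape (1, 3, H, W) with values in [0, 1]
    # epsilon bounds the per-pixel change, keeping the edit nearly invisible
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        scores = face_detector((image + delta).clamp(0, 1))
        # The "attack" objective: push the detector's confidence down
        loss = scores.sum()
        loss.backward()
        optimizer.step()
        # The constrained part: keep the perturbation small
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()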
Aarabi and Bose designed two different, opposing neural networks — one that attempts to identify faces and the other that works to "disrupt" that identification — using 'adversarial training', a deep learning technique that puts two opposing AI algorithms in a sort of digital cage match.
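A simplified training step for such a pair of networks might look like the following. Again, this is only a sketch under assumed names: the detector and disruptor modules and their optimizers are placeholders, not taken from the paper, and in practice the face detector may instead be pre-trained and kept fixed.

import torch

def adversarial_step(images, detector, disruptor, det_opt, dis_opt, epsilon=0.02):
    # Step 1: the detector learns to keep finding faces despite the perturbation
    delta = disruptor(images).clamp(-epsilon, epsilon)
    scores = detector((images + delta.detach()).clamp(0, 1))
    detector_loss = -scores.mean()  # detector wants high face confidence
    det_opt.zero_grad()
    detector_loss.backward()
    det_opt.step()
    # Step 2: the disruptor learns perturbations that make those detections fail
    delta = disruptor(images).clamp(-epsilon, epsilon)
    scores = detector((images + delta).clamp(0, 1))
    disruptor_loss = scores.mean()  # disruptor wants low face confidence
    dis_opt.zero_grad()
    disruptor_loss.backward()
    dis_opt.step()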
The 'privacy filter' is essentially "Instagram-like" in the sense that it can be overlaid on a photo; it alters "very specific pixels" in the image to fool the face-detecting AI.
“The disruptive AI can ‘attack’ what the neural net for the face detection is looking for,” said Bose to U of T Engineering News. “If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.”
In testing, the pair was able "to reduce the number of detected faces to 0.5 per cent." The filter is not yet available to the public, but the duo hopes to make releasing it their next move.