Professor Parham Aarabi and graduate student Avishek Bose are using "neural net based constrained optimization" to disrupt face detection software. The University of Toronto researchers build on the established finding that "small, often imperceptible, perturbations can be added to images to fool a typical classification network into misclassifying them." Their dynamic "attack" algorithm "produc[es] small perturbations that, when added to an input face image, causes the pre-trained face detector to fail."

Aarabi and Bose designed two opposing neural networks: one that attempts to identify faces, and another that works to "disrupt" that identification. The two were trained with "adversarial training," a deep learning technique that pits two opposing AI algorithms against each other in a sort of digital cage match.

The resulting "privacy filter" is "Instagram-like" in the sense that it can be overlaid on photos; it changes "very specific pixels" in the photo to fool the AI that is trying to detect a face.

"The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose told U of T Engineering News. "If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they're less noticeable. It creates very subtle disturbances in the photo, but to the detector they're significant enough to fool the system."

In testing, the algorithm was able "to reduce the number of detected faces to 0.5 per cent." The filter is not yet available to the public, but the duo hopes to make releasing it their next move.
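The core idea, small input perturbations that push a detector's output below its decision threshold, can be illustrated with a toy example. The sketch below is not the researchers' actual method (their disruptor is itself a trained neural network attacking a real face detector); it uses a hypothetical logistic-regression "detector" and a simple FGSM-style gradient attack, with all weights and inputs made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "detector": a logistic model whose score > 0.5 means
# "face detected". It stands in for the pre-trained neural-network
# face detector in the real system.
w = np.linspace(-1.0, 1.0, 64)   # toy detector weights (not real)
x = 0.05 * w                     # toy input the detector fires on

clean_score = sigmoid(w @ x)     # above 0.5: the "face" is detected

# Gradient-based attack: nudge the input against the gradient of the
# detection score, keeping every per-pixel change inside a small
# imperceptibility bound eps (an L-infinity constraint).
eps, step = 0.1, 0.01
delta = np.zeros_like(x)
for _ in range(200):
    s = sigmoid(w @ (x + delta))
    grad = s * (1.0 - s) * w     # d(score)/d(input) for the logistic model
    delta = np.clip(delta - step * np.sign(grad), -eps, eps)

attacked_score = sigmoid(w @ (x + delta))
print(clean_score > 0.5, attacked_score < 0.5)  # detected before, missed after
```

The perturbation never exceeds eps on any coordinate, which is the "very specific pixels," imperceptible-change property the filter relies on; in the real system, a second network learns to produce such perturbations for arbitrary face photos.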