A new research project called Fawkes, from the University of Chicago Department of Computer Science, provides a powerful new protection mechanism against facial recognition. The software tool “cloaks” photos to trick the deep learning models that power facial recognition, without changes noticeable to the human eye, according to a write-up in the University's newspaper. The tool targets unauthorized use of personal images and has no effect on models built using legitimately obtained images, such as those used by law enforcement.

“It's about giving individuals agency,” Emily Wenger, a third-year PhD student and co-leader of the project with first-year PhD student Shawn Shan, told the newspaper. “We're not under any delusions that this will solve all privacy violations, and there are probably both technical and legal solutions to help push back on the abuse of this technology. But the purpose of Fawkes is to provide individuals with some power to fight back themselves, because right now, nothing like that exists.”

The technique builds on the fact that machines “see” images differently than humans. To a machine learning model, an image is simply an array of numbers representing each pixel, which systems known as neural networks mathematically organize. When trained on many photos of a person, these models learn a set of distinctive features for that face and can use those features to identify the person in new photos, a technique used in security systems.

For Fawkes (named for the Guy Fawkes mask worn by revolutionaries in the graphic novel V for Vendetta), Wenger and Shan, with collaborators Jiayun Zhang, Huiying Li, and UChicago Professors Ben Zhao and Heather Zheng, exploit this difference between human and computer perception to protect privacy. By changing a small percentage of the pixels, the approach dramatically alters how the person is perceived by the computer's “eye,” tainting the facial recognition model so that it labels real photos of the user with someone else's identity. To a human observer, however, the image appears unchanged.

In a paper to be presented at the USENIX Security Symposium next month, the researchers found that the method was nearly 100 percent effective at blocking recognition by state-of-the-art models from Amazon, Microsoft, and other companies. While it cannot disrupt existing models already trained on unaltered images downloaded from the internet, publishing cloaked images can eventually erase a person's online “footprint,” the authors said, rendering future models incapable of recognizing that individual.

“In many cases, we do not control all the images of ourselves online; some could be posted from a public source or posted by our friends,” Shan said. “In this scenario, Fawkes remains successful when the cloaked images outnumber the uncloaked ones. So for users who already have a lot of images online, one way to improve their protection is to release even more images of themselves, all cloaked, to balance out the ratio.”
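To make the cloaking idea described above concrete, here is a minimal sketch of the underlying principle: nudge pixels within a small, visually negligible budget so that a feature extractor maps the photo closer to a different identity. This is a toy illustration under simplifying assumptions, not the Fawkes algorithm itself; the random linear “feature extractor” and all names here are hypothetical stand-ins for a real deep model.

```python
# Toy sketch of image cloaking: perturb pixels within a small budget so a
# (simulated) feature extractor maps the image toward another identity.
# NOT the Fawkes algorithm; the linear projection stands in for a deep model.

import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64                 # grayscale image size, for simplicity
FEATURE_DIM = 128

# Hypothetical stand-in for a deep feature extractor: a fixed random
# linear projection from pixel space to feature space.
P = rng.normal(size=(FEATURE_DIM, H * W)) / np.sqrt(H * W)

def features(img: np.ndarray) -> np.ndarray:
    """Map an H x W image (values in [0, 1]) to a feature vector."""
    return P @ img.ravel()

user_img   = rng.random((H, W))   # stands in for a photo of the user
target_img = rng.random((H, W))   # stands in for a different identity

# Gradient steps in pixel space move the user's features toward the
# target's, while clipping each pixel change to a small budget (epsilon)
# keeps the edit imperceptible, in the spirit of the paper's "cloak".
epsilon, step, iters = 0.03, 0.01, 200
cloaked = user_img.copy()
for _ in range(iters):
    diff = features(cloaked) - features(target_img)
    grad = P.T @ diff             # gradient of 0.5*||f(x) - f(t)||^2 w.r.t. pixels
    cloaked -= step * grad.reshape(H, W)
    # project back into the epsilon-ball around the original, valid-range image
    cloaked = np.clip(cloaked, user_img - epsilon, user_img + epsilon)
    cloaked = np.clip(cloaked, 0.0, 1.0)

print("max pixel change:", np.abs(cloaked - user_img).max())
print("feature distance to own identity:",
      np.linalg.norm(features(cloaked) - features(user_img)))
print("feature distance to target identity:",
      np.linalg.norm(features(cloaked) - features(target_img)))
```

Fawkes itself runs a comparable optimization against real deep feature extractors, with a perceptual similarity budget in place of the simple per-pixel cap used in this sketch.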