A team of scientists from the Chinese University of Hong Kong, Indiana University, and Alibaba Inc. has created a baseball cap that can reliably fool facial-recognition software. The researchers laced the inside of the cap with tiny LEDs that project infrared dots onto “strategic spots” on the wearer's face, subtly altering their features in a technique known as “adversarial learning.” The device made a facial-recognition system called FaceNet misidentify its targets as various public figures (including musician Moby and Korean politician Lee Hoi-chang) 70 percent of the time. The tactic lets the attacker specify which face the classifier should “see”: the researchers were able to trick the software into recognizing arbitrary faces.

The experiment draws on the body of work on adversarial examples: blind spots in machine-learning models that can be systematically discovered and exploited to confuse classifiers.

The gadget used in the attack is not readily distinguishable from a regular ball cap, and the attack needs only a single photo of the person being impersonated in order to work out the correct light patterns. The 70% success rate was achieved in a “white-box” attack (where the classifier's workings are well understood), and the researchers believe they could extend the approach to a “black-box” attack (where the classifier's workings are a secret) using a technique called “Particle Swarm Optimization.”

From the paper:

In this paper, we discovered that infrared can be used by attackers to either dodge recognition or impersonate someone against machine-learning systems. To demonstrate the severity of the threat, we developed an algorithm to search for adversarial examples, and we designed an inconspicuous device to implement those examples in the real world. As showcases, photos were selected from the LFW data set as hypothetical victims; we successfully worked out adversarial examples and implemented them for those victims. We also conducted a large-scale study on the LFW data set, which showed that a single attacker could successfully attack over 70% of people, provided there is some similarity between them. Based on our findings and attacks, we conclude that face-recognition techniques today are still far from secure and reliable when applied to critical scenarios like authentication and surveillance. Researchers should pay more attention to the threat from infrared.
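For readers curious what a white-box impersonation attack looks like in code, here is a minimal sketch. It assumes gradient access to a face-embedding model (the toy TinyEmbedder network below is a stand-in for FaceNet, not the real model) and optimizes an additive perturbation confined to a masked region of the attacker's face so that the embedding moves toward the victim's. The researchers' actual attack optimizes physical parameters of the infrared spots rather than raw pixels, so treat this purely as an illustration of the underlying optimization, with all names and shapes hypothetical.

```python
# Hypothetical sketch of a white-box impersonation attack on a face-embedding
# model. TinyEmbedder is a toy stand-in for FaceNet; the mask stands in for
# the "strategic spots" the cap's infrared LEDs would illuminate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbedder(nn.Module):
    """Toy face-embedding network (stand-in for FaceNet)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        z = self.fc(self.conv(x).flatten(1))
        return F.normalize(z, dim=1)          # unit-length embedding

def impersonate(model, attacker_img, victim_img, mask, steps=300, lr=0.05):
    """Find a masked perturbation that pulls the attacker's embedding toward
    the victim's (white-box: gradients of the model are available)."""
    with torch.no_grad():
        target = model(victim_img)            # single photo of the victim
    delta = torch.zeros_like(attacker_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (attacker_img + delta * mask).clamp(0, 1)
        loss = 1 - F.cosine_similarity(model(adv), target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (attacker_img + delta.detach() * mask).clamp(0, 1)

if __name__ == "__main__":
    model = TinyEmbedder().eval()
    attacker = torch.rand(1, 3, 160, 160)     # placeholder face images
    victim = torch.rand(1, 3, 160, 160)
    mask = torch.zeros(1, 1, 160, 160)
    mask[..., 40:80, 50:110] = 1.0            # region mimicking the IR spots
    adv = impersonate(model, attacker, victim, mask)
    print(F.cosine_similarity(model(adv), model(victim)))
```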
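The black-box variant the researchers envision would rely on Particle Swarm Optimization, which needs only a similarity score from the recognizer rather than gradients. The sketch below shows a generic PSO loop over hypothetical spot parameters; query_similarity is a placeholder objective standing in for “render the spots on the attacker's face and ask the black-box matcher how victim-like it looks,” not the researchers' actual pipeline.

```python
# Hypothetical sketch of a black-box search with Particle Swarm Optimization.
# Each particle encodes candidate LED-spot parameters (e.g. x, y, radius,
# intensity for three spots); the swarm maximizes the matcher's score.
import numpy as np

rng = np.random.default_rng(0)

def query_similarity(params):
    """Placeholder black-box objective returning a score in (0, 1].
    A real attacker would submit a photo with the spots applied."""
    target = np.linspace(0.2, 0.8, params.size)   # arbitrary optimum
    return float(np.exp(-np.sum((params - target) ** 2)))

def pso(dim=12, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(0, 1, (particles, dim))     # spot parameters in [0, 1]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([query_similarity(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    gbest_val = pbest_val.max()
    for _ in range(iters):
        r1 = rng.uniform(size=pos.shape)
        r2 = rng.uniform(size=pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        vals = np.array([query_similarity(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.max() > gbest_val:
            gbest, gbest_val = pos[vals.argmax()].copy(), vals.max()
    return gbest, gbest_val

if __name__ == "__main__":
    best_params, best_score = pso()
    print("best similarity score:", round(best_score, 4))
```

PSO is attractive here precisely because it treats the recognizer as an opaque scoring function: no gradients, just repeated queries, which matches what an attacker facing a deployed system would actually have.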