A team of scientists from the University of California, Berkeley has conducted a study showing that it is possible to construct images that a well-known face detection algorithm recognizes as containing faces, yet that no human would consider a face.

The team created patterns that trick the Viola-Jones 2D face detection algorithm, which accepts a grayscale image and produces as output a boolean value indicating whether the image contains a face.

The study by Michael McCoyd and David Wagner, titled "Spoofing 2D Face Detection: Machines See People Who Aren't There", includes an approach that uses a feedback-guided search algorithm to construct an image that Viola-Jones recognizes as a face, yet that is unlikely to be recognized as one by a human.

"We have shown that deliberate spoof images can be created that do not appear to humans as faces, yet Viola-Jones often detects as faces, even after passing through a simulated physical world", the team concludes.

While the spoofing certainly wouldn't fool a human, it could be used as an attack on automated systems.

"Why study facial detection, rather than facial recognition? Ultimately, it is the ability to fool facial recognition that matters. However, facial detection algorithms are simpler to study. In this paper we focus on the security properties of facial detection … We view this as a first step towards the longer-term goal of analysing facial recognition."
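To give a sense of what a feedback-guided search against a detector can look like, here is a minimal hill-climbing sketch. It is not the paper's actual procedure: the scoring function, parameters, and names below are stand-ins, and the toy score simply rewards brightness where a real attack would query the detector's confidence.

```python
import random

def spoof_search(detector_score, width, height, steps=500, seed=0):
    """Greedy feedback-guided search: start from a blank grayscale
    image and keep random single-pixel changes that do not lower
    the detector's score. (A simplified stand-in, not the paper's
    exact method.)"""
    rng = random.Random(seed)
    img = [[0] * width for _ in range(height)]
    best = detector_score(img)
    for _ in range(steps):
        y, x = rng.randrange(height), rng.randrange(width)
        old = img[y][x]
        img[y][x] = rng.randrange(256)   # try a random new intensity
        score = detector_score(img)
        if score >= best:
            best = score                 # keep the change
        else:
            img[y][x] = old              # revert it

    return img, best

# Toy "detector" score used only for illustration: total brightness.
# An attacker would instead use feedback from Viola-Jones itself.
def toy_score(img):
    return sum(sum(row) for row in img)

img, best = spoof_search(toy_score, width=8, height=8)
```

The key point the sketch illustrates is that the attacker never needs to understand the detector internally: a scalar feedback signal is enough to steer the image toward whatever the detector considers a face.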