State-of-the-art facial recognition technologies are increasingly credited with mitigating bias.

NIST, the US standards agency whose evaluations serve as a leading industry benchmark for face recognition software, has released two research publications since splitting its Face Recognition Vendor Test (FRVT) program into two divisions – the Face Recognition Technology Evaluation (FRTE) and the Face Analysis Technology Evaluation (FATE).

The Face Analysis Technology Evaluation (FATE) track is being conducted to support the assessment of “quality component algorithms” that check whether subject photos are acceptable. Alongside the standards applied to face analysis software, subjects must also meet capture requirements that help algorithms identify their physical characteristics clearly, such as a front-facing view, open eyes, a neutral expression, and a neutral background.

The track’s checks on facial photographs derive from ISO/IEC 19794-5:2011, a standard that established subject and photo-capture requirements for enrolling images into the European Entry/Exit System.

By facilitating quality tests, both tracks are designed to inform “developers, end users, standards processes, and policy and decision makers” about the technical capabilities of biometric algorithms.

In the Part 11 track, various frontal and non-frontal images were evaluated to assess the software’s pose-estimation accuracy, while human inspection was used to identify blurred images.

The purpose of the FATE assessment was not to detect blurry images submitted in any particular application, such as applying for a new passport, but rather to assess the quality of the face analysis software itself.
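Blur is one of the image defects such quality checks target. A common illustrative sharpness measure (not necessarily the one used by the NIST-tested algorithms) is the variance of the Laplacian, which scores low on flat, detail-free images. A minimal sketch, assuming a greyscale image as a NumPy array:

```python
import numpy as np

def laplacian_variance(image: np.ndarray) -> float:
    """Variance of the Laplacian: a common sharpness score.

    Low values suggest a blurry image. Illustrative only; not the
    specific metric used by the algorithms NIST evaluated.
    """
    # 4-neighbour Laplacian computed via shifted differences
    lap = (
        -4.0 * image[1:-1, 1:-1]
        + image[:-2, 1:-1] + image[2:, 1:-1]
        + image[1:-1, :-2] + image[1:-1, 2:]
    )
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))       # high-frequency detail -> high score
blurry = np.full((64, 64), 0.5)    # flat image -> zero score
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

A real quality component would compare this score against a calibrated threshold before accepting an enrolment photo.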

Joyce Yang, who co-authored one study, wrote that the tested algorithms produced mixed results across the 20 measures, and reported that the results will contribute to a developing standard, ISO/IEC 29794-5, which outlines the guidelines for quality-check algorithms that detect faults within images.

Seven algorithms from five developers were tested against 20 measures based on international passport standards.

The algorithms should be capable of detecting non-compliant images, whether caused by poor photo quality or by deliberate alteration or disguise of the person’s physical identity.

The first study, co-written by Mei Ngan and titled Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms, addressed the “accuracy of passive face presentation attack detection (PAD) algorithms”.

A NIST research laboratory team also ran a rapid evaluation of algorithms’ ability to detect 2D presentation attack instruments in still and video imagery of human faces.

This assessment tasked 82 algorithms, volunteered by 45 developers, with distinguishing non-attack images from demonstrable presentation attacks. Nine types of presentation attack were considered, spanning goals such as impersonating another identity or creating a brand-new legal identity, carried out by means such as wearing masks or holding up a photograph of a different face.

Some algorithms worked well, but Ngan concluded that no single algorithm could detect all types of presentation attack, although combining different algorithms did boost performance.
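One simple way to combine detectors, shown here purely as an illustration of the idea (the algorithm names and scores are made up, and the report does not specify the fusion method), is mean-score fusion: average each image’s attack scores across algorithms, then threshold the result.

```python
import statistics

# Hypothetical per-image attack scores (0 = bona fide, 1 = attack)
# from three illustrative PAD algorithms; names and values invented.
scores_by_algorithm = {
    "pad_a": [0.10, 0.85, 0.40],
    "pad_b": [0.20, 0.60, 0.90],
    "pad_c": [0.05, 0.95, 0.70],
}

def fuse(scores_by_algorithm: dict[str, list[float]]) -> list[float]:
    """Mean-score fusion: average each image's scores across algorithms."""
    per_image = zip(*scores_by_algorithm.values())
    return [statistics.mean(s) for s in per_image]

fused = fuse(scores_by_algorithm)
threshold = 0.5
decisions = ["attack" if s > threshold else "bona fide" for s in fused]
print(decisions)  # ['bona fide', 'attack', 'attack']
```

Fusion helps here because an attack type missed by one detector can still be flagged by another, pulling the averaged score over the threshold.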

Read more:

  • Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms (NIST.IR.8491)
  • Face Analysis Technology Evaluation (FATE) Part 11: Face Image Quality Vector Assessment – Specific Image Defect Detection (NIST.IR.8485)

NIST will be speaking at Identity Week America 2023, running from 3 to 4 October 2023.

Speakers are:

  • Ryan Galluzzo, Digital Identity Program Lead, NIST
  • Patrick Grother, Biometrics Evaluator, NIST
  • Andrew Regenscheid, PIV Technical Lead, NIST
  • David Temoshok, Senior Advisor, NIST
  • Mei Ngan, Computer Scientist, NIST