The accuracy of face recognition systems can be skewed by the racial attributes of the training data used for their calibration, say experts.

Alice O'Toole, head of the face perception research lab at the University of Texas at Dallas, told MIT's Technology Review that face matching systems can unintentionally produce different results depending on the data set. "If your training set is strongly biased toward a particular race, your algorithm will do better recognizing that race," O'Toole told the magazine.

O'Toole says that she and her colleagues found in 2011 that an algorithm developed in Western countries was better at recognizing Caucasian faces than East Asian faces. Likewise, East Asian algorithms performed better on East Asian faces than on Caucasian ones.

That 2011 study, co-authored by one of the organisers of NIST's vendor tests, found that algorithms developed in China, Japan, and South Korea recognized East Asian faces far more readily than Caucasian ones. The reverse was true for algorithms developed in France, Germany, and the United States, which were significantly better at recognizing Caucasian facial characteristics. This suggests that the conditions in which an algorithm is created, particularly the racial makeup of its development team and its test photo databases, can influence the accuracy of its results.

Similarly, a 2012 study that used a collection of mug shots from Pinellas County, Florida, to test the algorithms of three commercial vendors also uncovered evidence of racial bias.
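The kind of skew these studies describe only becomes visible when accuracy is broken out by demographic group rather than reported as a single aggregate figure. The sketch below is a minimal, hypothetical illustration of that evaluation step in Python; the model interface, evaluation pairs, and group labels are all assumptions for illustration, not details drawn from the studies above.

```python
# Minimal sketch: measuring face-matching accuracy per demographic group.
# The names here (pairs, predict_match, the group labels) are hypothetical
# stand-ins for whatever model and labelled evaluation set is in use.
from collections import defaultdict

def accuracy_by_group(pairs, predict_match):
    """pairs: iterable of (face_a, face_b, same_person, group) tuples.
    predict_match: callable that returns True when the model judges
    the two face images to show the same person."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for face_a, face_b, same_person, group in pairs:
        total[group] += 1
        if predict_match(face_a, face_b) == same_person:
            correct[group] += 1
    # Per-group accuracy; a training set skewed toward one group tends
    # to show up as a gap between these numbers, e.g.
    # {'caucasian': 0.95, 'east_asian': 0.88} for a Western-trained model.
    return {g: correct[g] / total[g] for g in total}
```

Reporting results this way is what allowed the 2011 cross-regional comparison to surface the asymmetry at all: a single overall accuracy number would have hidden the gap between groups.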