A team from MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL) is working on a solution to racial bias in face recognition datasets.

"Last year's study showing the racism of face-recognition algorithms demonstrated a fundamental truth about AI: If you train with biased data, you'll get biased results," Adam Conner-Simons, CSAIL communications and media relations officer, wrote in a blog post, citing the Institute's 2018 Gender Shades project.

The researchers created a new algorithm that can reportedly "de-bias" data automatically by resampling it to be more balanced.

As described by Conner-Simons, the algorithm learns a specific task, such as face detection, along with the underlying structure of the training data, which allows it to identify and minimize any hidden biases. In tests based on the same facial-image dataset developed last year by researchers from the MIT Media Lab, the algorithm reduced "categorical bias" by more than 60 percent compared to state-of-the-art facial detection models, while maintaining the overall precision of those systems.

Most existing approaches in this field require at least some level of human input: someone has to define the specific biases that researchers want the system to learn. In contrast, the MIT team's algorithm can look at a dataset, learn what is intrinsically hidden inside it, and automatically resample it to be more fair, without needing a programmer in the loop.

"Facial classification in particular is a technology that's often seen as 'solved,' even as it's become clear that the datasets being used often aren't properly vetted," says PhD student Alexander Amini, co-lead author on a related paper presented this week at the Conference on Artificial Intelligence, Ethics and Society (AIES). "Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement and other domains."
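The core idea described above, learning the latent structure of a training set and then resampling toward its rare regions, can be sketched briefly. The snippet below is an illustrative sketch only, not the MIT team's implementation: it assumes an encoder has already produced a latent feature vector for every training image, approximates the latent density with independent per-dimension histograms, and draws training batches with probability inversely proportional to that density so under-represented faces are seen more often. The function name, parameters, and the independence assumption are all hypothetical choices made for this example.

```python
import numpy as np

def debias_sampling_weights(latents, n_bins=10, smoothing=0.01):
    """Compute per-example sampling probabilities that up-weight rare
    regions of a learned latent space (illustrative sketch only)."""
    n, d = latents.shape
    density = np.ones(n)
    for j in range(d):
        # Histogram of this latent dimension over the whole training set.
        hist, edges = np.histogram(latents[:, j], bins=n_bins, density=True)
        # Bin index of each example along this dimension.
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, n_bins - 1)
        # Treat dimensions as independent: multiply the (smoothed) densities.
        density *= hist[idx] + smoothing
    weights = 1.0 / density          # rare latent combinations get large weight
    return weights / weights.sum()   # normalize to a probability distribution

# Hypothetical usage with stand-in latent features; in practice these would
# come from an encoder trained on the same face images.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 8))
probs = debias_sampling_weights(latents)
batch = rng.choice(len(latents), size=32, p=probs, replace=False)
```

Since the blog post says the model learns the task and the structure of the training data at the same time, in practice weights like these would presumably be recomputed as the learned representation changes during training; the sketch only shows the resampling step in isolation.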