Facial recognition is becoming more pervasive in consumer products and law enforcement, backed by increasingly powerful machine-learning technology. But a test of commercial facial-analysis services from IBM and Microsoft raises concerns that the systems scrutinizing our features are significantly less accurate for people with dark skin.
Researchers tested features of Microsoft's and IBM's face-analysis services that are supposed to identify the gender of people in photos. The companies' algorithms proved almost perfect at identifying the gender of men with lighter skin, but frequently erred when analyzing images of women with dark skin.
The skewed accuracy appears to stem from the underrepresentation of darker skin tones in the training data used to create the face-analysis algorithms.
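To make the nature of the finding concrete, here is a minimal sketch of how an audit like this can quantify a disparity, assuming a hypothetical set of labeled test photos; the field names and sample records are illustrative, not taken from either company's service.

```python
from collections import defaultdict

# Hypothetical audit records: each entry pairs a model's predicted gender
# with the ground-truth label and the subject's demographic subgroup.
# These records are illustrative, not real API output.
predictions = [
    {"group": "lighter-skinned male", "predicted": "male", "actual": "male"},
    {"group": "darker-skinned female", "predicted": "male", "actual": "female"},
    {"group": "darker-skinned female", "predicted": "female", "actual": "female"},
]

def error_rates_by_group(records):
    """Return the misclassification rate for each demographic subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Comparing error rates across subgroups, rather than reporting a single
# overall accuracy, is what surfaces a skew like the one described above.
for group, rate in error_rates_by_group(predictions).items():
    print(f"{group}: {rate:.1%} error rate")
```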
The disparity is the latest example in a growing list of cases in which AI systems appear to have absorbed societal biases against certain groups. Google's photo-organizing service, for example, still censors the search terms "gorilla" and "monkey" after an incident nearly three years ago in which its algorithms labeled black people as gorillas. How to ensure that machine-learning systems deployed in consumer products, commercial systems, and government programs treat everyone fairly has become a major topic of discussion in the field of AI.
A 2016 report from Georgetown described wide-ranging, largely unregulated deployment of facial recognition by the FBI, as well as by local and state police forces, and presented evidence that the systems in use were less accurate for African-Americans.