Dive Brief:

  • Top facial recognition systems misidentify people of color at higher rates than white people, according to a federal study from the National Institute of Standards and Technology (NIST).
  • Asian and African American faces were misidentified up to 100 times more frequently than white faces in some algorithms when conducting one-to-one matching. In one-to-many matching (the type of search done by law enforcement investigators, which compares an image against a database of other faces), African American women were falsely identified most frequently; the sketch after these bullets illustrates the difference between the two modes.
  • The rate of misidentification can lead to false accusations and even to security lapses that grant access to impostors, according to the report. The study assessed 189 software algorithms from 99 developers, representing a majority of the industry.
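For readers unfamiliar with the two search modes NIST evaluated, the sketch below contrasts them using random vectors in place of real face embeddings. The embedding size, distance metric and threshold are illustrative assumptions, not details from the NIST study or any vendor's system.

```python
# Minimal sketch of one-to-one (verification) vs. one-to-many
# (identification) matching. Random vectors stand in for real face
# embeddings; EMBED_DIM and THRESHOLD are assumed values.
from typing import Optional

import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128        # assumed embedding size
THRESHOLD = 0.35       # assumed match threshold on cosine distance

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance: 0 means identical direction, 2 means opposite."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """Verification: does the probe match one claimed identity?
    A false positive here lets an impostor through (e.g. phone unlock)."""
    return cosine_distance(probe, enrolled) < THRESHOLD

def one_to_many(probe: np.ndarray, gallery: np.ndarray) -> Optional[int]:
    """Identification: search a whole database of faces, as an
    investigator would. Returns the index of the closest gallery entry
    if it clears the threshold, else None. A false positive here can
    point police at the wrong person."""
    distances = np.array([cosine_distance(probe, g) for g in gallery])
    best = int(distances.argmin())
    return best if distances[best] < THRESHOLD else None

# Toy data: one probe embedding and a 1,000-face gallery.
probe = rng.normal(size=EMBED_DIM)
gallery = rng.normal(size=(1000, EMBED_DIM))

print("1:1 match against gallery[0]:", one_to_one(probe, gallery[0]))
print("1:N best match index:", one_to_many(probe, gallery))
```

The key difference is scale: a one-to-many search multiplies the opportunities for a false match by the size of the database, which is why error-rate differences matter so much in investigative use.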

Dive Insight:

The findings are sure to throw more momentum behind efforts to regulate or restrict the growing use of facial recognition technology by governments and private businesses, especially in law enforcement. 
More police departments have explored using the technology for crowd control or to identify suspects. The Federal Bureau of Investigation (FBI) has run more than 390,000 facial recognition searches since 2011, according to The Washington Post. The Department of Homeland Security has also said it could use facial recognition at the border and for travelers, although it dropped plans this month to seek permission to use the technology to scan travelers entering and leaving the country.
Concerned about the potential civil rights implications, cities like San Francisco; Oakland, CA; and Somerville, MA passed bans on government use of the software. Portland, OR is weighing a ban that would extend to private businesses as well. But those restrictions might prove more difficult to enforce than previously thought.
San Francisco is poised to amend its restriction on facial recognition technology, permitting city employees to use devices like Apple products that already have facial recognition built in as an unlocking feature.
Some Democratic lawmakers, who have explored restrictions on the tools, said the NIST report should give users pause. NIST reviewed algorithms from tech giants like Intel and Microsoft, although notably the study does not cover the algorithm Amazon uses in its Rekognition system, which has been marketed to police departments. 
Among U.S.-developed algorithms, one-to-one matching (the kind of verification used to unlock a cellphone or confirm identity at a border) produced high rates of false positives for Asians, African Americans and American Indians, with American Indians showing the highest rate. However, algorithms developed in Asian countries did not show the same high false positive rates.
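To make that metric concrete, here is a minimal sketch of how a per-group false positive rate is computed. All of the figures are made up for illustration; none come from the NIST report.

```python
# A false positive in verification means two different people were
# scored as a match. The rate is false matches divided by impostor
# trials, computed separately per demographic group.
impostor_trials = {        # comparisons between different people
    "Group A": 100_000,
    "Group B": 100_000,
}
false_matches = {          # impostor pairs wrongly accepted
    "Group A": 10,
    "Group B": 1_000,      # 100x the false match rate of Group A
}

for group, trials in impostor_trials.items():
    fpr = false_matches[group] / trials
    print(f"{group}: false positive rate = {fpr:.5f}")
```

A gap like the hypothetical one above, where one group's rate is 100 times another's, is the kind of differential the study flagged in the worst-performing algorithms.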
In a statement, NIST computer scientist Patrick Grother said that finding was an "encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data." Overall, he said, the report should help policymakers and developers think "about the limitations and appropriate use of these algorithms."