Raymond Fu and Tina Eliassi-Rad Featured in News@Northeastern: Humans Are Trying To Take Bias Out Of Facial Recognition Programs. It’s Not Working – Yet.

Relying entirely on a computer system’s identification, police in Detroit arrested Robert Julian-Borchak Williams in January of last year. Facial recognition software had matched his driver’s license photo to grainy security video footage from a watch store robbery. There was only one problem: A human eye could tell that it was not Williams in the footage.

What went wrong? Williams and the suspect are both Black, and facial recognition technology disproportionately misidentifies people of color.

One significant reason for that is likely the lack of diversity in the datasets that underpin computer vision, the field of artificial intelligence that trains computers to interact with the visual world. Researchers are working to remedy that by providing computer vision algorithms with datasets that represent all groups equally and fairly.

But even that process may perpetuate biases, according to new research by Zaid Khan, a Ph.D. student in computer engineering at Northeastern University, and his advisor, Raymond Fu, a professor in both the College of Engineering and the Khoury College of Computer Sciences.

“The story doesn’t start and end with the data. It’s the whole pipeline that matters,” says Tina Eliassi-Rad, a professor in the Khoury College of Computer Sciences who was not involved in Khan’s study. “Even if your data was aspirational, you’re still going to have issues.”


See the full article at News@Northeastern.