Facial recognition systems are increasingly common, but a growing body of research suggests many of these technologies have significant bias problems to overcome. For Amazon, those hurdles could amount to a threat to civil rights, specifically in the form of gender and racial bias.
Amazon's deep-learning software, called Rekognition, has emerged as a leader in the field of facial recognition. But according to a study published this week by the MIT Media Lab, the technology excelled only when identifying the gender of lighter-skinned men; it had trouble discerning gender when the individual was a woman or had darker skin. Specifically, in the tests, Rekognition misidentified women as men 19 percent of the time, and darker-skinned women as men 31 percent of the time. It made no mistakes when identifying men with light skin. The researchers compared Amazon's technology to facial recognition technologies from IBM and Microsoft and found Amazon's to be the least accurate.
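To make the numbers concrete, the disparities above are per-group error rates: the share of test images in each demographic group whose gender was misclassified. The sketch below is a minimal illustration of that kind of tally using made-up records; it is not the study's data or methodology, and the group labels and values are assumptions for demonstration only.

```python
from collections import defaultdict

# Hypothetical records: (true gender, predicted gender, demographic group).
# Illustrative values only; not drawn from the MIT Media Lab study.
records = [
    ("female", "male", "darker-skinned women"),
    ("female", "female", "darker-skinned women"),
    ("female", "male", "lighter-skinned women"),
    ("female", "female", "lighter-skinned women"),
    ("male", "male", "lighter-skinned men"),
    ("male", "male", "darker-skinned men"),
]

# Count misclassifications and totals per group.
errors = defaultdict(int)
totals = defaultdict(int)
for true_label, predicted_label, group in records:
    totals[group] += 1
    if predicted_label != true_label:
        errors[group] += 1

# The gap between groups' error rates is the accuracy disparity
# the researchers reported.
for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: {rate:.0%} misclassified")
```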
Last year, studies found similar problems in the facial recognition technology built by IBM, Microsoft and Megvii. Following those studies, Microsoft and IBM both promised to improve their software. Joy Buolamwini co-authored both studies; she and her co-authors emphasized the need for policy and regulation to prevent abuse of the technology.
“Consequently, the potential for weaponization and abuse of facial analysis technologies cannot be ignored nor the threats to privacy or breaches of civil liberties diminished even as accuracy disparities decrease,” the authors wrote. “More extensive explorations of policy, corporate practice and ethical guidelines is thus needed to ensure vulnerable and marginalized populations are protected and not harmed as this technology evolves.”
Matt Wood, general manager of artificial intelligence at Amazon Web Services, provided various media outlets with a statement about the study, saying it is “not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” and arguing that the paper does not reflect how a customer would use the technology.
Many civil rights groups have been asking Amazon to stop selling the technology to law enforcement and the government, a request that is likely to be reiterated following this report.
In May 2018, the American Civil Liberties Union led an initiative to ask Amazon to stop selling its technology to law enforcement and other government agencies. The letter read, in part:
Amazon also encourages the use of Rekognition to monitor ‘people of interest,’ raising the possibility that those labeled suspicious by governments—such as undocumented immigrants or Black activists—will be targeted for Rekognition surveillance. Amazon has even advertised Rekognition for use with officer body cameras, which would fully transform those devices into mobile surveillance cameras aimed at the public.
“Amazon Rekognition is primed for abuse in the hands of governments,” the group continued in its letter. “This product poses a grave threat to communities, including people of color and immigrants, and to the trust and respect Amazon has worked to build.”
Last week, shareholders put pressure on Amazon, asking the company to stop selling the technology to the government. The resolution, filed by investors representing $1.32 billion in assets under management, was organized by the nonprofit Open MIC.
“It’s a familiar pattern: a leading tech company marketing what is hailed as breakthrough technology without understanding or assessing the many real and potential harms of that product,” Michael Connor, Executive Director of Open MIC, said in a statement. “Sales of Rekognition to government represent considerable risk for the company and investors. That’s why it’s imperative those sales be halted immediately.”
The results of the MIT study are scheduled to be presented at the Association for the Advancement of Artificial Intelligence’s conference on Artificial Intelligence, Ethics, and Society next week.