Amid a national reckoning over racial inequality, companies are increasingly moving to scrutinize bias in their products and practices. In the tech sector, leading companies are rethinking their relationships with law enforcement, particularly when it comes to facial recognition software.
IBM, Amazon, and Microsoft have announced plans to curtail the sale of facial recognition software to police departments. IBM will exit the business entirely, while Amazon and Microsoft are pausing or ending their police contracts.
Critics of facial recognition software point out that misidentifications occur more frequently with people of color, especially Black women. Research from the Algorithmic Justice League suggests that the lack of diversity in the datasets used to train these systems may be responsible for these errors. Because decades of biased policing have left Black Americans disproportionately represented in criminal databases, such inaccuracies can obstruct justice and endanger innocent lives.
The recent announcements represent necessary steps toward more responsible use of AI technology. Yet Amazon’s and Microsoft’s new policies don’t extend to federal agencies, nor do they cover other surveillance and predictive-policing products. Some advocates are calling for broader bans or recalls. Many tech companies, for their part, publicly support government regulation in this space.
The recent debate over police use of facial recognition occurs alongside discussion about the privacy implications of the use of technology in test-and-trace efforts combatting the spread of COVID-19. The greater impact of the pandemic on communities of color in the U.S. demonstrates how these conversations are deeply intertwined.
While support for regulation and more robust internal policies is a start, tech companies must think more deeply about the ethical implications of how their products are used.
Innovation and disruption cannot come at the cost of accountability and equal justice.