A group of American mathematicians wrote a letter in the journal Notices of the American Mathematical Society earlier this month calling on their disciplinary peers to stop working with law enforcement officials on predictive policing software. Such software purports to algorithmically "predict" where crimes will occur so that police can allocate resources accordingly; yet studies have found that it tends to exhibit algorithmic bias that reproduces racial inequality.
"In light of the extrajudicial murders by police of George Floyd, Breonna Taylor, Tony McDade and numerous others before them, and the subsequent brutality of the police response to protests, we call on the mathematics community to boycott working with police departments," the letter states. Its authors go on to explain how many mathematicians work with police departments to develop models and data that will ostensibly help prevent crime. They cite as one example how the Institute for Computational and Experimental Research in Mathematics (ICERM) sponsored a workshop on predictive policing.
The authors also express concern over how artificial intelligence, facial recognition technologies and the use of machine learning could exacerbate systemic racism.
"Given the structural racism and brutality in US policing, we do not believe that mathematicians should be collaborating with police departments in this manner," the authors state. "It is simply too easy to create a 'scientific' veneer for racism. Please join us in committing to not collaborating with police. It is, at this moment, the very least we can do as a community."
They urge that any algorithm with a potentially high impact receive a public audit, and they call for "mathematicians to work with community groups, oversight boards, and other organizations dedicated to developing alternatives to oppressive and racist practices." Finally, they argue that universities with data science courses should "implement learning outcomes that address the ethical, legal, and social implications of these tools."
Predictive policing technology is designed to identify which neighborhoods are supposedly more likely to experience violent crime and which individuals are supposedly more likely to either perpetrate it or be victims of it. Yet research from groups like the Human Rights Data Analysis Group (HRDAG) indicates that predictive policing reinforces racist practices among police officers because it often relies on data that is compromised by racial biases.
"Neighborhoods with lots of police calls aren't necessarily the same places the most crime is happening," William Isaac and Andi Dixon of HRDAG wrote in 2017. "They are, rather, where the most police attention is — though where that attention focuses can often be biased by gender and racial factors." Their research found that "predictive policing vendor PredPol's purportedly race-neutral algorithm targeted black neighborhoods at roughly twice the rate of white neighborhoods when trained on historical drug crime data from Oakland, California. We found similar results when analyzing the data by income group, with low-income communities targeted at disproportionately higher rates compared to high-income neighborhoods." This is in spite of the fact that estimates from public health surveys and population models indicate that illegal drug activity in that city occurs approximately evenly across racial and income groups.
As Matthew Harwood and Jay Stanley from the American Civil Liberties Union observed:
Even if the data underlying most predictive policing software accurately anticipates where crime will indeed occur — and that's a gigantic if — questions of fundamental fairness still arise. Innocent people living in or passing through identified high crime areas will have to deal with an increased police presence, which, given recent history, will likely mean more questioning or stopping and frisking — and arrests for things like marijuana possession for which more affluent citizens are rarely brought in.