A recent ProPublica analysis of The Princeton Review’s prices for online SAT tutoring shows that customers in areas with a high density of Asian residents are often charged more. When presented with this finding, The Princeton Review called it an “incidental” result of its geographic pricing scheme. The case illustrates how even a seemingly neutral pricing model can lead to inadvertent bias — bias that’s hard for consumers to detect and even harder to challenge or prove.
Over the past several decades, an important tool for assessing and addressing discrimination has been the “disparate impact” theory. Attorneys have used this idea to successfully challenge policies that have a discriminatory effect on certain groups of people, whether or not the entity that crafted the policy was motivated by an intent to discriminate. It’s been deployed in lawsuits involving employment decisions, housing and credit. Going forward, the question is whether the theory can be applied to bias that results from new technologies that use algorithms.
Over the years, several disparate impact cases have made their way to the Supreme Court and lower courts, most having to do with employment discrimination. This June, the Supreme Court’s decision in Texas Dept. of Housing and Community Affairs v. Inclusive Communities Project, Inc. affirmed the use of the disparate impact theory to fight housing discrimination. The Inclusive Communities Project had used a statistical analysis of housing patterns to show that a tax credit program effectively segregated Texans by race. Sorelle Friedler, a computer science researcher at Haverford College and a fellow at Data & Society, called the Court’s decision “huge,” both “in favor of civil rights…and in favor of statistics.”
So how will the courts address algorithmic bias? From retail to real estate, from employment to criminal justice, the use of data mining, scoring software and predictive analytics programs is proliferating rapidly. Software that makes decisions based on data like a person’s ZIP code can reflect, or even amplify, the results of historical or institutional discrimination. “[A]n algorithm is only as good as the data it works with,” Solon Barocas and Andrew Selbst write in their article “Big Data’s Disparate Impact,” forthcoming in the California Law Review. “Even in situations where data miners are extremely careful, they can still effect discriminatory results with models that, quite unintentionally, pick out proxy variables for protected classes.”
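To see how that can happen, consider a small, entirely hypothetical simulation (the data, the ZIP-code grouping and the numbers below are invented for illustration): a model that is never shown a protected attribute, but is trained on a feature correlated with it, can still reproduce the disparity baked into the historical outcomes it learns from.

```python
"""
A minimal sketch of the "proxy variable" problem Barocas and Selbst describe.
All data here is synthetic and illustrative; no real pricing, lending or
hiring system is being modeled.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to the model).
group = rng.integers(0, 2, size=n)

# A "neutral" feature, a ZIP-code region, that correlates strongly with
# group membership -- a stand-in for residential segregation.
zip_region = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical outcomes (e.g., past approvals) were biased against group 1.
approval_rate = np.where(group == 0, 0.7, 0.4)
approved = (rng.random(n) < approval_rate).astype(int)

# Train only on the "neutral" feature.
model = LogisticRegression().fit(zip_region.reshape(-1, 1), approved)
predicted = model.predict(zip_region.reshape(-1, 1))

# The disparity survives even though the model never saw `group`.
for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"predicted approval rate, group {g}: {rate:.2f}")
```

In this sketch the model’s predicted approval rates differ sharply by group even though group membership was never an input; the ZIP-code region does the work of the protected attribute.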
It’s troubling enough when Flickr’s auto-tagging of online photos labels pictures of black men as “animal” or “ape,” or when researchers determine that Google search results for black-sounding names are more likely to be accompanied by ads about criminal activity than search results for white-sounding names. But what about when big data is used to determine a person’s credit score, ability to get hired, or even the length of a prison sentence?
Because disparate impact theory is results-oriented, it would seem to be a good way to challenge algorithmic bias in court. A plaintiff would only need to demonstrate bias in the results, without having to prove that a program was conceived with bias as its goal. But there is little legal precedent. Barocas and Selbst argue in their article that expanding disparate impact theory to challenge discriminatory data-mining in court “will be difficult technically, difficult legally, and difficult politically.”
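One way to get a feel for what a results-only showing looks like is the “four-fifths” rule of thumb long used as a screen in employment cases, which flags a group whose selection rate falls below 80 percent of the most-favored group’s rate. The snippet below is a minimal, illustrative sketch of that arithmetic, with made-up applicant data; it is not drawn from any actual case or tool.

```python
"""
A rough sketch of a results-only disparate impact check: compare outcome
rates across groups without asking anything about intent. The applicant
data is invented for illustration.
"""
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, selected_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative data: 50 applicants per group, different selection rates.
data = [("group_a", i < 30) for i in range(50)] + \
       [("group_b", i < 18) for i in range(50)]

for group, ratio in disparate_impact_ratios(data).items():
    flag = "below the four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A plaintiff’s statistical showing in a real case is far more involved, but the logic is the same: the comparison is between outcomes, not intentions.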
Some researchers argue that it makes more sense to design systems from the start in a more considered and discrimination-conscious way. Barocas and Moritz Hardt established a traveling workshop called Fairness, Accountability and Transparency in Machine Learning to encourage other computer scientists to do just that. Some of their fellow organizers are also developing tools they hope companies and government agencies could use to test whether their algorithms yield discriminatory results and to fix them when necessary. Some legal scholars (including the University of Maryland’s Danielle Keats Citron and Frank Pasquale) argue for the creation of new regulations or even regulatory bodies to govern the algorithms that make increasingly important decisions in our lives.
There still exists “a large legal difference between whether there is explicit legal discrimination or implicit discrimination,” said Friedler, the computer science researcher. “My opinion is that, because more decisions are being made by algorithms, that these distinctions are being blurred.”