Past injustice and future harm: Deborah Hellman on the stakes of algorithmic decision-making

 
Deborah Hellman, professor of law at the University of Virginia, spoke at the Schwartz Reisman Institute's weekly seminar series about the ways in which algorithmic decision-making can exacerbate the already-present possibility of "compounding injustice."

The injustice that AI decision-making systems could end up perpetrating has been a topic of great concern and debate.

However, most of the discussion so far has focused on injustices caused by a system that inaccurately classifies members of certain groups. For example, the ProPublica exposé on COMPAS, a recidivism risk-assessment algorithm used in criminal justice decisions, focused on the fact that Black defendants were more likely to be falsely identified as high-risk for recidivism, while white defendants were more likely to be falsely identified as low-risk.

This is an example of what Deborah Hellman, professor of law at the University of Virginia, calls an “accuracy-affecting injustice”—the injustice lies in the relative (in)accuracy of the algorithm for Black and white defendants.

Hellman was a guest speaker at the Schwartz Reisman Institute's weekly seminar series on October 21, 2020, and her talk, "Big data and compounding injustice," drew attention to a different form of injustice that she calls "non-accuracy-affecting injustice." These are cases where it is unjust to base a decision on certain kinds of evidence even when that evidence is entirely accurate.

Hellman gives the example of a woman who has been subjected to domestic abuse and is now looking to buy life insurance. Domestic abuse statistically increases her chances of dying (even more so for those who leave their abusive partner), so the life insurance company has an incentive to charge her a higher rate to reflect the increased risk of having to pay out. It seems unjust, however, for victims of abuse to be charged higher rates. And this is not because the insurance companies are biased or using stereotypes—the correlation is a real one, and so the higher rate is perfectly "accurate."

But it still seems unjust. This is the kind of non-accuracy-affecting injustice that Hellman wants to bring to our attention.

To capture our moral intuitions in cases like this, Hellman proposes the "Anti-Compounding Injustice Principle," or ACI. The idea is that if our action would increase the harm someone has suffered as a result of an injustice previously done to them, then this provides a reason to refrain from that action. So, in the case of the woman suffering abuse, the higher life insurance rate would impose a further financial harm on her on the basis of precisely the injustice she has already suffered. That would be a direct case of compounding injustice.

There are also indirect cases, where the injustice itself is not used to justify unfavourable treatment, but a closely related trait caused by the injustice is. As an example, Hellman gives the case of denying a woman a loan not because of her sex or gender, but because of her low income. Her low income, however, is itself a result of being paid less because she is a woman. Denying the loan would be a case of indirectly compounding injustice.

In both kinds of cases, Hellman thinks we have a reason to avoid compounding injustice.

Relating her analysis directly to the Schwartz Reisman Institute's work to ensure powerful technologies like artificial intelligence are safe, fair, and ethical, Hellman noted that both humans and algorithms can compound injustice.

However, the vastly increased use of large amounts of data (so-called “big data”) in algorithmic decision-making makes it possible to compound injustice more broadly than ever before. With advanced data analysis tools like machine learning, “what is new is the scope of these problems,” says Hellman.

“It's not that it wasn’t a moral problem before, but it becomes one that is more insistent now.”

The effects of injustice are pervasive, and as a result much of the data incorporated into decision-making will itself be the product of past injustices. Most importantly: since these are not cases of inaccurate predictions, the work by computer scientists to improve the accuracy of algorithms will not provide a solution.

So what can we do about this? Hellman doesn’t think we should never compound injustice—in some cases, the reasons to do so will outweigh the reasons not to. Even in these cases, however, recognizing our reasons for avoiding the compounding of injustice can still help shape our policy and research—by, for example, promoting the search for traits that are not influenced by past injustices for use in algorithmic prediction. Hellman wants those working to build fairer AI to look past accuracy and recognize other ways that the use of big data can perpetuate injustice against historically disadvantaged groups. 
