Mitigating bias in algorithmic decision-making calls for an interdisciplinary effort

 



Whether you’re applying for a new job using an online form, commenting on a developing news story on social media, or simply checking your email, chances are that you interact with automated decision-making systems as part of your daily routine. To examine the societal impacts of this trend, scholars convened at the Schwartz Reisman Institute for Technology and Society’s inaugural conference, Absolutely Interdisciplinary 2021, where one session focused on the role of machine learning algorithms.

Machine learning (ML) is increasingly responsible for producing automated decisions through a process of “training” on datasets comprising decisions previously made by humans. While ML offers the promise of scale and efficiency, it runs the risk of codifying structural societal bias, such as racism and sexism, in its decisions. As discussed in the conference session “Fairness in Machine Learning,” understanding and mitigating this risk will require a team effort from scholars across many disciplines.

Consider the application of ML to hiring, where an ML decision rule—referred to colloquially as a “model”—might be used by a company to filter a pool of initial job candidates before interviews are scheduled and conducted. On the one hand, use of the ML model could allow a company to consider more applicants, potentially increasing the diversity of the applicant pool. On the other hand, if the model filters out applicants based on proxies for sex or race, it could decrease diversity, and even become a legal liability for the company.

Machine learning techniques have recently made significant progress on perceptual tasks such as image classification and speech recognition, with state-of-the-art models achieving superhuman accuracy in some cases. But automating human judgement, as in the hiring example, is not the same as automating human perception. Not only is evaluating a job candidate a subjective exercise—as the candidate’s previous experience is compared against the needs and values of the company—but the company must also comply with anti-discrimination laws throughout its hiring process.

How can machine learning better integrate fairness principles?

With this type of compliance in mind, the ML research community has recently proposed various approaches for learning “fair” models. Statistical fairness approaches seek to constrain a model’s decisions according to a “fairness criterion.” For example, a simple parity criterion (in the spirit of what the literature calls “demographic parity”) could require that, after the automated screening process, the remaining applicant pool contains the same number of men as women.

As Moritz Hardt, an assistant professor in electrical engineering and computer science at UC Berkeley, explained, early research in fair ML resulted in a plethora of such fairness criteria, many of which are fundamentally incompatible with each other. An alternative fairness criterion for the hiring problem would be to attain parity between men and women among those screened out of the hiring process by the algorithm. If more women apply than men, it is impossible to find a model that satisfies both notions of “fairness” at once: each group’s selected and rejected applicants must add up to its total number of applicants, so equal numbers selected and equal numbers rejected would force the two applicant pools to be the same size.
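As a minimal sketch of this incompatibility, using hypothetical applicant counts rather than anything presented in the session, the following Python snippet searches for a screening outcome that selects equal numbers of men and women while also rejecting equal numbers of each, and finds none when the two pools differ in size:

```python
from itertools import product

# Hypothetical applicant pools (illustrative numbers only).
women_applicants = 120
men_applicants = 80

feasible_outcomes = []
for women_selected, men_selected in product(range(women_applicants + 1),
                                            range(men_applicants + 1)):
    women_rejected = women_applicants - women_selected
    men_rejected = men_applicants - men_selected
    # Check both parity criteria: equal numbers selected AND equal numbers rejected.
    if women_selected == men_selected and women_rejected == men_rejected:
        feasible_outcomes.append((women_selected, men_selected))

# Selected + rejected must equal each group's total, so satisfying both
# criteria at once would force the two pools to be the same size.
print(feasible_outcomes)  # -> [] whenever the pools differ in size
```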

Fair ML saw an explosion of interest in the 2010s, as researchers published extensively on technical fairness criteria and their incompatibilities, with the hope that a deeper technical understanding would contribute to the fight against algorithmic discrimination. However, a lack of consensus in the technical literature was exploited by companies seeking to avoid public scrutiny of their AI products. As research from Hardt’s lab shows, even unconstrained ML algorithms can be interpreted as satisfying a specific kind of fairness notion, one closely related to the model’s calibration properties (whether its predicted probabilities match the rates at which the predicted outcomes actually occur).

The use of fair ML rhetoric to distract from legitimate public scrutiny is sometimes called “fairwashing” (or “AI-ethics washing”). Hardt calls the over-prioritization of technical insights by ML researchers an “unforced own goal,” one that allowed companies to fairwash by building a plausible deniability argument against claims of algorithmic discrimination. By cherry-picking a fairness objective that their model happens to satisfy (e.g. calibration), companies can dissuade critics who point out that the model fails to satisfy some other notion (e.g. demographic parity).
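To make this cherry-picking concrete, here is a minimal sketch on synthetic data (hypothetical numbers, not the COMPAS model or any real product) in which a classifier is well calibrated within each group yet selects the two groups at very different rates, so a defender can point to the calibration while a critic points to the parity violation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base rates of the positive outcome in two groups.
base_rates = {"group_a": 0.6, "group_b": 0.3}
n = 10_000

for group, p in base_rates.items():
    outcomes = rng.random(n) < p   # simulated ground-truth outcomes
    scores = np.full(n, p)         # model scores everyone at the group's base rate

    observed_rate = outcomes.mean()          # calibration: observed rate ≈ predicted score p
    selection_rate = (scores >= 0.5).mean()  # thresholding at 0.5 selects all of
                                             # group_a and none of group_b

    print(f"{group}: predicted {p:.2f}, observed {observed_rate:.2f}, "
          f"selected {selection_rate:.2f}")
```

Both observations about such a model are true at once; the disagreement is over which criterion should matter for the application at hand.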

This cherry-picking was precisely the strategy used by Northpointe, Inc. to defend its much-maligned COMPAS “risk assessment” tool, which courts have used to inform pre-trial and sentencing decisions about criminal defendants. In response to demonstrations that the tool routinely scored black defendants as higher risk than white defendants, Northpointe argued that it should in fact be considered “fair” due to its favorable calibration properties.

 

Moritz Hardt (left) and Deirdre Mulligan (right) presented at Absolutely Interdisciplinary’s “Fairness in Machine Learning” session.

 

How can work on algorithmic discrimination avoid the traps of fairwashing? Deirdre Mulligan, a professor in the School of Information at UC Berkeley, noted the current tendency for “engineering logics” to dominate discussions around measuring and regulating algorithmic harms, a focus that sometimes displaces ethical or legal concerns. Designing equitable algorithms, she argued, requires adopting multidisciplinary critiques early on, rather than applying purely technical fixes to existing methods.

Risk assessment tools like COMPAS are used, in part, to predict whether defendants will show up for trial. But is this really the type of decision we want to automate? Policy interventions such as providing social support for defendants can help address the underlying social issue: people failing to show up for court. In fact, a successful lawsuit by activists and lawyers in Houston’s Harris County won free social and transportation services for poor defendants in misdemeanor cases.

Those gathered at the Absolutely Interdisciplinary session acknowledged that while ML clearly plays a central role in modern AI technologies, the path towards developing and deploying equitable algorithms should not consist of merely applying “fairness” fixes to current ML methods. Instead, insights and critiques from a wide range of disciplines beyond computer science will be needed at every step.



About the author

Elliot Creager is a PhD Candidate at the University of Toronto and the Vector Institute, and a graduate fellow at the Schwartz Reisman Institute for Technology and Society. He works on a variety of topics within machine learning, especially in the areas of algorithmic bias, robustness, and representation learning. He was previously an intern and student researcher at Google Brain in Toronto.

