SRI Seminar Series: Arvind Narayanan, “Resistance or harm reduction?”

Our weekly SRI Seminar Series welcomes Arvind Narayanan, a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use personal information, and his work was among the first to show how machine learning reflects cultural stereotypes. He is a co-author of Fairness and Machine Learning: Limitations and Opportunities (MIT Press, 2023), and is currently co-authoring a book on AI snake oil.

In this talk, Narayanan will explore how reactions to discrimination and errors in social applications of machine learning tend to take one of two forms: resisting its use, a position often taken by activists, or mitigating its harmful effects, a position that is often the default for researchers. As Narayanan observes, both responses have shortcomings: mitigation risks entrenching unjust technologies, while resistance risks forgoing AI's social benefits. Drawing on case studies of the social impacts of recent technologies, including generative AI, risk prediction, facial recognition, and social media, Narayanan will show why the right response to the shortcomings of AI depends on the context and specifics of each application.

Talk title:

“Resistance or harm reduction?”

Abstract:

Machine learning in social settings is often error-prone and discriminatory. So a recurring question is whether to resist its use or to work on mitigating its harmful effects. Activists tend to default to resistance while researchers default to harm mitigation. But neither approach is always the right answer. Research on harm mitigation risks entrenching and legitimizing intrinsically unjust technologies, while resistance risks missing out on potential benefits to the very populations that activists aim to serve and represent. In this talk, I will argue that the correct course of action is highly dependent on the particulars of the application. I will illustrate this through four case studies: generative AI, risk prediction, face recognition, and social media platforms experimenting on their users.


About Arvind Narayanan

Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He co-authored a textbook on fairness and machine learning, Fairness and Machine Learning: Limitations and Opportunities (forthcoming in print from MIT Press in 2023), and is currently co-authoring a book on AI snake oil.

Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), a two-time recipient of the Privacy Enhancing Technologies Award, and a three-time recipient of the Privacy Papers for Policy Makers Award.


About the SRI Seminar Series

The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Each seminar features a leading or emerging scholar and includes extensive discussion.

Each week, a featured speaker will present for 45 minutes, followed by an open discussion. Registered attendees will be emailed a Zoom link before the event begins. The event will be recorded and posted online.
