
SRI Seminar Series: Nicolas Papernot, “What does it mean for machine learning to be trustworthy?”

Our weekly seminar series welcomes Schwartz Reisman Faculty Affiliate Nicolas Papernot, assistant professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. Papernot’s research explores the intersection of security, privacy, and machine learning (ML).

Talk title

“What does it mean for machine learning to be trustworthy?”

Abstract

The attack surface of machine learning (ML) is large: training data can be poisoned, predictions can be manipulated using adversarial examples, models can be exploited to reveal sensitive information contained in training data, and so on. This is in large part due to the absence of security and privacy considerations in the design of ML algorithms.

Yet, adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy. Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires that we have a solid understanding of what we expect legitimate model behavior to look like.

We structure our discussion around two directions, which we believe are likely to lead to significant progress. The first encompasses a spectrum of approaches to verification and admission control, which is a prerequisite to enable fail-safe defaults in machine learning systems. The second pursues formal frameworks for security and privacy in machine learning, which we argue should strive to align machine learning goals such as generalization with security and privacy desiderata like robustness or privacy. We illustrate these directions with recent work on adversarial examples, privacy-preserving ML, machine unlearning, and deepfakes.


About Nicolas Papernot

Nicolas Papernot is an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. He is also a faculty member at the Vector Institute where he holds a Canada CIFAR AI Chair, and a faculty affiliate at the Schwartz Reisman Institute.

Papernot’s research interests are at the intersection of security, privacy, and machine learning. He earned his Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year working at Google Brain.


About the SRI Seminar Series

The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Each seminar features a leading or emerging scholar and includes extensive discussion.

Each week, a featured speaker will present for 45 minutes, followed by 45 minutes of discussion. Registered attendees will be emailed a Zoom link approximately one hour before the event begins. The event will be recorded and posted online.


Previous
December 9

SRI Seminar Series: Travis LaCroix, “The tragedy of the AI commons”

Next
January 27

SRI Seminar Series: Avery Slater, “Latent traits: AI and the new psychometrics”