SRI Director David Lie and collaborators awarded $5.6 million for cutting-edge research on robust, secure, and safe AI

 

SRI Director David Lie is leading a team of 18 researchers in a new end-to-end analysis of the AI pipeline—from data acquisition and security to model training, privacy protection, and beyond. Photo by Shelby El Otmani. Courtesy of the Berkman Klein Center for Internet & Society’s Institute for Rebooting Social Media, Harvard University.


SRI Director David Lie and 18 collaborators—including five other SRI researchers—will receive $5.6 million in grants over the next four years to develop solutions for critical artificial intelligence (AI) challenges.

The substantial funding, granted by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Communications Security Establishment Canada (CSE), is earmarked for cutting-edge research on robust, secure, and safe AI.

The grant is part of a new partnership established in August 2023 between NSERC and CSE, aimed at bolstering research focused on four topics of strategic importance to CSE and the Government of Canada.

Lie's project, "An End-to-End Approach to Safe and Secure AI Systems," is the first of the four. It aims to create methods for training AI models in situations where reliable data is unavailable, to develop techniques that ensure AI models are robust, fair, and interpretable, and to establish guidelines for AI use that ensure regulatory compliance.

Lie will lead a community of 18 researchers hailing from four Canadian universities: the University of Toronto, Concordia University, the University of Waterloo, and York University. Five of the researchers are affiliated with SRI: Associate Director Sheila McIlraith, Research Lead Nisarg Shah, and Faculty Affiliates Marsha Chechik, Igor Gilitschenski, and Nicolas Papernot.

Lie says that with so many researchers representing diverse areas of expertise, the project is ambitious in both its scope and its content.

“Our work will cover everything from privacy to interpretability to auditing to formal verification to data cleaning and management,” says Lie. “So it's not just everything in AI, but it's the entire AI pipeline—from where you acquire data, to making sure that it hasn't been tampered with, all the way through training, to making sure that the training is protecting privacy.”

Lie says this substantial financial support for research, particularly in the area of AI safety, is very encouraging and provides a unique opportunity in today's context. As AI develops rapidly, experts around the world have increasingly stressed the importance of creating sound mechanisms and institutions to prevent misuse and mitigate risks.

In May 2024, 25 experts, including McIlraith, SRI advisory board member Geoffrey Hinton, and SRI Faculty Affiliates Gillian Hadfield and Tegan Maharaj, published a major collaborative paper in Science ahead of the AI Safety Summit in South Korea. The paper highlighted the world's lack of preparedness for AI risks and called for stronger action on R&D and governance measures.


The paper emphasizes that AI systems can cause harm by eroding social justice and stability, inciting large-scale criminal activity, and facilitating automated warfare. These risks are expected to only amplify as companies work to further develop autonomous AI.

“AI safety research is lagging,” according to the paper. “Humanity is pouring vast resources into making AI systems more powerful but far less into their safety and mitigating their harms. Only an estimated 1 to 3 per cent of AI publications are on safety.”

To Lie, the grant marks a crucial milestone in advancing AI safety research in Canada. 

“Traditionally, CSE does not provide a lot of funds for academic research. So the fact that this program exists and has so many resources for a crucial area says a lot. It's really good for Canada in general,” he says.

Lie takes on SRI leadership in its next phase of growth

Lie is the newly appointed director of SRI. Drawing on a multidisciplinary background that spans computer engineering, data science, law, and policy, he envisions an institute that engages more deeply with AI safety in its next phase while maintaining its interdisciplinary approach.

A professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE) in the Faculty of Applied Science & Engineering at the University of Toronto, Lie is also a Tier 1 Canada Research Chair in Secure and Reliable Systems. He holds cross-appointments in U of T's Department of Computer Science and Faculty of Law, and is a faculty affiliate at the Vector Institute.

His interdisciplinary research stems from his interest in security and privacy on smartphones. Lie's ongoing collaboration with SRI Associate Director Lisa Austin, a professor in U of T's Faculty of Law, has yielded crucial insights at the intersection of technical and legal frameworks.

“We wanted to understand if all these applications you download, which come with a privacy policy, really comply with their own privacy policy,” he said. “To do this, we had to take an interdisciplinary approach because we couldn’t interpret the privacy policies without legal expertise. It was really that collaboration that jump-started this whole thing.”

Lie has extensive experience in collaborative research, having worked with small start-ups and big tech firms like Google, as well as with researchers in public policy and law.

“One thing I bring to SRI is experience and knowledge in how to set up and create interdisciplinary collaboration,” he said.

His research goal is to make computer systems more secure and trustworthy. To achieve this goal, he has employed a variety of approaches, including computer architecture, formal verification, operating systems techniques, and networking. As AI safety becomes increasingly important, Lie's expertise in cybersecurity will be a crucial asset to this field of study.

Lie sees AI safety as one of the areas that especially requires an interdisciplinary approach because the divides are very deep: from the vocabulary and terminology people use, to the ways in which they evaluate positive or negative outcomes, to the ways in which they think about and approach problems. SRI is uniquely positioned to address this challenge.

“We have technical people working literally alongside policy people, social science people, and philosophers,” he says. “You're really just not going to find that anywhere else. And I think it's crucial that we do more of this to solve humanity's large problems—and safety is one of them.”

 
