Research

Integrating research across traditional boundaries to deepen our knowledge of technologies, societies, and what it means to be human.

Research Overview

Our Research Stream is developing new methods for exploring the ways technology, systems, and society interact.

The Schwartz Reisman Institute supports ground-breaking interdisciplinary research on the social impacts of advanced technologies by bringing together scholars from STEM, social sciences, and humanities fields to spark new conversations, ideas, and approaches. Our community comprises 150+ researchers from the University of Toronto, encompassing 34 different faculties and departments and 20 unique academic disciplines.

Our research agenda spans diverse areas of inquiry, from safe and aligned AI development, fairness in machine learning, trusted data sharing, and AI for social good to legal design, systems of governance, ethics, education, and human rights. It crosses traditional boundaries and is grounded in a commitment to rethinking established approaches from the ground up.

We seek to increase the range and depth of interdisciplinary research in the AI and society ecosystem at the University of Toronto and beyond through ongoing initiatives such as our seminars, discussion groups, conferences, and workshops. We support scholars across disciplines to foster new research programs, and connect cross-disciplinary research teams to grants and philanthropy initiatives.

We foster and advance new fields of research to deepen our understanding of how powerful technologies are reshaping our world. This includes developing novel areas of research, promoting new collaborations, and helping to grow a community of diverse, globally-engaged scholars.

Our work contributes to new forms of governance for powerful new technologies by integrating technical solutions and social analyses with legal and regulatory frameworks. Our focus is to develop democratic, agile, and efficient governance that works at a global scale, addressing growing gaps between the public and private sectors, and aligning technology with human agency and values.

 

David Lie is developing techniques to ensure AI models are robust, fair, and interpretable, and establishing guidelines for AI use to ensure regulatory compliance.

Sheila McIlraith’s research explores how to build safe and human-compatible AI systems, and she co-leads the Embedded Ethics Education Initiative.

Beth Coleman is exploring how trust influences our interactions with AI systems, and developing frameworks that can be integrated into new forms of policy design.

Karina Vold has been recognized with an AI2050 Early Career Fellowship from Schmidt Futures for her research on the social and ethical implications of AI technologies.

 
Explore our research community

Research Initiatives

 

An End-to-End Approach to Safe and Secure AI Systems

SRI Director David Lie, together with 18 collaborators—including five SRI researchers—leads a $5.6 million NSERC–CSE–funded initiative advancing critical AI safety research. Their project, “An End-to-End Approach to Safe and Secure AI Systems,” develops methods for training AI models in low-data environments, improving robustness and fairness, and establishing practical compliance frameworks. Bringing together expertise from four Canadian universities, the team addresses safety challenges across the entire AI pipeline—from data integrity to formal verification.

Learn more

Embedded Ethics Education Initiative

A collaboration between the University of Toronto’s Department of Computer Science and the Schwartz Reisman Institute for Technology and Society, in association with the Department of Philosophy, the Embedded Ethics Education Initiative (E3I) is a teaching and learning venture that embeds paired ethics-technology education modules into undergraduate computer science courses. As challenges such as AI safety, data privacy, and misinformation become increasingly prevalent, E3I equips students to critically assess the societal impacts of the technologies they will design and develop throughout their careers. Engaging thousands of students annually, the E3I program reaches every U of T computer science undergraduate and is currently being piloted in other STEM disciplines, including statistics, ecology and evolutionary biology, and geography.

In recognition of the program’s impact, project leads Sheila McIlraith, Diane Horton, David Liu, and Steven Coyne received the prestigious Northrop Frye Award from U of T’s Alumni Association in 2024. In 2025, the E3I team was honoured with the national D2L Innovation Award in Teaching and Learning, in recognition of its groundbreaking approach to integrating ethics into computer science education.

Learn more

AI & Trust Working Group

The AI & Trust Working Group is a multinational, transdisciplinary initiative led by SRI Research Lead Beth Coleman and convened by the Schwartz Reisman Institute for Technology and Society. The group examines how trust in AI is formed, challenged, and sustained across social, technical, and institutional contexts. Bringing together 70 participants from academia, industry, government, and civil society, it explores how trust operates in human–AI interactions, how institutions demonstrate trustworthiness, and how cultural and political contexts shape public confidence in emerging technologies.

Through monthly hybrid meetings and collaborative research across three thematic clusters—governance, law and rights, and community experience—the working group develops shared frameworks, case studies, and policy insights that address the multifaceted nature of trust. Its work aims to clarify what trustworthy AI entails, support institutions and policymakers in designing accountable systems, and advance a more holistic understanding of how AI can serve democratic values and societal wellbeing.

Learn more

Post-AGI Workshop

The Post-AGI Workshop brings together leading researchers to explore how society might evolve after the development of transformative, controllable AGI. Led by Schwartz Reisman Chair David Duvenaud, the workshop series examines how advanced AI could reshape economic structures, cultural life, and global governance. With speakers from institutions including Stanford, Google DeepMind, Oxford, and the University of Toronto, the event fosters interdisciplinary collaboration across machine learning, political science, economics, philosophy, and history. It aims to deepen understanding of post-AGI futures and nurture a growing research community focused on long-term societal impacts of advanced AI.

The workshop was first presented in July 2025 at ICML in Vancouver, with a follow-up in December 2025 at NeurIPS in San Diego. Session recordings from the first workshop are available to watch online.

Learn more

Global Public Opinion on Artificial Intelligence

The Global Public Opinion on Artificial Intelligence (GPO-AI) is a report by the Schwartz Reisman Institute for Technology and Society in collaboration with the Policy, Elections, and Representation Lab (PEARL) at U of T’s Munk School of Global Affairs & Public Policy, led by SRI Associate Director and Munk School and PEARL Director Peter Loewen. The report examines public perceptions of and attitudes toward AI, drawing on a survey conducted in 12 languages across 21 countries that gathered more than 23,000 responses. The final report includes commentaries on topics such as consumer use of AI systems like ChatGPT, questions of justice, consumer behaviour, and labour, along with broad insights on global attitudes toward AI and how these attitudes differ across countries and regions.

The full report was released in May 2024, with findings included in the Stanford Institute for Human-Centered Artificial Intelligence (HAI)’s 2024 AI Index Report and presented at SRI’s Absolutely Interdisciplinary 2024 conference.

Learn more

Designing normative AI systems

To be effective and fair decision-makers, machine learning systems need to make normative decisions much like human beings do. Inaugural Schwartz Reisman Chair in Technology and Society Gillian Hadfield describes how decisions made by ML models can be improved by labelling training data in ways that explicitly reflect value judgments.

Hadfield and her collaborators demonstrate that reasoning about human norms is qualitatively different from reasoning centred only on factual information. Developing AI systems that are adequately sensitive to social norms will therefore require more than the curation of impartial data sets; it must also include careful consideration of the types of judgements we want such systems to reproduce.

Learn more
 
 

Research Programming

Schwartz Reisman Institute Fellowships Program

The Schwartz Reisman Institute supports innovative research at the University of Toronto through its Graduate Fellowships program, which awards fellowships to projects exploring the social impacts of advanced technologies through interdisciplinary approaches. Since 2020, the Institute has awarded more than 100 fellowships with a total value of more than $1M.

Learn more

SRI Seminar Series

The SRI Seminar Series provides a unique opportunity to explore cutting-edge research from a wide range of fields on how the latest developments in artificial intelligence and data-driven systems are impacting society. From ethics to engineering, law, computer science, and more, SRI Seminars offer a wide range of perspectives on new ideas and approaches, building bridges across disciplines.

Since 2020, the series has established itself as a key convenor for the SRI community and beyond. Guests include Luciano Floridi (Yale), Cynthia Dwork (Harvard), Beth Simone Noveck (Northeastern), Iason Gabriel (Google DeepMind), Barbara Grosz (Harvard), Jon Kleinberg (Cornell), Arvind Narayanan (Princeton), Seth Lazar (ANU), Shannon Vallor (U. Edinburgh), and Sanmi Koyejo (Stanford). Recordings are posted to SRI’s YouTube channel, serving as a valuable resource for researchers, educators, and students.

Learn more

SRI Discussion Groups

The Schwartz Reisman Institute’s research community hosts several discussion groups on key focus topics, including:

  • AI Safety, led by Sheila McIlraith and Toryn Klassen

  • Privacy, led by Lisa Austin, Madison Mackley, and Wenjun Qiu

  • Cybersecurity Lab, led by David Lie

  • Periscope Lab, led by Karina Vold

Learn more
 
 

Latest Research Stories

READ MORE RESEARCH STORIES