Research
Integrating research across traditional boundaries to deepen our knowledge of technologies, societies, and what it means to be human.
Research Overview
Our Research Stream is developing new methods for exploring the ways technology, systems, and society interact.
The Schwartz Reisman Institute supports ground-breaking interdisciplinary research on the social impacts of advanced technologies by bringing together scholars from STEM, social sciences, and humanities fields to spark new conversations, ideas, and approaches. Our community comprises 150+ researchers from the University of Toronto, spanning 34 faculties and departments and 20 academic disciplines.
Spanning diverse areas of inquiry—from safe and aligned AI development, fairness in machine learning, trusted data sharing, and AI for social good to legal design, systems of governance, ethics, education, and human rights—our research agenda crosses traditional boundaries, driven by a commitment to rethinking these systems from the ground up.
We seek to increase the range and depth of interdisciplinary research in the AI and society ecosystem at the University of Toronto and beyond through ongoing initiatives such as our seminars, discussion groups, conferences, and workshops. We support scholars across disciplines to foster new research programs, and connect cross-disciplinary research teams to grants and philanthropy initiatives.
We foster and advance new fields of research to deepen our understanding of how powerful technologies are reshaping our world. This includes developing novel areas of research, promoting new collaborations, and helping to grow a community of diverse, globally-engaged scholars.
Our work contributes to new forms of governance for powerful new technologies by integrating technical solutions and social analyses with legal and regulatory frameworks. Our focus is to develop democratic, agile, and efficient governance that works at a global scale, addressing growing gaps between the public and private sectors, and aligning technology with human agency and values.
Research Initiatives
Building safe and secure AI systems
SRI Director David Lie, alongside 18 collaborators—including five SRI researchers—has secured $5.6 million in grants from NSERC and CSE to tackle critical AI safety challenges. Their project, "An End-to-End Approach to Safe and Secure AI Systems," aims to develop methods for training AI models in low-data environments, ensure AI robustness and fairness, and establish compliance guidelines. The interdisciplinary team, hailing from four Canadian universities, will address diverse aspects of the AI pipeline, from data security to formal verification. This grant marks a significant step for AI safety research in Canada, reflecting growing concerns about AI's potential risks.
Embedded Ethics Education Initiative (E3I)
A collaboration between the University of Toronto’s Department of Computer Science and the Schwartz Reisman Institute for Technology and Society, in association with the Department of Philosophy, the Embedded Ethics Education Initiative (E3I) is a teaching and learning venture that embeds paired ethics-technology education modules into undergraduate computer science courses. As challenges such as AI safety, data privacy, and misinformation become increasingly prevalent, E3I equips students to critically assess the societal impacts of the technologies they will design and develop throughout their careers.
Engaging thousands of students annually, the E3I program will reach every U of T computer science undergraduate by 2025. In recognition of the program’s impact, project leads Sheila McIlraith, Diane Horton, David Liu, and Steven Coyne received the prestigious Northrop Frye Award from U of T’s Alumni Association in 2024. The initiative is currently being piloted in other STEM disciplines, including statistics, ecology and evolutionary biology, and geography.
Global Public Opinion on Artificial Intelligence (GPO-AI)
The Global Public Opinion on Artificial Intelligence (GPO-AI) is a report by the Schwartz Reisman Institute for Technology and Society in collaboration with the Policy, Elections, and Representation Lab (PEARL) at U of T’s Munk School of Global Affairs & Public Policy, led by SRI Associate Director and Munk School and PEARL Director Peter Loewen. Conducted in 2023, the GPO-AI survey examined public perceptions of and attitudes toward AI in 12 languages across 21 countries, gathering more than 23,000 responses. The final report includes commentaries on topics such as ChatGPT, justice, consumer behaviour, and labour, offering insights into global attitudes toward AI and how those attitudes differ across countries and regions.
The full report was released in May 2024, with findings included in the Stanford Institute for Human-Centered Artificial Intelligence (HAI)’s 2024 AI Index Report and presented at SRI’s Absolutely Interdisciplinary 2024 conference.
Working Group on Trust in Human-ML Interaction
Trust is a pivotal concept in interactions between humans and machine learning (ML) systems, influencing everything from user adoption to the societal impact of emerging technologies. Do we trust ML systems, and how can they earn and maintain our trust?
These questions lie at the heart of an interdisciplinary working group convened by SRI Research Lead Beth Coleman, which is developing a comprehensive overview of how trust is conceptualized and operationalized across disciplines. The project seeks to identify new approaches to understanding trust and establish a deeper understanding of the role of trust in our interactions with technology.
Designing normative AI systems
To be effective and fair decision-makers, machine learning systems need to make normative decisions much like human beings do. In her recent research, Schwartz Reisman Inaugural Chair Gillian Hadfield describes how decisions made by ML models can be improved by labelling data that explicitly reflects value judgments.
Through this work, Hadfield and her collaborators demonstrate that reasoning about human norms is qualitatively different from reasoning centred only on factual information. Developing AI systems that are adequately sensitive to social norms will therefore require more than the curation of impartial data sets; it must also include careful consideration of the types of judgements we want such systems to reproduce.
Research Programming
Schwartz Reisman Institute Fellowships Program
The Schwartz Reisman Institute supports innovative research at the University of Toronto through its Faculty and Graduate Fellowships programs, which are awarded to projects exploring the social impacts of advanced technologies through interdisciplinary approaches. Since 2020, the Institute has awarded more than 100 fellowships with a total value of more than $1M. In 2023, SRI further expanded its fellowships program to support researchers beyond U of T through the creation of a new Scholar-in-Residence position.
SRI Seminar Series
The SRI Seminar Series provides a unique opportunity to explore cutting-edge research from a wide range of fields on how the latest developments in artificial intelligence and data-driven systems are impacting society. From ethics to engineering, law, computer science, and more, SRI Seminars offer diverse perspectives on new ideas and approaches, building bridges across disciplines.
Since 2020, the series has established itself as a key convenor for the Schwartz Reisman Institute community and beyond. Past guests include Luciano Floridi (Yale Digital Ethics Center), Beth Simone Noveck (Northeastern University), Iason Gabriel (Google DeepMind), Barbara Grosz (Harvard University), Jon Kleinberg (Cornell University), Arvind Narayanan (Princeton University), Priya Donti (MIT), Seth Lazar (ANU), Shannon Vallor (University of Edinburgh), and Sanmi Koyejo (Stanford University). Recordings are posted to SRI’s YouTube channel, serving as a valuable resource for researchers, educators, and students.
SRI Discussion Groups
The Schwartz Reisman Institute’s research community hosts several discussion groups on key focus topics, including:
AI Safety, led by Sheila McIlraith, Toryn Klassen, and Michael Zhang
Privacy, led by Lisa Austin and Michael Beauvais
Cybersecurity, led by David Lie
Periscope Lab, led by Karina Vold