AI & Trust Working Group

SRI’s AI & Trust Working Group is a transdisciplinary initiative exploring how societies build, sustain, and negotiate trust in AI systems. Bringing together stakeholders from academia, policy, industry, and civil society, the group examines the social, technical, and institutional conditions that shape trustworthy AI and influence public confidence, accountability, and democratic oversight.

Jump to:

Overview / Why trust? / About the working group / Project lead / Research clusters / Working group members / Workshop

Overview

SRI’s AI & Trust Working Group is a multinational, transdisciplinary research initiative focused on critically examining the role of trust and trustworthiness in artificial intelligence (AI) systems. 

Led by SRI Research Lead Beth Coleman and convened by the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto, the working group will explore how trust functions, and how it is cultivated, challenged, or eroded, in human–AI interactions and across institutional and societal scales.

The initiative brings together a distinguished global cohort of scholars, practitioners, policymakers, and civil society leaders, alongside doctoral and postdoctoral researchers, to examine one of the most urgent questions in the global AI landscape:

As AI technologies reshape our societies, how can we build systems that earn and maintain human trust while preserving agency, accountability, and democratic legitimacy?

Through collaborative research, shared frameworks, and cross-sector dialogue, the AI & Trust Working Group aims to generate actionable insights that can inform both policy and practice. Its work will support a deeper understanding of how trust is formed and sustained in rapidly evolving technological environments, and will help chart pathways toward more transparent, accountable, and human-centred AI systems.

Why trust?

Across sectors—from health and education to labor markets, public administration, and justice systems—AI now shapes decisions with profound implications for people’s lives. Yet despite rapid advances in technical safeguards and the emergence of new regulatory frameworks, societies still lack a clear, shared understanding of what trust in AI actually means, how it is formed, and how it can be sustained across cultural, institutional, and geopolitical contexts.

As AI systems are adopted at unprecedented speed, trust becomes a paramount concern. Trust is a multifactorial phenomenon that intersects with questions of safety, ethics, information integrity, human–computer interaction, and broader societal expectations. It cannot be reduced to technical performance alone; rather, it emerges through relationships between people, institutions, and technologies.

Trust in AI reflects:

  • Public confidence in the institutions that deploy and oversee AI

  • Cultural and historical experiences with technology and authority

  • Governance structures that define responsibility and accountability

  • Expectations around fairness, transparency, and competence

  • Lived experience, power dynamics, and the contexts in which AI is used

Understanding trust as an ongoing negotiation—continuously built, tested, and recalibrated—is essential for designing systems that genuinely support societal well-being.

SRI’s AI & Trust Working Group addresses this need by examining trust as a sociotechnical, relational, and evolving phenomenon. The goal is not only to interrogate what trust and trustworthiness mean in complex AI systems, but to develop actionable frameworks that can guide institutions, policymakers, and practitioners toward more accountable and reliable AI deployment.

About the AI & Trust Working Group

Led by SRI Research Lead Beth Coleman, associate professor of data and cities at the University of Toronto’s Institute of Communication, Culture, Information and Technology and Faculty of Information, the AI & Trust Working Group creates a collaborative intellectual space where participants:

  • Analyze how trust operates in human–AI relationships

  • Explore trustworthiness as both a technical and institutional property

  • Develop practical mechanisms for more accountable and transparent AI systems

  • Consider trust across cultures, political systems, and high-stakes contexts

  • Examine how values—personal, professional, organizational, platform, and political—shape trust relationships

The initiative centers a methodological approach that encourages participants to interrogate trust not as a given but as a negotiated, dynamic process.

From September 2025 to May 2026, the working group meets monthly in hybrid format, culminating in a one-day in-person workshop that convenes participants for collective insight, case study exploration, and agenda setting. Monthly sessions feature research presentations and collaborative exchanges exploring cross-cutting themes such as: how trust is built, eroded, or reshaped in human–AI interactions; institutional mechanisms for transparency, accountability, and legitimacy; trust in cross-cultural and power-imbalanced contexts; alignment, authority, and control across different scales of AI use; and the relationship between trustworthy systems and trustworthy institutions.

Participants will work toward creating case studies, conceptual frameworks, policy and governance insights, co-authored publications, and a public-facing report synthesizing insights to inform policymakers, practitioners, and researchers working at the intersection of AI and society.

The goals of the AI & Trust Working Group are to shape international conversations on trust and AI, build long-term networks for future collaboration, support policymakers and institutions as they design trustworthy AI governance, and contribute frameworks that reflect cultural, institutional, and societal diversity.

By centering trust as a social and institutional relationship, not just a technical problem, the initiative seeks to advance a more holistic understanding of how AI can serve democratic and public-interest values.

Project lead

Beth Coleman

Associate Professor, Institute of Communication, Culture, Information and Technology and Faculty of Information, University of Toronto

Director, Knowledge Media Design Institute

Research Lead, Schwartz Reisman Institute for Technology and Society

Beth Coleman is a Professor of Data & Cities at the University of Toronto’s Institute of Communication, Culture, Information and Technology and Faculty of Information. She is director of the Knowledge Media Design Institute and a research lead on AI policy and praxis at the Schwartz Reisman Institute for Technology and Society, where she runs the AI & Trust international working group. Coleman works in the disciplines of science and technology studies and generative aesthetics with research foci on artificial intelligence, smart technology, urban data and civic engagement, and transmedia arts. She is the author of Hello Avatar (MIT Press, 2011) and Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds (K. Verlag Press, 2023). She was a 2023–24 Google Brain and Responsible AI senior visiting researcher and a 2021 Google Artists and Machine Intelligence awardee. Her research affiliations have included the Berkman Klein Center for Internet & Society, Harvard University; Microsoft Research New England; and the Data & Society Research Institute, New York; she has also served as an expert consultant for the European Commission Digital Futures initiative. She is currently working on a monograph, AI in the World: Perils and Possibilities of a General Purpose Technology.

Research clusters

The working group consists of 70 members across academia, industry, government, and civil society selected for their expertise in AI ethics, law and policy, human–computer interaction, engineering, design, political science, science and technology studies, philosophy, and related fields. The group has representation from Canada, the United States, Mexico, South America, Europe, Africa, and Australia. Participants range from leading senior scholars to practitioners from industry, government, and policy contexts, as well as graduate students.

Given the breadth and complexity of trust in AI, the working group is organized into three research clusters. This structure creates focused environments for deep inquiry while enabling cross-cluster exchange on shared challenges. Each cluster brings together diverse expertise to examine trust from complementary angles—governance, law and rights, and community experience—reflecting the multifaceted and sociotechnical nature of trust itself. The clusters allow participants to work at different scales of analysis and practice, from institutional design and regulatory frameworks to lived experience and societal wellbeing, ensuring that insights remain grounded, interdisciplinary, and actionable.

Cluster 1: AI Governance, Policy, and Institutional Design

This cluster focuses on the systems and structures that govern how AI technologies are developed, deployed, and overseen. Members will examine public, private, and hybrid governance mechanisms—from regulatory frameworks and standards to institutional interventions—that aim to ensure accountability, transparency, and democratic legitimacy. The work emphasizes the socio-technical dimensions of policy and design, exploring how organizational and technical decisions co-shape outcomes in practice.

Cluster 2: Law, Rights, and Justice in AI Systems

This cluster investigates how AI technologies interact with existing legal, ethical, and social frameworks, and how those frameworks may need to evolve. Members will explore questions of fairness, responsibility, and legitimacy through both legal and technical lenses, focusing on how power operates within socio-technical systems. By integrating participants from legal, philosophical, and technical backgrounds, the cluster aims to bridge conceptual analysis with practical governance challenges.

Cluster 3: Communities, Wellbeing, and Societal Trust

This cluster addresses how AI systems shape and are shaped by lived experience, collective wellbeing, and public trust. Topics span health, education, and civic life, as well as how AI influences social cohesion, equity, and inclusion at the community level. Members will consider how local contexts, institutional practices, and cultural narratives contribute to broader patterns of societal trust in AI.

Working group members

  • Cluster 1 members:

    • Fernanda Buril Almeida, Deputy Director, Center for Applied Research and Learning, International Foundation for Electoral Systems

    • Mark Daley, Professor, Centre for Brain and Mind, Western University

    • Julian Granka Ferguson, Director, AI & Ethics Governance, Scotiabank

    • Laura Fichtner, Postdoctoral Researcher, Institute for Trustworthy AI in Law and Society, University of Maryland

    • Vanessa Gathecha, Curator, Edgelands Institute

    • Shayan Koeksal, PhD Candidate, Department of Philosophy, Stanford University 

    • Alyssa Lefaivre Škopac, Director, AI Trust and Safety, Alberta Machine Intelligence Institute 

    • Ankit Mishra, Founder and Principal Consultant, AM Consulting Group

    • Kasra Rafi, Professor, Department of Mathematics, University of Toronto 

    • Safwan Zahid, Master's Student, Munk School of Global Affairs and Public Policy & Rotman School of Management, University of Toronto

    Members at-large:

    • Joy Belgassem, Senior Innovation Developer, AI Competence Center (AI-CC) for Public Administration, Bundesdruckerei's Innovation Department

    • Michael Brent, Director - Responsible AI, Boston Consulting Group

    • Jin Sol Kim, Postdoctoral Fellow, English Language and Literature, University of Waterloo

    • Elisha Lim, Assistant Professor, Faculty of Liberal Arts & Professional Studies, York University

    • Shingai Manjengwa, Senior Director, Education and Development, Talent & Ecosystem, Mila - Quebec Artificial Intelligence Institute

    • Fenwick McKelvey, Associate Professor, Department of Communication Studies, Concordia University

    • Stephanie Oldfield, Director, Digital and Data Policy, Government of Ontario

    • Anna Romandash, Fellow, Centre for International Governance Innovation

    • Jonathan Smith, Staff Machine Learning Engineer, Meta

    • Sriram Ganapathi Subramanian, Assistant Professor, School of Computer Science, Carleton University

    • Remziye Zaim, Postdoctoral Researcher in Responsible AI Governance in Health, Dalla Lana School of Public Health, University of Toronto

    • Yolanda Zhang, An Wang Postdoctoral Fellow, Fairbank Center for Chinese Studies, Harvard University

  • Cluster 2 members:

    • Ashton Black, PhD Student, Department of Philosophy, York University

    • Joshua Brecka, PhD Candidate, Department of Philosophy, University of Toronto

    • Gabriela Mazorra de Cos, Workstream Lead (ex-Council member & Chair of AI Governance Working Group), AI Forum New Zealand

    • Alicia Demanuele, Policy Researcher, Schwartz Reisman Institute for Technology and Society, University of Toronto

    • Jake Okechukwu Effoduh, Assistant Professor, Lincoln Alexander School of Law, Toronto Metropolitan University

    • Sam Hill, Senior Consultant, Design Research eXplorations team, German Research Center for AI (DFKI)

    • Levin Karg, Head of Modernizing Regulation, Ontario Securities Commission

    • Joshua Krook, Research Fellow, Responsible AI UK, University of Southampton

    • Kevin Armand Laurent, Pre-PhD Candidate, Université Paris

    • Kamil Mamak, Associate Professor, Department of Criminal Law, Jagiellonian University

    • Berenice Fernández Nieto, PhD Student, IMT School for Advanced Studies Lucca

    • Bruce Schneier, Visiting Fellow, Munk School of Global Affairs & Public Policy, University of Toronto

    • Sana Shams, Undergraduate Student, University of British Columbia

    • Sharifa Sultana, Assistant Professor, Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign


    Members at-large:

    • Frédérique Horwood, Senior Counsel, Privacy and AI Regulation, Cohere

    • Nanda Min Htin, LLM Graduate, Georgetown Law

    • Danil Kerimi, Senior Advisor, Tech Envoy on Innovative Financing for AI Capacity Building, United Nations 

    • Tina Lassiter, PhD Candidate, School of Information, University of Texas at Austin

    • Beryl Pong, UKRI Future Leaders Fellow, Centre for the Future of Intelligence, University of Cambridge

    • Donato Ricci, Design Research Lead, médialab, SciencesPo

    • Yuan Stevens, Research Affiliate, Data & Society Research Institute

  • Cluster 3 members:

    • Azfar Adib, PhD Candidate, Department of Electrical and Computer Engineering, Concordia University

    • Mai Ali, PhD Candidate, Department of Electrical and Computer Engineering, University of Toronto

    • Abeer Badawi, Postdoctoral Researcher, York University

    • Adeola Bamgboje-Ayodele, Lecturer in Design and Innovation, School of Architecture, Design and Planning, University of Sydney

    • Joseph Donia, Postdoctoral Fellow, University of Milan

    • Stuart Duncan, PhD Candidate, Media and Design Innovation, Toronto Metropolitan University

    • Cristina Getson, PhD Candidate, Department of Mechanical and Industrial Engineering, University of Toronto

    • Kalervo N. Gulson, Professor, Faculty of Arts and Social Sciences, University of Sydney

    • Natalie Gyenes, Director of Research, Practice Lab

    • Brian Harrington, Professor (Teaching Stream), Department of Computer Science, University of Toronto Scarborough

    • Wanheng Hu, Embedded Ethics Fellow, Stanford University

    • Wm. Matthew Kennedy, Marie Sklodowska-Curie Postdoctoral Fellow, Oxford Internet Institute, University of Oxford

    • Bran Knowles, Professor, School of Computing and Communications, Lancaster University

    • Peter Lewis, Associate Professor, Faculty of Business and Information Technology, Ontario Tech University

    • Jacqueline Lu, Urbanist-in-Residence, School of Cities, University of Toronto

    • Amy Bliss McHugh, Associate Lecturer, University of Sydney

    • Anna-Lena Theus, PhD Candidate, School of Journalism and Communication, Carleton University

    • Tracy J. Trothen, Professor, School of Religion, Queen's University

    • Azmine Toushik Wasi, Shahjalal University of Science and Engineering


    Members at-large:

    • Karen Chapple, Professor, Department of Geography & Planning, University of Toronto

    • Zsolt Demetrovics, Matthew Flinders Professor in Mental Health and Wellbeing, Institute for Mental Health and Wellbeing at the College of Education, Psychology and Social Work, Flinders University

    • Barbara Prainsack, Professor, Department of Political Science, University of Vienna

    • Anna Tomko, Master's Graduate, University of Hamburg

Mark Daley

Professor in Computer Science & Chief AI Officer, Western University

Barbara Prainsack

Professor in Political Science, University of Vienna

Adeola Bamgboje-Ayodele

Lecturer in Design, University of Sydney

Shayan Koeksal

PhD Candidate in Philosophy, Stanford University

Frédérique Horwood

Senior Counsel, Privacy and AI Regulation, Cohere

AI & Trust Workshop

March 25, 2026 | Schwartz Reisman Innovation Campus, University of Toronto

Agenda:

8:30 – 9:00 AM | Breakfast and registration

9:00 – 9:15 AM | Opening remarks:

  • Beth Coleman, University of Toronto

  • Gagan Gill, CIFAR

9:15 – 11:00 AM | Keynotes:

  • Mark Daley, Western University

  • Alyssa Lefaivre Škopac, Alberta Machine Intelligence Institute

  • Peter Lewis, Ontario Tech University

  • Maia Fraser, University of Ottawa

11:00 – 11:15 AM | Coffee break

11:15 AM – 12:00 PM | Moderated panel discussion

12:00 – 1:00 PM | Lunch

1:00 – 2:00 PM | Breakout group session #1: Mapping for trust

2:00 – 2:15 PM | Coffee break

2:15 – 3:15 PM | Breakout group session #2: Solutions for trust

3:15 – 4:00 PM | Report-back and synthesis

4:00 – 4:15 PM | Closing remarks

Workshop sponsors:

This research is supported by the Canadian AI Safety Institute Research Program at CIFAR, the Schwartz Reisman Institute for Technology and Society at the University of Toronto, and by a gift from the Plum Foundation.

