AI & Trust Working Group

SRI’s AI & Trust Working Group is a transdisciplinary initiative exploring how societies build, sustain, and negotiate trust in AI systems. Bringing together stakeholders from academia, policy, industry, and civil society, the group examines the social, technical, and institutional conditions that shape trustworthy AI and influence public confidence, accountability, and democratic oversight.

Overview

SRI’s AI & Trust Working Group is a multinational, transdisciplinary research initiative focused on critically examining the role of trust and trustworthiness in artificial intelligence (AI) systems. 

Led by SRI Research Lead Beth Coleman and convened by the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto, the working group will explore how trust functions and how it is cultivated, challenged, or eroded in human–AI interactions and across institutional and societal scales.

The initiative brings together a distinguished global cohort of scholars, practitioners, policymakers, and civil society leaders, alongside doctoral and postdoctoral researchers, to examine one of the most urgent questions in the global AI landscape:

As AI technologies reshape our societies, how can we build systems that earn and maintain human trust while preserving agency, accountability, and democratic legitimacy?

Through collaborative research, shared frameworks, and cross-sector dialogue, the AI & Trust Working Group aims to generate actionable insights that can inform both policy and practice. Its work will support a deeper understanding of how trust is formed and sustained in rapidly evolving technological environments, and will help chart pathways toward more transparent, accountable, and human-centred AI systems.

Why trust?

Across sectors—from health and education to labor markets, public administration, and justice systems—AI now shapes decisions with profound implications for people’s lives. Yet despite rapid advances in technical safeguards and the emergence of new regulatory frameworks, societies still lack a clear, shared understanding of what trust in AI actually means, how it is formed, and how it can be sustained across cultural, institutional, and geopolitical contexts.

As AI systems are adopted at unprecedented speed, trust becomes a paramount concern. Trust is a multifactorial phenomenon that intersects with questions of safety, ethics, information integrity, human-computer interaction, and broader societal expectations. It cannot be reduced to technical performance alone; rather, it emerges through relationships between people, institutions, and technologies.

Trust in AI reflects:

  • Public confidence in the institutions that deploy and oversee AI

  • Cultural and historical experiences with technology and authority

  • Governance structures that define responsibility and accountability

  • Expectations around fairness, transparency, and competence

  • Lived experience, power dynamics, and the contexts in which AI is used

Understanding trust as an ongoing negotiation—continuously built, tested, and recalibrated—is essential for designing systems that genuinely support societal well-being.

SRI’s AI & Trust Working Group addresses this need by examining trust as a sociotechnical, relational, and evolving phenomenon. The goal is not only to interrogate what trust and trustworthiness mean in complex AI systems, but to develop actionable frameworks that can guide institutions, policymakers, and practitioners toward more accountable and reliable AI deployment.

About the AI & Trust Working Group

Led by SRI Research Lead Beth Coleman, associate professor of data and cities at the University of Toronto’s Institute of Communication, Culture, Information and Technology and the Faculty of Information, the AI & Trust Working Group creates a collaborative intellectual space where participants:

  • Analyze how trust operates in human–AI relationships

  • Explore trustworthiness as both a technical and institutional property

  • Develop practical mechanisms for more accountable and transparent AI systems

  • Consider trust across cultures, political systems, and high-stakes contexts

  • Examine how values—personal, professional, organizational, platform, and political—shape trust relationships

The initiative centers a methodological approach that encourages participants to interrogate trust not as a given but as a negotiated, dynamic process.

From September 2025 to May 2026, the working group meets monthly in a hybrid format, culminating in a one-day in-person workshop that convenes participants for collective insight, case study exploration, and agenda setting. Monthly sessions feature research presentations and collaborative exchanges on cross-cutting themes, including:

  • How trust is built, eroded, or reshaped in human–AI interactions

  • Institutional mechanisms for transparency, accountability, and legitimacy

  • Trust in cross-cultural and power-imbalanced contexts

  • Alignment, authority, and control across different scales of AI use

  • The relationship between trustworthy systems and trustworthy institutions

Participants will work toward creating case studies, conceptual frameworks, policy and governance insights, co-authored publications, and a public-facing report synthesizing insights to inform policymakers, practitioners, and researchers working at the intersection of AI and society.

The goals of the AI & Trust Working Group are to shape international conversations on trust and AI, build long-term networks for future collaboration, support policymakers and institutions as they design trustworthy AI governance, and contribute frameworks that reflect cultural, institutional, and societal diversity.

By centering trust as a social and institutional relationship, not just a technical problem, the initiative seeks to advance a more holistic understanding of how AI can serve democratic and public-interest values.

Project lead

Beth Coleman

Associate Professor, Institute of Communication, Culture, Information and Technology and Faculty of Information, University of Toronto

Director, Knowledge Media Design Institute

Research Lead, Schwartz Reisman Institute for Technology and Society

Beth Coleman is an associate professor of data and cities at the Institute of Communication, Culture, Information and Technology and the Faculty of Information at the University of Toronto. Working across science and technology studies, generative aesthetics, and Black poesis, she focuses her research on smart technology and machine learning, urban data, civic engagement, and generative arts.

Coleman is a founding member of the Trusted Data Sharing group and a research lead on AI policy and praxis at the Schwartz Reisman Institute for Technology and Society. Her other affiliations include roles as a senior visiting researcher with Google Brain and Responsible AI, appointments at the Berkman Klein Center for Internet and Society at Harvard University, Microsoft Research New England, and the Data and Society Institute in New York, and expert consulting for the European Commission Digital Futures; she also received a 2021 Google Artists and Machine Intelligence award. She served as the founding director of the University of Toronto Black Research Network Institute Strategic Initiative.

Coleman is the author of Hello Avatar (MIT Press, 2011) and multiple articles, including “Race as Technology.” Her recent projects include the book and exhibition Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds (K. Verlag Press, Berlin, 2023).

Research clusters

The working group consists of 70 members across academia, industry, government, and civil society, selected for their expertise in AI ethics, law and policy, human–computer interaction, engineering, design, political science, science and technology studies, philosophy, and related fields. The group has representation from Canada, the United States, Mexico, South America, Europe, Africa, and Australia. Participants range from leading senior scholars to practitioners from industry, government, and policy contexts, as well as graduate students.

Given the breadth and complexity of trust in AI, the working group is organized into three research clusters. This structure creates focused environments for deep inquiry while enabling cross-cluster exchange on shared challenges. Each cluster brings together diverse expertise to examine trust from complementary angles—governance, law and rights, and community experience—reflecting the multifaceted and sociotechnical nature of trust itself. The clusters allow participants to work at different scales of analysis and practice, from institutional design and regulatory frameworks to lived experience and societal wellbeing, ensuring that insights remain grounded, interdisciplinary, and actionable.

Cluster 1: AI Governance, Policy, and Institutional Design

This cluster focuses on the systems and structures that govern how AI technologies are developed, deployed, and overseen. Members will examine public, private, and hybrid governance mechanisms—from regulatory frameworks and standards to institutional interventions—that aim to ensure accountability, transparency, and democratic legitimacy. The work emphasizes the sociotechnical dimensions of policy and design, exploring how organizational and technical decisions co-shape outcomes in practice.

Cluster 2: Law, Rights, and Justice in AI Systems

This cluster investigates how AI technologies interact with existing legal, ethical, and social frameworks, and how those frameworks may need to evolve. Members will explore questions of fairness, responsibility, and legitimacy through both legal and technical lenses, focusing on how power operates within sociotechnical systems. By integrating participants from legal, philosophical, and technical backgrounds, the cluster aims to bridge conceptual analysis with practical governance challenges.

Cluster 3: Communities, Wellbeing, and Societal Trust

This cluster addresses how AI systems shape and are shaped by lived experience, collective wellbeing, and public trust. Topics span health, education, and civic life, as well as how AI influences social cohesion, equity, and inclusion at the community level. Members will consider how local contexts, institutional practices, and cultural narratives contribute to broader patterns of societal trust in AI.
