Navigating trust in AI: Call for expressions of interest


SRI Research Lead Beth Coleman is leading a transdisciplinary working group focused on trust in AI design and governance, which will convene during the 2025–2026 academic year. Applicants wishing to participate are encouraged to submit expressions of interest by August 15, 2025. Photo credit: Connor Schneider/Unsplash


As artificial intelligence (AI) systems rapidly expand across sectors—from healthcare and education to criminal justice and public administration—a deeper exploration of trust, both in theory and practice, is urgently needed. AI systems are now embedded in everyday decisions, shaping human experiences and social outcomes in visible and invisible ways. Yet despite growing investment in technical safeguards and regulatory frameworks, there is no established account of how or why society should trust AI.

To address this urgent challenge, the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto is launching a call for expressions of interest to join the AI and Trust Working Group, a multinational and transdisciplinary research initiative. 

As a working group, our goal is to contextualize and test trust and trustworthiness in AI systems, specifically examining how those systems affect social, cultural, and political ecosystems. We seek to better understand which sociotechnical factors can shape human–machine interactions toward more accountable and beneficial outcomes. While many current approaches to AI safety and alignment focus on performance metrics or an aspirational universal ethics, we approach trust and trustworthiness through a productively adversarial methodological engagement. This engagement spans various scales of use and types of AI systems, allowing us to address questions of alignment, authority, and control across personal, productive, platform, political, and philosophical values.

This initiative will convene a distinguished group of global experts—leading scholars, seasoned practitioners, forward-thinking policymakers, and influential civil society leaders—alongside doctoral and postdoctoral researchers to explore a foundational question: 

As AI technologies transform our societies, how can we build systems that earn and maintain our trust while preserving human agency and democratic legitimacy?

About the AI and Trust Working Group

Led by SRI Research Lead Beth Coleman, a professor at both U of T’s Institute of Communication, Culture, Information and Technology and Faculty of Information, and director of the Knowledge Media Design Institute, SRI’s AI and Trust Working Group offers a unique opportunity to contribute to an intellectual commons and examine critical questions around trust and technology across disciplinary, sectoral, and cultural boundaries. Through collaborative exchange, participants will co-develop themes, contribute ideas, and shape workshop content and structure.

The working group will involve a year-long commitment from September 2025 through May 2026, with monthly, hybrid meetings designed to foster ongoing collaboration and dialogue. This will include a one-day, in-person workshop in Fall 2025 hosted at the Schwartz Reisman Innovation Campus that convenes participants for a day of collective insight and discussion. Structured to foster both creative thinking and practical outcomes, the working group’s agenda will explore foundational questions, such as:

  • How does trust operate in human–AI relationships across different sectors?

  • What institutional mechanisms can foster transparency, accountability, and responsiveness in AI governance?

  • How might trust be cultivated or challenged in cross-cultural, high-stakes, or power-imbalanced settings?

  • How can the design and adoption of advanced technologies be aligned with democratic and societal values?

Throughout the working group period, participants will contribute to shaping the agenda, developing case studies, and identifying practical mechanisms for building trustworthy AI systems, culminating in publications that convey these insights to larger audiences.

Outcomes and impact

This is a rare opportunity to shape global conversations on trustworthiness and AI at a crucial moment. The initiative addresses a critical interdisciplinary need in the global AI landscape: how to build systems that societies, institutions, and individuals can trust.

The working group will foster long-term networks among participants, creating lasting research and policy impact.

Insights from the group will be synthesized and published by SRI in a public-facing output designed to inform policy, institutional design, and global discussions on trustworthy AI. Participants will play a direct role in shaping this publication, which is intended to serve as a resource for policymakers, practitioners, and academics working at the intersection of AI and society.

Eligibility and expression of interest

SRI welcomes expressions of interest from individuals working in areas including, but not limited to: AI ethics, law and policy, human–computer interaction, engineering, design, political science, economics, statistics, philosophy, science and technology studies, international governance, and the environmental and social impacts of AI.

We are especially seeking applicants with a strong interest in cross-disciplinary collaboration across technical fields and the social sciences and humanities, and who are committed to exploring trust as a social, cultural, institutional, and technical phenomenon. We aim to foreground socio-technical frameworks that illuminate how trust in AI systems is shaped by broader societal contexts.

Travel and accommodation funding will be available for select participants based outside Toronto to attend the in-person workshop.

To express interest in joining the AI and Trust Working Group, please complete the online form and include the following materials:

  • A statement of interest (max. 500 words) outlining:

    • Your disciplinary background

    • How your perspective or expertise relates to the theme of trust and AI

    • Your interest in contributing to the collaborative working group and workshop

  • An up-to-date curriculum vitae (CV)

  • One writing sample of your academic or professional research

Deadline: Applications will be accepted on a rolling basis until August 15, 2025, at 11:59 p.m. Eastern Time.


About the Schwartz Reisman Institute

The Schwartz Reisman Institute for Technology and Society supports and integrates world-class research across sectors and disciplines to deepen our understanding of advanced technologies, law, institutions, regulatory structures, and social values. We foster interdisciplinary ideas, insights, and understandings of how technology affects society and individuals, with a goal of rethinking technology’s role in society, the contemporary needs of human communities, and the systems that govern them.  

Our mission is to deepen our knowledge of advanced technologies, societies, and what it means to be human by actively integrating research across traditional boundaries and fostering human-centered solutions that really make a difference—for everyone. 
