Building trust in AI: A multifaceted approach
Pictured, top row from left: Beth Coleman, Duncan Cass-Beggs, and Matthew da Mota. Bottom row from left: Monique Crichlow, Donato Ricci, and Katharina Zügel.
Understanding how trust is built between groups of people, institutions and technologies is essential for thinking about how AI systems can be trusted to reliably address human needs while mitigating risks.
While the concept of trust is widely referenced in AI policy discussions, consensus remains elusive on its meaning, its role in governance, and how it should be integrated into AI development—especially in public service contexts.
To address this, the Schwartz Reisman Institute for Technology and Society (SRI) hosted a roundtable discussion on February 11, 2025, as part of the official side events at the AI Action Summit in Paris. Titled Building Trust in AI: A Multifaceted Approach, the discussion centered on insights from an upcoming SRI paper, Trust in Human-Machine Learning Interactions: A Multifaceted Approach, led by SRI Research Lead Beth Coleman, which examines multidisciplinary approaches to fostering trust in human-machine learning interactions.
Exploring Trust in AI Systems
Joining Coleman on the panel was a diverse set of experts, including Monique Crichlow of SRI, Donato Ricci of Sciences Po Medialab, Katharina Zügel of the Forum on Information and Democracy (the Forum), and Duncan Cass-Beggs and Matthew da Mota of the Global AI Risks Initiative at the Centre for International Governance Innovation. Each brought unique perspectives on trust in AI and its implications for governance, certification, and public adoption.
Defining Trust Beyond AI Systems: Coleman highlighted the importance of defining trust as a broader concept that extends beyond technical systems to the relationships among humans, institutions, and technology—and the challenge of understanding how those relationships evolve.
Infrastructure and Ecosystem for Trust: Crichlow emphasized that trust in AI cannot exist in isolation—it requires robust governance structures and accountability mechanisms.
Situational Context: Ricci emphasized the need to analyze the conditions under which trust emerges, arguing that AI cannot be viewed as a monolithic entity but rather as a series of interactions within specific environments.
Certification for Trustworthy AI: Zügel introduced the Forum’s voluntary certification mechanism for public interest AI, positioning it as a tool to guide and ensure responsible AI development.
Global Governance: Cass-Beggs explored how trust factors into the development of international AI governance frameworks, particularly as transformative AI technologies gain prominence.
Bridging Global and Local Perspectives: A co-author of the forthcoming SRI paper, da Mota examined the distinction between ‘trustworthy’ machines and the trust relationships between humans and institutions that deploy AI.
Key Takeaways: Trust as an Action
A central conclusion from the discussion was that trust is an action—continuously performed, reappraised, and evaluated between human actors and institutions. As Coleman noted, "We must move toward a better understanding of AI’s limitations and capabilities rather than getting caught in the noise around its potential." Coleman added that while technical reliability plays a role in trustworthiness, actual trust in AI deployment depends more on public perception of the institutions and individuals managing these systems.
The discussion highlighted that as governments incorporate AI into public services and infrastructure, regulatory frameworks—whether treaties, legislation, voluntary codes, or technical standards—will play a crucial role in shaping societal trust in AI. Coleman underscored this urgency, stating, "Trust is not an object but an ongoing negotiation. Understanding how AI fits within existing trust relationships is key to responsible deployment."
How trust is conceived and interpreted in the development of AI systems and governance tools will be central to how societies frame the goals of AI adoption and set thresholds for the safety, transparency, and fairness of AI systems.
Trust and the Broader AI Action Summit
The roundtable’s insights tied into the summit’s broader debates on AI governance, commercialization, and global cooperation.
Notable summit outcomes included:
The launch of the G7 reporting framework for the Hiroshima AI Process (HAIP), setting an international code of conduct for organizations developing advanced AI.
Canada and Japan signing onto the Council of Europe’s Framework Convention on AI, marking a step toward more structured global AI governance.
Updates on the ongoing development of AI standards by CEN and CENELEC Joint Technical Committee 21, aligning with the EU AI Act.
While enthusiasm for formal international AI agreements appears to have tempered, these initiatives reflect continued collaboration across different aspects of AI governance.
Notably, while some praised the AI Action Summit for shifting discussions beyond safety and risk to consider AI adoption and economic impact, critics viewed this as a departure from the safety focus established at prior international meetings, such as the Seoul and Bletchley Park summits.
Looking Ahead
The discussion on trust in AI is far from over. As governments and industries race to deploy AI across sectors, ensuring that AI systems are not just technically reliable but also socially and ethically grounded will be a key challenge. The AI Action Summit reinforced that building trust in AI is not solely a technical issue—it is a governance, societal, and philosophical challenge that will shape the future of AI and society worldwide.
Want to learn more?
Read the highlights from the conference: The Path to Safe, Ethical AI: SRI Highlights from the 2025 IASEAI Conference in Paris
Read the report: “Policy and Practice in Data Governance and Sharing: Engaging Toronto’s Digital Infrastructure Strategic Framework (DISF) to Model Trusted Data Sharing” (PDF).
Learn about the SRI working group investigating the concept of trust across disciplinary perspectives.