SRI working group investigates the concept of trust across disciplinary perspectives

 
A collage of ten headshots, arranged in two rows of five, against a pale blue background.

Top row, L-R: Hiu Fung Chung, Beth Coleman, Matthew da Mota, Alicia Demanuele, Joseph Donia. Bottom row, L-R: Davide Gentile, Camille Hazzard, Leo Huang, Rachel Katz, Atrisha Sarkar.


Psychotherapy chatbots. Smart cities. AI-powered search engines. Can we trust the behaviours, predictions, and pronouncements of the advanced artificial intelligence (AI) systems that are seemingly everywhere in our lives?

“It’s hard today to walk into a social space and not see people either using some form of AI or talking about it,” says SRI Postdoctoral Fellow Atrisha Sarkar, a computer scientist whose research focuses on multi-agent systems, empirical game theory, and other computational methods for safety in AI systems.

“This is quite unprecedented,” says Sarkar, who received her PhD in computer science from the University of Waterloo, where she specialized in interactions between autonomous vehicles and road users.

“Even though there are divergent views on how these AI-based systems should be deployed, there’s one question we’re all asking: Can we trust these systems? And I think this question can only be adequately addressed using a multidisciplinary approach.”

Sarkar is part of a working group, led by SRI Research Lead Beth Coleman, investigating the role of trust in human interactions with machine learning (ML) systems. Coleman’s work as an associate professor at UTM’s Institute of Communication, Culture, Information and Technology and U of T’s Faculty of Information has spanned everything from smart technology and ML to urban data and civic engagement to generative arts, as in her recent project Reality was Whatever Happened: Octavia Butler AI and other Possible Worlds.

Coleman is now turning her attention to public policy and the governance of AI, with a particular focus on the words and concepts we deploy in these efforts—pointing out that “the words we use shape the future we create.” 


Coleman says the impetus for forming the working group was “to develop a deeper understanding of the role of trust in our interactions with ML systems, and to collaboratively identify new approaches to understanding trust.”

“The significance of trust in this domain cannot be overstated right now,” says Coleman. “Trust influences everything from user adoption of these tools, to ethical considerations, to the societal impact of emerging technologies, and so much more.”

Working group member Davide Gentile, a PhD candidate in the Department of Mechanical and Industrial Engineering and an SRI graduate affiliate, agrees with Sarkar about the timeliness of the group’s work.

“Understanding the dynamics around trust and reliance is particularly crucial today,” says Gentile. “ML systems are increasingly integrated into environments where people may not have the knowledge and skills to understand the limitations of these systems. People also may not fully understand the consequences of unwarranted reliance on these tools. So we have to look carefully at these dynamics right now.”

Another member of the working group, Camille Hazzard, is pursuing her PhD at the Centre for Criminology and Sociolegal Studies. Her work focuses on how data and the use of digital technology regulate human behaviour and shape the way we think about agency and privacy in carceral and non-carceral spaces.

Hazzard says she was interested in joining the working group “because I was in search of an interdisciplinary community of researchers who are passionate about examining the impacts of datafication on society.”

Sarkar, Coleman, Gentile, and Hazzard are joined by a diverse group of scholars, including Rachel Katz, PhD candidate in the Institute for the History & Philosophy of Science & Technology (IHPST); Leo Huang, PhD candidate in the Department of Psychology; Matthew da Mota, Digital Policy Hub fellow at the Centre for International Governance Innovation and research associate at U of T’s Media Ethics Lab; Hiu Fung Chung, PhD student in the Faculty of Information; and Joseph Donia, PhD candidate in the Institute of Health Policy, Management and Evaluation (IHPME).

The group also includes SRI Policy Researcher Alicia Demanuele, who is both a member representing public policy and the project’s administrator. In fact, Demanuele and Coleman have already kick-started thinking about the meanings and uses of the term “trust” in a recent piece co-written with SRI Policy Researcher David Baldridge. 

“Trust is a key concept that gets tossed around a lot in discussions of AI,” says Demanuele. “However, this happens without much understanding of what it actually means to build trust in someone or something, and concurrently what it means for that trust to be lost. Even within the domain of public policy, the range of meanings and applications of trust is vast.”

Rachel Katz’s research at the IHPST examines the pros and cons of AI-facilitated psychotherapy in apps like Bloom and Youper. She’s specifically interested in AI-delivered therapy that doesn’t involve a clinician at any point. 

“I actually don’t believe it’s possible for humans to trust AI,” says Katz. “But I wanted to discuss different perspectives on this issue with colleagues in other fields, which is what drew me to this working group.”

Like Katz, Leo Huang works in the realm of psychological and emotional support from AI agents. His research in the Social Psychophysiological Research & Quantitative Methods Laboratory explores how AI agents might provide humans with needed support to facilitate our wellbeing when our (human) social networks fall short in this regard.

But Huang is slightly more optimistic than Katz.

“I want to see if and how we can have useful relationships with AI agents,” says Huang. “When it comes to the benefits and perils of using AI, there’s a lot being said right now about the AI models and agents themselves, but comparatively less attention is given to the human side of this human-AI relationship.”

“Coming from a social psychological perspective, I hope to contribute to the conversation on how trust can be defined and improved in the human-AI relationship.”

Like the other working group members, Matthew da Mota points to the interdisciplinary nature of the group as part of its appeal. His work examines the integration of ML/AI systems into research institutions such as research libraries and archives.

“Because these institutions and the research networks they form are the foundation for all research and experimentation across disciplines, it’s imperative that any tool be held to a high standard of trustworthiness,” says da Mota.

“The discourse on trustworthy AI is flawed right now,” says da Mota. “There’s an assumption that we agree on what ‘trust’ means—despite the many paradigms for understanding trust. The working group’s task of figuring out what ‘trust’ means across academic disciplines is essential to developing trustworthy AI systems.”

“And my experience studying literature and the philosophy of history has shown me that the humanities are often underrepresented in discussions of technology,” adds da Mota, “so I hope to bring this perspective to the working group.”


Demanuele echoes da Mota’s sentiments about the group’s interdisciplinarity. “We’re aiming to better understand how trust in ML interactions differs amongst disciplines and what commonalities can be drawn out in terms of building trust.”

From the STEM side, group member Davide Gentile is an applied scientist specializing in human factors engineering; he has previously written at SRI about the user interaction challenges posed by large language models.

Gentile says it’s crucial to understand trust dynamics in human-ML interaction, as they may predict user reliance “especially in situations characterized by uncertainty and vulnerability.”

Gentile’s PhD research involved conducting human-in-the-loop simulations to understand how people use ML systems for decision making in high-risk environments, and how engineering design principles like transparency and explainability shape user interactions with ML systems. The findings from this research can inform the design of decision aids that support human decisions and system performance. 

Hazzard says her research with the working group has revealed important insights into how trust—and distrust—affect the regulation of human behaviour in the criminal justice system and society at large.

“Our research and forthcoming report are especially needed at a time when it is increasingly difficult to discern the veracity of the data that shape the lenses through which we see the world,” says Hazzard.

“It is by becoming acquainted with the multifaceted constitution of the word ‘trust,’” says Hazzard, “that we, as humans, can learn to adjust our expectations of digital infrastructures as needed and determine the conditions under which it is appropriate to invest our trust in said infrastructures.”

The working group on trust in human-ML interaction expects to publish a paper presenting its research, findings, and suggestions for further inquiry by the end of summer 2024. Stay tuned to SRI’s channels to be notified of the paper’s release: follow us on X/Twitter or LinkedIn, or subscribe to our monthly newsletter.
