New SRI working group will explore the role of trust in human-ML interaction
Trust is a pivotal concept in the interactions between humans and machine learning (ML) systems. Do we trust ML, and if so why? Perhaps just as importantly, how can ML systems earn and maintain our trust?
These questions lie at the heart of a new working group convened by Beth Coleman, research lead at the Schwartz Reisman Institute for Technology and Society (SRI), and an associate professor at the University of Toronto’s Institute of Communication, Culture, Information and Technology and Faculty of Information.
The call for applications is now closed, and the established group comprises University of Toronto doctoral and postdoctoral researchers specializing in various topics relating to trust in human-ML interaction. Scholars from diverse academic backgrounds including, but not limited to, human-computer interaction (HCI), behavioral sciences, economics, psychology, legal studies, and engineering were encouraged to apply.
The group will conduct a comprehensive environmental scan of how trust is conceptualized and operationalized across disciplines. Through the project, Coleman seeks to establish a diverse cohort of researchers in order to develop a deeper understanding of the role of trust in human-ML interaction, and to collaboratively identify new approaches to understanding trust.
Working group sessions will take place from January to May 2024, with monthly meetings at SRI providing a platform for the group’s exploration and exchange of ideas.
What factors influence and guide our trust?
The significance of trust in the interactions between humans and ML systems cannot be overstated. Trust influences user adoption, ethical considerations, the societal impact of emerging technologies, and more.
The working group anticipates two key outcomes: the creation of a white paper offering a comparative international review of disciplinary approaches to trust, and the generation of new frameworks and insights to guide future interdisciplinary research and policymaking on trust in technology and society.
Working group members are encouraged to embrace the challenge with an open mind and a commitment to interdisciplinary engagement. The working group is home to individuals eager to learn about cross-disciplinary approaches, contribute insightful perspectives, and demonstrate a proactive attitude in both collaborative and individual research efforts.
How to apply (update: applications are now closed)
Application form: Interested candidates could apply to join the working group by completing an online form and including the following.
Curriculum vitae (CV).
Statement of interest (500 words), including:
Disciplinary background (e.g., law, science and technology studies, computer science).
Interest in the working group project.
Potential domain of inquiry related to the working group project, focusing on how it relates to trust in the context of your field.
Academic reference: Provide the contact name and email of a professor or professional you have worked with. Letters of reference are not required.
Writing sample: Submit a sample of your academic or professional writing.
Applications were accepted on a rolling basis until January 22, 2024, at 11:59 PM Eastern Time. For any inquiries about the working group, please contact Beth Coleman at beth.coleman@utoronto.ca.
Want to learn more?
View the call for applications (PDF) for the SRI Working Group on Trust in Human-ML Interaction.
Learn more about Beth Coleman’s recent project on Octavia Butler and AI.
Explore SRI’s recent programming on how technology can benefit public good.