2022 SRI Graduate Workshop explores “Technologies of Trust”


How are advanced technologies shaping beliefs and truths in our daily lives, and what role does trust play in developing new technologies? At the 2022 SRI Graduate Workshop “Technologies of Trust,” graduate researchers presented a wide range of interdisciplinary scholarship exploring these themes in relation to health sciences, philosophy, management, and education.


Artificial intelligence (AI) and machine learning are transforming society’s methods of generating, recording, and communicating information. How are advanced technologies fundamentally shaping beliefs and truths in our daily lives, and what role does trust play in developing and deploying new technologies?

Graduate fellows at the Schwartz Reisman Institute for Technology and Society (SRI) explored answers to these questions and more at the 2022 SRI Graduate Workshop, “Technologies of Trust,” hosted as part of the institute’s academic conference, Absolutely Interdisciplinary. Drawing on the multiple meanings of the workshop’s title, presenters discussed the role of technology in generating trust for users and institutions, as well as the important relationship between emerging technologies and trust for individuals and society as a whole.

The workshop brought together interdisciplinary and critical scholarship from a wide range of fields, including medicine and health sciences, philosophy, management, and education. Across the presentations, what stood out was the pervasive influence of new technologies on the many institutions that organize society, including hospitals and nursing, the child welfare system, and classrooms.

 

Participants in the 2022 SRI Graduate Workshop’s morning session discuss how the use of technologies in healthcare impacts trust. From left to right, top to bottom: moderator Vinyas Harish, and panelists Kimberly Crasta, Sai-Amrit Maharaj, Kamilah Ebrahim, Zihan (Ellis) Gao, Radhika Prabhune, and Erina Moon.

 

New technologies can offer benefits and pose risks to trust

Several presentations focused on the potential of new technologies to strengthen trust, especially within the fields of medicine and science. In their analysis of communication between healthcare institutions, Kimberly Crasta and Sai-Amrit Maharaj (Translational Research Program, LMP, University of Toronto) noted that poor use of communication technologies can reduce the efficacy of care for patients, and that the under-utilization of technology can have a direct impact on patient trust. Zihan (Ellis) Gao and Radhika Prabhune (Department of Laboratory Medicine and Pathobiology, University of Toronto) surveyed patient perspectives on the use of AI in healthcare, including the opportunities and challenges these innovations will meet in the context of patient trust. As Prabhune noted, burnout in healthcare is a systemic issue; to address it, the field must find ways to expand care beyond a small pool of primary-care practitioners. Alexander Brechalov (Donnelly Centre for Cellular and Biomolecular Research, University of Toronto) demonstrated another innovative use of technology to support scientific research: a neural network-based algorithm that recommends scientific journals and titles to researchers based on a paper’s abstract.
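To make the idea of abstract-based journal recommendation concrete, here is a minimal, purely illustrative sketch. Brechalov’s system is neural network-based; this stand-in substitutes a simple TF-IDF text classifier, and the abstracts and journal names below are invented for demonstration.

```python
# Illustrative sketch only: the presented system is neural network-based;
# this stand-in uses a TF-IDF bag-of-words classifier to show the core idea
# of ranking journals for a new abstract. The toy corpus is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (abstract text, journal it appeared in).
abstracts = [
    "protein folding dynamics observed in single-cell imaging",
    "genome-wide association study of metabolic pathways",
    "convolutional networks for medical image segmentation",
    "transformer models improve clinical text classification",
]
journals = [
    "Cell Biology Letters",   # hypothetical journal names
    "Journal of Genomics",
    "Medical AI Review",
    "Medical AI Review",
]

# Vectorize abstracts and fit a classifier over journal labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(abstracts, journals)

# Rank candidate journals for a new abstract by predicted probability.
new_abstract = "deep learning for tumor detection in radiology scans"
probs = model.predict_proba([new_abstract])[0]
for journal, score in sorted(zip(model.classes_, probs), key=lambda p: -p[1]):
    print(f"{journal}: {score:.2f}")
```

Running the script prints the candidate journals ranked by predicted fit for the new abstract.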

Describing the influence of digital platforms on users, Yisheng Li (Ted Rogers School of Management, Toronto Metropolitan University) and Alice Huang (Department of Philosophy, University of Toronto) both analyzed how social media shapes truth, notions of expertise, and meta-expertise, offering insights that helped workshop attendees better understand how trust is built in society at large. Li and Huang both drew on the concept of homophily—the tendency for people to be attracted to viewpoints similar to their own—to explain how trust can be shaped in positive and negative ways. As Li demonstrated, homophily can be exploited by algorithms to popularize conspiracy theories via online social networks, while Huang used a simulation model to demonstrate how homophily can enable agents to arrive at more accurate beliefs through forms of open communication.
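For readers curious about what such a simulation can look like, below is a minimal sketch (not Huang’s actual model) of a bounded-confidence belief dynamic: agents update only toward peers with similar beliefs (homophily) while also receiving noisy evidence about a true value. All parameters are illustrative assumptions.

```python
# Minimal sketch of a homophily-style belief dynamics simulation. This is
# NOT Huang's actual model; it is a bounded-confidence variant in which
# agents average only with like-minded peers, plus noisy private evidence.
import random

TRUE_VALUE = 0.7   # ground truth the agents are estimating (assumed)
N_AGENTS = 50
THRESHOLD = 0.2    # homophily: only listen to agents within this distance
STEPS = 100

random.seed(42)
beliefs = [random.random() for _ in range(N_AGENTS)]

for _ in range(STEPS):
    # Each agent draws a noisy observation of the truth.
    evidence = [TRUE_VALUE + random.gauss(0, 0.1) for _ in range(N_AGENTS)]
    updated = []
    for i, b in enumerate(beliefs):
        # Homophily: average only over agents with similar beliefs.
        peers = [x for x in beliefs if abs(x - b) <= THRESHOLD]
        social = sum(peers) / len(peers)
        # Mix social information with private evidence.
        updated.append(0.5 * social + 0.5 * evidence[i])
    beliefs = updated

mean_belief = sum(beliefs) / N_AGENTS
print(f"mean belief after {STEPS} steps: {mean_belief:.3f} (truth {TRUE_VALUE})")
```

In this toy setting the population’s mean belief drifts toward the true value; varying THRESHOLD illustrates how strongly homophily constrains which voices each agent hears.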

Understanding the daily practices and processes through which algorithms and data systems operate helps us to see how these systems can be empowering, but also disruptive, punitive, and disciplining. In their research on child welfare in the United States, Kamilah Ebrahim and Erina Moon (Faculty of Information, University of Toronto) provided a case study highlighting how the pervasive use of algorithms to make critical decisions—such as assigning caseworkers, deciding on financial supports for foster parents, or managing issues of staffing and timelines—can adversely impact stakeholders. Unless the technologists responsible for designing these systems work closely with policymakers to assess their intended goals and outcomes, caseworkers, parents, and children can be left without agency or voice in this technology-guided process, resulting in misallocated resources and unmet needs.

Randeep Nota (OISE, University of Toronto) identified similar issues in the use of online proctoring software in post-secondary education to detect plagiarism and monitor behavior during exams. The vast amounts of data generated through these interfaces are largely unregulated: students are required to provide consent in order to complete their coursework, while companies are left free to store, use, and distribute their data. Nota argued that such technologies are inadequate, if not useless and potentially harmful, as solutions to the education system’s deeper issues concerning pedagogy and career outcomes for students.

Social contexts are essential for understanding trust

Many presentations highlighted data ownership as a key factor in understanding and ensuring trust. Private ownership of data and deregulation can compromise the privacy of users, limit individuals’ ability to dictate the terms under which their information is used and disseminated, and leave them without mechanisms for redress. International models of state regulation of data and technology systems were raised during the workshop’s discussion period, emphasizing the need for stricter privacy laws, transparency across the data pipeline, and deliberation on single-use data.

An overarching insight from the workshop’s participants was that trust does not exist in a vacuum: the historical and socio-political context within which technology is used matters. As Kamilah Ebrahim (Faculty of Information, University of Toronto) noted in her presentation on tracking technologies developed during the COVID-19 pandemic, trust is not a uniform construct or singular narrative that can be designed for, but a dynamic principle that depends on who is affected. Ebrahim demonstrated how the use of facial recognition technologies in tracking apps developed by government agencies was rejected by racialized populations due to the technology’s prior use in predictive policing. Such precedents serve as an indicator of a lack of trust, but also demonstrate how larger issues that compromise trust in society—such as racism, biased policing, and surveillance—need to be addressed before certain technologies can be used at a large scale without negative implications.

In essential public services such as healthcare and education, how to navigate the deployment of algorithms continues to be a challenging question. To unlock the potential benefits of these technologies, we need to ask whose voices are included—and not included—in designing these systems, in what ways stakeholders benefit from their application, and how these technologies address, or potentially reproduce, historically oppressive relations of race, gender, class, and other power structures. While new technologies may offer great potential for developing a more egalitarian and trustworthy society, we must be wary of deterministic approaches to these tools and work carefully to ensure that a wide range of perspectives from across society are considered in their development and use.


About the author

Asmita Bhutani is a PhD student at the Ontario Institute for Studies in Education (OISE) at the University of Toronto and a 2021–23 Schwartz Reisman Graduate Fellow specializing in anti-racist feminist political economy and workplace studies. Examining data-driven AI chains as socio-technical systems, her work focuses on the gendered aspects and labour processes of data annotation platforms.

