
Toronto Public Tech Workshop 2023

  • Campbell Conference Facility, 1 Devonshire Place, Toronto, ON M5S 3K7, Canada

The Schwartz Reisman Institute for Technology and Society and the Munk School of Global Affairs & Public Policy at the University of Toronto are pleased to host the 2023 Toronto Public Tech Workshop, with researchers from a wide range of disciplines presenting new work that explores the use of technology for public purposes.

As technology becomes an integral part of our lives, its impact on society is undeniable. From healthcare to education, finance to transportation, technological innovations have transformed the way we live and work. However, the rapid pace of this innovation also raises novel concerns about privacy, security, and equity. There is a pressing need to explore and propose solutions to these challenges through research, policy, regulation, partnerships, and collaborations across various academic disciplines and stakeholders.

This workshop aims to address these challenges and offer new insights and solutions by bringing together diverse perspectives and expertise from a wide range of backgrounds. Presenters will share and discuss ideas on how to leverage new and existing technologies for public purposes, integrate policy and governance considerations, and build successful partnerships that engage with democratic institutions and public values.

Speakers:

  • Peter Loewen, Munk School of Global Affairs & Public Policy, University of Toronto; associate director, Schwartz Reisman Institute for Technology and Society

  • Somayeh Amini and Shveta Bhasker, Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto

  • Onur Bakiner, Political Science Department, Seattle University

  • LK Bertram, Department of History, University of Toronto

  • Shion Guha, Faculty of Information, University of Toronto

  • Kelly McConvey, Faculty of Information, University of Toronto

  • Lynette Ong, Munk School of Global Affairs, University of Toronto

  • Yan Shvartzshnaider, Lassonde School of Engineering, York University

Venue:

Campbell Conference Facility, Munk School of Global Affairs & Public Policy, University of Toronto. 1 Devonshire Place, Toronto.

Registration:

Registration is available via Eventbrite and is open to the public. $20.00 plus HST and service fees.

For questions or accessibility accommodations, please contact sri.events@utoronto.ca.


Workshop Schedule:

8:30 AM | Registration and continental breakfast 

9:00 AM | Opening remarks, Peter Loewen (Munk School of Global Affairs, University of Toronto)

9:10 AM | Shion Guha (Faculty of Information, University of Toronto), “Rethinking ‘risk’ in algorithmic systems through a computational narrative analysis of casenotes in child-welfare” 

Risk assessment algorithms are being adopted by public sector agencies to make high-stakes decisions about human lives. These algorithms model “risk” based on individual client characteristics to identify the clients most in need. However, this understanding of risk rests primarily on easily quantifiable risk factors that present an incomplete and biased perspective of clients. We conducted a computational narrative analysis of child-welfare casenotes and draw attention to deeper systemic risk factors that are hard to quantify but directly impact families and street-level decision-making. We found that, beyond individual risk factors, the system itself poses significant risk: parents are over-surveilled by caseworkers and lack agency in decision-making. We also problematize the notion of risk as a static construct by highlighting the temporality and mediating effects of different risk, protective, systemic, and procedural factors. Finally, we caution against using casenotes in NLP-based systems by unpacking the limitations and biases embedded within them.

10:10 AM | Lynette Ong (Munk School of Global Affairs, University of Toronto), “Authoritarian statecraft in the digital age: Online public opinion management in China” 

The digital age has afforded autocrats new technologies of control, allowing them to co-opt, pre-empt, and repress dissent. But in what ways has it altered the way autocratic states conduct their statecraft and reconfigured the contours of state power? In this paper, we address these questions by examining how the Chinese state manages the online expression of public opinion. Public opinion is a double-edged sword in autocratic settings. While it allows rulers to gauge public sentiment and become more responsive to citizens’ demands, it can also spiral out of control and destabilize regimes. Thus, the management of online public opinion provides a critical window into understanding how the state conducts its statecraft in the digital age. Based on an analysis of more than 3,000 public procurement documents, we find that the Chinese state has outsourced various functions of public opinion management to private and state-owned corporations. These companies provide the technical expertise that allows the state to harness big data and artificial intelligence to manage the expression of public opinion online. In-depth analysis of the for-profit firms to which these services have been outsourced, and of their service functions, further reveals the nature of state-business relations and social control in China. The paper draws broader implications for the new performance of statecraft in the digital age, one based on state-business collaboration in autocratic China.

11:10 AM | Onur Bakiner (Political Science Department, Seattle University), “Pluralistic sociotechnical imaginaries in artificial intelligence law: The case of the European Union’s AI regulation” 

This paper asks how lawmakers and other stakeholders envision the potential benefits and challenges arising from artificial intelligence (AI). A close reading of the European Union’s AI Regulation, a bill proposed by the European Commission in April 2021, and of 302 response papers submitted by NGOs, businesses and business associations, trade unions, academics, public authorities, and EU citizens, shows that pluralistic sociotechnical imaginaries contest: (1) the essential characteristics of technology as they relate to social and political problems and to law; (2) whether, how, and how much law can enable, direct, or constrain scientific and technological developments; and (3) the degree to which law does or should intervene in scientific and technological controversies. The feedback from stakeholders reveals major disagreements with the lawmakers over how the relevant characteristics of AI should influence legal regulation, what the desired law should look like, and whether and how the law should intervene in expert debates in AI. What is more, different types of stakeholders diverge considerably in what they problematize and how they do so.

12:00 PM | Lunch  

1:00 PM | LK Bertram (Department of History, University of Toronto), “Instascholars: Making good data go viral in the disinformation age” 

How do we make accurate data go viral? Outside of my work as an associate professor at the University of Toronto, I am also an anonymous “instaprof” who runs a large-scale open history class on Instagram. My work is driven by this question and is the focus of a new SSHRC-funded project on high-yield knowledge mobilization strategies for video-based social media algorithms. My paper offers an overview of some of the digital and algorithmic literacy that scholars need to produce high-engagement or “viral” content, endemic issues with bias, censorship, and safety on video-based social media platforms, and the opportunities for university communities to create new, steady streams of accessible, accurate content for these big digital publics. 

Amid the early rise of the COVID-19 pandemic, the World Health Organization argued that it was also facing a twin “infodemic,” or the widespread public distribution of “false or misleading information in digital environments.” While social media platforms have shouldered much of the blame for the infodemic, the WHO cautions us that the success of both misinformation and disinformation campaigns has only been made possible by a corresponding vacuum of quality data online. Indeed, most scientists and scholars largely avoid social media platforms. Though some have developed a presence on text-based platforms like Twitter, very few circulate research on the biggest video-based platforms like TikTok and Instagram, in spite of their intense popularity. This absence is problematic. A 2021 study revealed that 86% of North Americans turn to video-based content on social media as a news source. The collective academic avoidance of these massive audiences, and their unchecked, largely unchallenged growth, have made some of the biggest digital publics in the world easy prey for misinformation and disinformation campaigns with troubling agendas, from anti-transgender legislation to curriculum bans on topics like slavery.

Some of the scholarly avoidance of video-based platforms reflects the disproportionate risks of harassment and violence faced by female, queer, and BIPOC scholars who speak out on social media. Beyond being unwelcoming and potentially hostile spaces, these platforms demand time that many scholars in the humanities and social sciences cannot afford to spend building larger-scale public outreach campaigns. Those who do often pursue them as side projects, as I have, frequently with little or no external support or recognition in academic circles. As Simone Lässig explains, monographs remain the “gold standard” for many humanities and social science scholars, while digital experimentation, content, and mobilization continue to play a far more “subordinate role” in how historians prioritize outputs. Missing, Noiret argues, are serious new conversations about how massive technological shifts require historians to reconsider our responsibilities and relationships to the digital public.

Rather than simply providing an overview of the problem, this paper also offers attendees a discussion of future possibilities and directions that can support stronger public access to academic research through video-based social media platforms. It describes the benefits of stronger algorithmic literacy campaigns for academics and ways to prioritize and defend equity, safety, and sustainability in an unequal digital landscape. It closes with a step-by-step introduction to the five qualities of high-engagement (viral) content for attendees who are interested in building their own knowledge mobilization campaigns for TikTok and Instagram.

2:00 PM | Kelly McConvey (Faculty of Information, University of Toronto), “A human-centered review of algorithms in decision-making in higher education” 

The use of algorithms for decision-making in higher education is steadily growing, promising cost-savings to institutions and personalized service for students, but also raising ethical challenges around surveillance, fairness, and the interpretation of data. To address the lack of systematic understanding of how these algorithms are currently designed, we reviewed an extensive corpus of papers proposing algorithms for decision-making in higher education. We categorized them based on input data, computational method, and target outcome, and then investigated the interrelations of these factors through the application of human-centered lenses: theoretical, participatory, or speculative design. We found that the models are trending towards deep learning and increased use of students’ personal data and protected attributes, with the target scope expanding towards automated decisions. However, despite the associated decrease in interpretability and explainability, current development predominantly fails to incorporate human-centered lenses. We discuss the challenges with these trends and advocate for a human-centered approach.

3:00 PM | Yan Shvartzshnaider (Lassonde School of Engineering, York University), “Privacy governance not included: Analysis of third parties in learning management systems” 

The tumultuous COVID-19 pandemic significantly impacted higher education. The rapid adoption of online remote learning platforms resulted in increased surveillance practices and a lack of transparency. While this transition enabled schools to remain open during a global pandemic, it exposed them to greater privacy challenges and threats. Although recent efforts have identified numerous specific educational privacy concerns involving major learning management systems, the challenges and uncertainties surrounding the use of third-party add-ons for learning management system (LMS) platforms remain relatively under-examined.

LMS add-ons—also known as plug-ins or Learning Tools Interoperability (LTI) tools—provide additional capabilities to existing LMS platforms. Many existing LMS platforms allow third-party add-ons to access the platform’s data to provide additional services. For example, the Turnitin plagiarism detection service has add-ons for all major LMS platforms, including Canvas, Moodle, and Blackboard. Importantly, the LMS platforms’ privacy policies often do not cover third-party add-ons and claim no responsibility for the privacy practices of these third parties.

A recent case before the Office of the Information and Privacy Commissioner (IPC) of Ontario, Canada showed that third-party add-ons inadvertently collected and shared student information (MC18-17 2022). Consequently, the IPC determined that the “[school board] does not have reasonable contractual and oversight measures in place to ensure the privacy and security of the personal information of its students.”

The IPC decision is indicative of the current status quo. When it comes to LMS add-ons, many universities follow informal practices that are not written down as an explicit policy. Usually, the burden falls on educational IT support staff to meet the needs of diverse stakeholder groups, such as educational technology practitioners, educators, and students. The lack of transparency behind many of these services adds to the challenge of understanding privacy and intellectual property implications with respect to third-party plug-ins in LMSs.

Motivated by these concerns and questions, we examine the use and governance of LMS add-ons at universities in the U.S. and Canada. Specifically, this paper explores third-party access to student data via add-ons, as well as the governance of third-party data sharing, via a multi-method design that draws on surveys, interviews, institutional policy analysis, and content analysis of LMS documentation. We document disparities in privacy practices and governance, and the nature of add-ons adoption processes. We also argue for greater transparency and oversight, drawing on exemplary practices identified via our empirical study.

In our study, we conduct interviews with data governance officers at 14 additional US universities, providing greater depth on the governance challenges associated with assessment and instructional LMS add-ons. A total of 25 professionals across these 14 universities discuss decision-making processes and frequent challenges, including coordination in the adoption and evaluation of add-ons, and value differences that divide preferences across administrative units on campus—typically IT, the Center for Innovation in Teaching & Learning (CITL), Provost’s office staff, faculty governance, and legal counsel. These results provide insight into who within higher education is responsible for decision-making about LMS data and third-party data flows via add-ons, including where decision-making processes break down and which stakeholders’ interests may best align with student privacy preferences.

4:00 PM | Somayeh Amini and Shveta Bhasker (Institute of Health Policy, Management and Evaluation, University of Toronto), “Unlocking the power of EHRs: Harnessing unstructured data for machine learning-based outcome predictions” 

Integrating electronic health records (EHRs) with machine learning (ML) models has become imperative in examining patient outcomes due to their vast amounts of clinical data. However, critical information regarding social and behavioral factors that affect health, such as mental health complexities, is often recorded in unstructured clinical notes, hindering its accessibility. This has resulted in an over-reliance on structured clinical data in current EHR-based research, leading to disparities in health outcomes. This study aims to evaluate the impact of incorporating patient-specific context from unstructured EHR data on the accuracy and stability of ML algorithms for predicting mortality. The study analyzed a sample of 1,058 patient records from the Medical Information Mart for Intensive Care III (MIMIC-III) database to identify mental health disorders among adults admitted to intensive care units between 2001 and 2012. All clinical notes from each patient’s most recent ICU stay were evaluated to acquire a comprehensive understanding of their mental health issues based on unstructured data. We examined a variety of machine learning classifiers, including Logistic Regression, kernel-based Support Vector Machines, decision-tree-based Random Forest, XGBoost, ExtraTrees, and sample-based K-Nearest Neighbors. Results from the study confirmed the significance of incorporating patient-specific information into prediction models, leading to a notable improvement in the discriminatory power and robustness of the ML algorithms. In addition, the findings underline the importance of considering non-clinical factors related to a patient’s daily life, alongside clinical characteristics, when predicting patient outcomes. These results mark a significant improvement for ML in clinical decision support and patient outcome prediction.

5:00 PM | Cocktail reception


About the speakers

Somayeh Amini and Shveta Bhasker are Master of Health Informatics (MHI) students at the Institute of Health Policy, Management and Evaluation at the University of Toronto’s Dalla Lana School of Public Health. Somayeh Amini holds a PharmD degree and, as a pharmacist, has worked in various management and leadership positions in community pharmacies and pharmaceutical companies in Iran, her home country. Amini joined the MHI program to pursue her dream of harnessing technology to provide patients with better care and experience, and is passionate about deploying artificial intelligence and machine learning to help cancer patients, especially those in hospice/end-of-life care. She believes health informatics can provide the knowledge and skills to improve these patients’ care and outcomes, including survival, quality of life, and treatment costs. Shveta Bhasker has published research in infectious diseases and has professional experience in public and global health. She is interested in the intersection of health informatics and the social determinants of health, approached with an equity, diversity, and inclusion mindset.

Onur Bakiner is an associate professor of political science at Seattle University. His research and teaching interests include transitional justice, human rights, judicial politics, and technology & society, particularly in Latin America and the Middle East. Bakiner’s current research addresses the impact of artificial intelligence technologies on human rights. His book Truth Commissions: Memory, Power, and Legitimacy (University of Pennsylvania Press, 2015) was awarded the Best Book Award by the Human Rights Section of the American Political Science Association in 2017. His articles have been published in the Journal of Comparative Politics, Annual Review of Law & Social Science, AI & Ethics, Negotiation Journal, Civil Wars, Journal of Law and Courts, International Journal of Transitional Justice, Memory Studies, and Turkish Studies. Bakiner has received funding from the German Academic Exchange Service, Social Sciences & Humanities Research Council of Canada, the Center for Business Ethics at Seattle University, and the Initiative in Ethics and Transformative Technologies at Seattle University.

LK Bertram is a faculty member in the Department of History at the University of Toronto specializing in the delivery of critical historical data through social media algorithms and the history of migration, gender, sexuality, and colonialism in the 19th century North American West. She is the author of The Viking Immigrants: Icelandic North Americans (UTP 2020, Winner: CHA Clio Prize), and is currently finishing a book on the financial lives of sex workers in the 19th century West. Bertram's newest work focuses on how scholars can more effectively combat digital disinformation campaigns. As the anonymous curator of a large-scale public history campaign that hit 9 million views, she focuses on high-yield data packaging strategies for larger-scale publics using video-based algorithms. This new SSHRC-funded project asks: “How do we make good data go viral in the disinformation age?” Follow the project at @socialforscholars.

Shion Guha is an assistant professor in the Faculty of Information with a cross-appointment in the Department of Computer Science at the University of Toronto, and a 2023–24 Schwartz Reisman Faculty Fellow. His current research interests cut across human-computer interaction, data science, and public policy. He is a co-author of the best-selling textbook Human-Centered Data Science: An Introduction (MIT Press, 2022), which combines technical methodologies with interpretive inquiry in order to address biases and structural inequalities in socio-technical systems. Guha is very interested in understanding how algorithmic decision-making processes are designed, implemented, and evaluated in public services. In doing so, he often works with marginalized and vulnerable populations, such as those involved with the child welfare, criminal justice, and healthcare systems. His work has been supported by grants from the National Science Foundation, Facebook, the Parkview Foundation, and the American Political Science Association, and has been featured in the media (Newsweek, Associated Press, ACLU, ABC, NBC, Gizmodo). He received an MS from the Indian Statistical Institute in 2010 and a PhD from Cornell University in 2016.

Peter Loewen is the Director of the Munk School of Global Affairs & Public Policy, a professor in the Department of Political Science and the Munk School at the University of Toronto, an associate director at the Schwartz Reisman Institute for Technology and Society, director of the Policy, Elections, and Representation Lab (PEARL), a senior fellow at Massey College, and a fellow with the Public Policy Forum. Loewen received his BA from Mount Allison University (2002) and his PhD from l’Université de Montréal (2008). Loewen’s work has been published in Proceedings of the National Academy of Sciences, Nature Medicine, Nature Human Behaviour, American Political Science Review, American Journal of Political Science, Journal of Politics, British Journal of Political Science, Political Research Quarterly, Transactions of the Royal Society B, and Journal of Economic Behavior and Organization, and other journals. He has edited four books and is a regular contributor to the media, including the New York Times, Washington Post, Globe & Mail, Toronto Star, and National Post.

Kelly McConvey is a PhD student at the Faculty of Information at the University of Toronto and a 2023–24 Schwartz Reisman Institute graduate fellow. McConvey’s research focuses on human-centered data science in the public sector and, specifically, the use of algorithms in higher education. She is advised by Shion Guha and is a member of the Human-Centered Data Science Lab. McConvey holds a Master of Management in Artificial Intelligence from the Smith School of Business at Queen’s University.

Lynette H. Ong is a professor in the Department of Political Science and the Munk School of Global Affairs and Public Policy at the University of Toronto. She is an expert on China, having conducted on-the-ground research in the country since the late 1990s. In addition, she has also published on the broader Indo-Pacific region, including Southeast Asia and India. Her research interests lie at the intersection of authoritarianism, contentious politics, and development. She has delivered expert testimonies before the US Congress and the Canadian House of Commons, and frequently offers expert commentaries to international and Canadian media. She recently completed a decade-long study on repression and state power in China, Outsourcing Repression: Everyday State Power in Contemporary China (Oxford University Press, 2022), which was covered in the Economist, the BBC World Service, and the Wall Street Journal. Her academic publications have appeared in Perspectives on Politics, Journal of Comparative Politics, China Quarterly, China Journal, and Journal of Contemporary Asia, among others. Her opinion pieces have appeared in Foreign Affairs, Foreign Policy, the Washington Post, the LA Times, the South China Morning Post, East Asia Forum, and the Globe and Mail. Ong is a 2022–24 Schwartz Reisman Faculty Fellow, and her research has been funded by the Social Sciences and Humanities Research Council, Connaught, the Chiang Ching-Kuo Foundation, and the Association for Asian Studies. She has held the positions of director of the Munk School China Initiative, acting director of the Contemporary Asian Studies Program, and director of the East Asia Seminar Series at the Asian Institute for many years.

Yan Shvartzshnaider is an assistant professor in the Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, at York University. He leads the Privacy Rhythm research lab, which develops privacy-enhancing methodologies and tools to help incorporate a socially meaningful conception of privacy that meets people’s expectations and is ethically defensible. Prior to York, he was an assistant professor and faculty fellow in the Courant Institute of Mathematical Sciences at New York University, affiliated with the Analysis of Computer Systems (ACSys) and Open Networks and Big Data Lab groups. He was also a visiting research associate at the Digital Life Initiative at Cornell Tech and at the Center for Information Technology Policy.


About the Schwartz Reisman Institute

Located at the University of Toronto, the Schwartz Reisman Institute for Technology and Society’s mission is to deepen our knowledge of technologies, societies, and what it means to be human by integrating research across traditional boundaries and building human-centred solutions that really make a difference. The integrative research we conduct rethinks technology’s role in society, the contemporary needs of human communities, and the systems that govern them. We’re investigating how best to align technology with human values and deploy it accordingly. The human-centred solutions we build are actionable and practical, highlighting the potential of emerging technologies to serve the public good while protecting citizens and societies from their misuse. We want to make sure powerful technologies truly make the world a better place—for everyone.
