Shedding some light on the SRI summer research assistant program
For the third consecutive year, the Schwartz Reisman Institute for Technology and Society opened its doors to a select group of Juris Doctor (JD) students through its summer Research Assistant (RA) program. Learn more about this year's research projects and how our RA partnership with the Future of Law Lab has opened new insights and experiences for students interested in AI governance.
What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime
In April 2024, the Government of Canada pledged $2.4 billion toward artificial intelligence (AI) in its annual budget, including $50 million earmarked for a new AI Safety Institute. What scope, expertise, and authority will the recently announced Canadian AI Safety Institute need to achieve its full potential? We examine the early approaches of AI safety institutes in the UK, the US, and the EU.
From mourning to machine: Griefbots, human dignity, and AI regulation
Griefbots are artificial intelligence programs designed to mimic deceased individuals by using their digital footprint. Griefbots raise significant concerns about data collection and implications for human dignity. This article explores the digital afterlife industry and the ethical and legal challenges it presents, including a consideration of health, privacy, and property laws in Canada.
The smart way to run smart cities: New report explores data governance and trusted data sharing in Toronto
A new report from SRI Research Lead Beth Coleman, SRI Graduate Fellow Madison Mackley, and collaborators explores questions such as: How can we facilitate data-sharing across divisions to improve public policy and service delivery? What are the risks of data-sharing, how can we mitigate those risks, and what are the potential benefits of doing it right?
Harming virtuously? Value alignment for harmful AI
The field of AI safety emphasizes that systems be aligned with human values, often stating AI should “do no harm.” But lethal autonomous weapons, such as automated firearms and drones, are already harming people. How can we address the reality of purposely harmful AI systems? SRI Graduate Fellow Michael Zhang writes about a panel of experts exploring this topic.
SRI working group investigates the concept of trust across disciplinary perspectives
Can we trust the behaviours, predictions, and pronouncements of the advanced artificial intelligence (AI) systems that are seemingly everywhere in our lives? A working group led by SRI Research Lead Beth Coleman is exploring this question through a multidisciplinary approach. Learn more about the group members and what they’re working on.
All about Bill C-70, the Canadian government’s attempt to counter foreign interference
Although foreign interference did not impact the results of Canadian elections in 2019 and 2021, it “stained” the electoral process, undermining public confidence in Canada’s democratic institutions. What measures does Bill C-70 (“An Act respecting countering foreign interference”) take to bolster Canadian confidence in elections? And how might it apply to the use of AI in our elections?
The terminology of AI regulation: Ensuring “safety” and building “trust”
We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “safety” and “trust”? Are advanced artificial intelligence (AI) systems a threat to our sense of safety and security? Can we trust AI systems to perform increasingly critical roles in society? Precise and useful understandings of these terms across diverse contexts are a crucial step toward effective policymaking.
New SRI/PEARL survey now published, reveals worldwide public opinion about AI
A new report shares findings on opinions about artificial intelligence (AI) in 21 countries. GPO-AI reveals diverse, region-specific attitudes about the use of artificial intelligence, and topics of focus in the survey include job loss, deepfakes, and state regulation. The project was led by SRI Associate Director Peter Loewen, and features contributions from SRI Graduate Fellow Blake Lee-Whiting.
Absolutely Interdisciplinary 2024 fosters innovation and collaboration
At SRI’s annual academic conference, leading researchers from diverse fields came together to tackle the complexities of AI alignment and how to better understand the social impacts of data-driven technologies. Twenty-eight distinguished speakers presented new approaches and ideas to better understand how these technologies are impacting our world.
SRI/PEARL joint project explores global public opinion on AI with contributions to 2024 Stanford AI Index
The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has released its seventh annual AI Index, and this year’s edition includes data from a joint project by SRI and the Policy, Elections and Representation Lab (PEARL) at the Munk School of Global Affairs & Public Policy. The Stanford Index is considered one of the most trusted and widely read indices about the state of AI in the world.
Global group of experts advises on concrete steps toward a robust AI certification ecosystem
When new technologies enter the world, they must earn trust. How can we create trust in artificial intelligence? A new report from the Certification Working Group (CWG) explores the necessary elements of an ecosystem that can deliver effective certification to support AI that is responsible, trustworthy, ethical, and fair.