Events, Commentary Schwartz Reisman Institute

The big picture of dangerous capability evaluations: David Duvenaud at the Seminar Series

How can we stay in control when AI systems surpass human intelligence? In a recent SRI Seminar, Schwartz Reisman Chair David Duvenaud explored the frontier of AI safety, alignment, and governance, introducing new research on “dangerous capability” evaluations and control protocols designed to detect when AI models become too powerful to oversee.

Read More
Commentary, Solutions Schwartz Reisman Institute

Data privacy and governance for Canadian innovation: SRI responds to Canada’s implementation of global privacy certifications

The Schwartz Reisman Institute for Technology and Society responds to Canada’s consultation on global privacy certifications, outlining how CBPR and PRP can strengthen data protection, build public trust, and drive innovation in the digital economy.

Read More
Commentary, Solutions Sarah Rosa

What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime

In April 2024, the Government of Canada pledged $2.4 billion toward AI in its annual budget, including $50 million for a new AI Safety Institute. What scope, expertise, and authority will the new institute need to achieve its full potential? We examine the early approaches of similar institutes in the UK, US, and EU.

Read More
Commentary, Solutions Ella Lim

From mourning to machine: Griefbots, human dignity, and AI regulation

Griefbots are artificial intelligence programs designed to mimic deceased individuals using their digital footprints. They raise significant concerns about data collection and implications for human dignity. This article explores the digital afterlife industry and the ethical and legal challenges it presents, including considerations of health, privacy, and property laws in Canada.

Read More
Commentary, Solutions Ella Lim

All about Bill C-70, the Canadian government’s attempt to counter foreign interference

Although foreign interference did not impact the results of Canadian elections in 2019 and 2021, it “stained” the electoral process, undermining public confidence in Canada’s democratic institutions. What measures does Bill C-70 (“An Act respecting countering foreign interference”) take to bolster Canadian confidence in elections? And how might it apply to the use of AI in our elections?

Read More
Commentary David Baldridge, Beth Coleman, and Alicia Demanuele

The terminology of AI regulation: Ensuring “safety” and building “trust”

We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “safety” and “trust”? Are advanced artificial intelligence (AI) systems a threat to our sense of safety and security? Can we trust AI systems to perform increasingly critical roles in society? Precise and useful understandings of these terms across diverse contexts are a crucial step toward effective policymaking.

Read More
Commentary David Baldridge, Michael Beauvais, Alicia Demanuele, and Leslie Regan Shade

Five key elements of Canada’s new Online Harms Act

Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. Below, Schwartz Reisman researchers take us on a tour through key aspects of the bill, including its taxonomy of harms, new expectations and requirements for social media platforms, and new kinds of protections for children and youth online.

Read More
Commentary David Baldridge, Beth Coleman, and Jamie Amarat Sandhu

The terminology of AI regulation: Preventing “harm” and mitigating “risk”

We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “harm,” “risk,” “safety,” and “trust”? SRI experts take us through the implications of the words we use in the rules we create.

Read More