Commentary, Solutions | Sarah Rosa

What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime

In April 2024, the Government of Canada pledged $2.4 billion toward AI initiatives in its annual budget, including $50 million for a new AI Safety Institute. What scope, expertise, and authority will the new institute need to achieve its full potential? We examine the early approaches of similar institutes in the UK, US, and EU.

Commentary, Solutions | Ella Lim

From mourning to machine: Griefbots, human dignity, and AI regulation

Griefbots are artificial intelligence programs designed to mimic deceased individuals using their digital footprints. They raise significant concerns about data collection and implications for human dignity. This article explores the digital afterlife industry and the ethical and legal challenges it presents, including considerations of health, privacy, and property law in Canada.

Commentary, Solutions | Ella Lim

All about Bill C-70, the Canadian government’s attempt to counter foreign interference

Although foreign interference did not impact the results of Canadian elections in 2019 and 2021, it ‘stained’ the electoral process, undermining public confidence in Canada’s democratic institutions. What measures does Bill C-70 (“An Act respecting countering foreign interference”) take to bolster Canadian confidence in elections? And how might it apply to the use of AI in our elections?

Commentary | David Baldridge, Beth Coleman, and Alicia Demanuele

The terminology of AI regulation: Ensuring “safety” and building “trust”

We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “safety” and “trust”? Are advanced artificial intelligence (AI) systems a threat to our sense of safety and security? Can we trust AI systems to perform increasingly critical roles in society? Precise and useful understandings of these terms across diverse contexts are a crucial step toward effective policymaking.

Commentary | David Baldridge, Michael Beauvais, Alicia Demanuele, and Leslie Regan Shade

Five key elements of Canada’s new Online Harms Act

Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. Below, Schwartz Reisman researchers take us on a tour through key aspects of the bill, including its taxonomy of harms, new expectations and requirements for social media platforms, and new kinds of protections for children and youth online.

Commentary | David Baldridge, Beth Coleman, and Jamie Amarat Sandhu

The terminology of AI regulation: Preventing “harm” and mitigating “risk”

We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “harm,” “risk,” “safety,” and “trust”? SRI experts take us through the implications of the words we use in the rules we create.

Commentary | David Baldridge and Jamie Amarat Sandhu

Redefining AI governance: A global push for safer technology

SRI policy researchers David Baldridge and Jamie Amarat Sandhu trace the landscape of recent global AI safety initiatives—from Bletchley to Hiroshima and beyond—to see how governments and public policy experts are envisioning new ways of governing AI as rapid advancements in the technology continue to present challenges to policymakers.

Commentary | David Baldridge and Jamie Amarat Sandhu

Uncovering gaps in Canada’s Voluntary Code of Conduct for generative AI

Want to learn more about Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems? SRI policy researchers David Baldridge and Jamie Amarat Sandhu examine the Code’s characteristics and shortcomings following its release, which came after a summer of significant developments in generative AI.

Commentary | Davide Gentile

Exploring user interaction challenges with large language models

AI assistants and large language models have become part of our daily lives. But what shapes this interaction between person and machine? SRI graduate affiliate Davide Gentile writes about the virtues and pitfalls of user experience, highlighting ways in which human-computer interaction could be made clearer, more efficient, more trustworthy, and a better experience overall, for everyone.
