What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime
In April 2024, the Government of Canada pledged $2.4 billion toward AI in its annual budget, including $50 million for a new AI Safety Institute. What scope, expertise, and authority will the new institute need to achieve its full potential? We examine the early approaches of similar institutes in the UK, US, and EU.
From mourning to machine: Griefbots, human dignity, and AI regulation
Griefbots are artificial intelligence programs designed to mimic deceased individuals using their digital footprints. They raise significant concerns about data collection and implications for human dignity. This article explores the digital afterlife industry and the ethical and legal challenges it presents, including a consideration of health, privacy, and property laws in Canada.
All about Bill C-70, the Canadian government’s attempt to counter foreign interference
Although foreign interference did not affect the results of Canada’s 2019 and 2021 federal elections, it “stained” the electoral process, undermining public confidence in Canada’s democratic institutions. What measures does Bill C-70 (“An Act respecting countering foreign interference”) take to bolster Canadian confidence in elections? And how might it apply to the use of AI in our elections?
The terminology of AI regulation: Ensuring “safety” and building “trust”
We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “safety” and “trust”? Are advanced artificial intelligence (AI) systems a threat to our sense of safety and security? Can we trust AI systems to perform increasingly critical roles in society? Precise and useful understandings of these terms across diverse contexts are a crucial step toward effective policymaking.
Five key elements of Canada’s new Online Harms Act
Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. Below, Schwartz Reisman researchers take us on a tour through key aspects of the bill, including its taxonomy of harms, new expectations and requirements for social media platforms, and new kinds of protections for children and youth online.
The terminology of AI regulation: Preventing “harm” and mitigating “risk”
We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “harm,” “risk,” “safety,” and “trust”? SRI experts take us through the implications of the words we use in the rules we create.
What are LLMs and generative AI? A beginner’s guide to the technology turning heads
What is generative AI? How do large language models work? SRI Policy Researcher Jamie Sandhu lays the groundwork for understanding LLMs and other generative AI tools as they increasingly permeate our daily interactions.
Redefining AI governance: A global push for safer technology
SRI policy researchers David Baldridge and Jamie Amarat Sandhu trace the landscape of recent global AI safety initiatives, from Bletchley to Hiroshima and beyond, to see how governments and public policy experts are envisioning new ways of governing AI as rapid advances in the technology continue to challenge policymakers.
To guarantee our rights, Canada’s privacy legislation must protect our biometric data
Amid the broad social impacts of data today, we must pay particular attention to the risks posed by facial recognition technology, writes Daniel Konikoff, who argues that Bill C-27’s failure to classify biometric data as sensitive suggests the bill has an unstable grasp on our tricky technological present.
Uncovering gaps in Canada’s Voluntary Code of Conduct for generative AI
Want to learn more about Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems? SRI Policy Researchers David Baldridge and Jamie Sandhu examine the Code’s characteristics and shortcomings following its recent release, after a summer of significant developments in generative AI.
Regulatory gaps and democratic oversight: On AI and self-regulation
There are economic and political incentives for AI companies to create their own set of rules. Alyssa Wong explores the benefits and drawbacks of self-regulation in the tech industry, and highlights the ultimate need for democratic oversight to ensure accountability, transparency, and consideration of public interests.
Exploring user interaction challenges with large language models
We’re using AI assistants and large language models everywhere in our daily lives. But what constitutes this interaction between person and machine? SRI Graduate Affiliate Davide Gentile writes about the virtues and pitfalls of user experience, highlighting ways in which human-computer interaction could be made clearer, more efficient, more trustworthy, and a better experience for everyone.