SRI appoints Bruce Schneier as visiting senior policy fellow
Global security expert and author Bruce Schneier—known for reshaping how the world understands security, privacy, and trust—has joined the University of Toronto’s Munk School of Global Affairs & Public Policy and the Schwartz Reisman Institute for Technology and Society (SRI) as a visiting senior policy fellow for 2025–26.
AI companions: Regulating the next wave of digital harms
From AI chatbots marketed as digital partners to voice assistants designed for intimacy, these systems promise connection while raising urgent questions about privacy, manipulation, and digital addiction.
Data privacy and governance for Canadian innovation: SRI responds to Canada’s implementation of global privacy certifications
The Schwartz Reisman Institute for Technology and Society responds to Canada’s consultation on global privacy certifications, outlining how CBPR and PRP can strengthen data protection, build public trust, and drive innovation in the digital economy.
Democracy rewired: SRI essay series explores safeguarding democratic values in the age of AI
In a new essay series, the policy team at the Schwartz Reisman Institute for Technology and Society examines AI’s impact on the values underpinning democratic societies and governance. The series explores how AI, if left unchecked, may affect democracy, offering an opportunity both to reaffirm democratic values and to critically assess the role of AI governance and regulation.
Future Votes: Safeguarding elections in the digital age
In October 2024, SRI co-hosted a half-day event with The Dais and Rogers Cybersecure Catalyst to address election integrity, cybersecurity, and disinformation in the age of AI. The result was the Future Votes report, which captures key insights and practical recommendations for policymakers on protecting democratic elections.
What’s Next After AIDA?
In the wake of AIDA’s death and with a federal election on the horizon, a key question has emerged: what’s next for Canada after AIDA?
Unequal outcomes: Tackling bias in clinical AI models
A new study by SRI Graduate Affiliate Michael Colacci sheds light on the frequency of biased outcomes when machine learning algorithms are used in healthcare contexts, advocating for more comprehensive and standardized approaches to evaluating bias in clinical AI.
Shedding some light on the SRI summer research assistant program
For the third consecutive year, the Schwartz Reisman Institute for Technology and Society opened its doors to a select group of Juris Doctor (JD) students through its summer Research Assistant (RA) program. Learn more about this year's research projects and how our RA partnership with the Future of Law Lab has opened new insights and experiences for students interested in AI governance.
What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime
In April 2024, the Government of Canada pledged $2.4bn toward AI in its annual budget, including $50m for a new AI Safety Institute. What scope, expertise, and authority will the new institute need to achieve its full potential? We examine the early approaches of similar institutes in the UK, US, and EU.
From mourning to machine: Griefbots, human dignity, and AI regulation
Griefbots are artificial intelligence programs designed to mimic deceased individuals using their digital footprints, raising significant concerns about data collection and implications for human dignity. This article explores the digital afterlife industry and the ethical and legal challenges it presents, including a consideration of health, privacy, and property laws in Canada.
The smart way to run smart cities: New report explores data governance and trusted data sharing in Toronto
A new report from SRI Research Lead Beth Coleman, SRI Graduate Fellow Madison Mackley, and collaborators explores questions such as: How can we facilitate data-sharing across divisions to improve public policy and service delivery? What are the risks of data-sharing, how can we mitigate those risks, and what are the potential benefits of doing it right?
Harming virtuously? Value alignment for harmful AI
The field of AI safety emphasizes that systems be aligned with human values, often stating that AI should “do no harm.” But lethal autonomous weapons systems, such as armed drones, are already harming people. How can we address the reality of purposely harmful AI systems? SRI Graduate Fellow Michael Zhang writes about a panel of experts exploring this topic.