SRI/PEARL joint project explores global public opinion on AI with contributions to 2024 Stanford AI Index
The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has released its seventh annual AI Index, and this year’s edition includes data from a joint project by SRI and the Policy, Elections and Representation Lab (PEARL) at the Munk School of Global Affairs & Public Policy. The Stanford Index is considered one of the most trusted and widely read indices on the state of AI in the world.
Global group of experts advises on concrete steps towards a robust AI certification ecosystem
When new technologies enter the world, they must earn trust. How can we create trust in artificial intelligence? A new report from the Certification Working Group (CWG) explores the necessary elements of an ecosystem that can deliver effective certification to support AI that is responsible, trustworthy, ethical, and fair.
SRI’s annual conference, Absolutely Interdisciplinary, returns in May 2024
The Schwartz Reisman Institute’s annual academic conference will take place May 6–8, 2024, with select sessions held in the newly completed Schwartz Reisman Innovation Campus in the heart of Toronto’s Discovery District. Speakers include Peter Railton, Harper Reed, Huili Chen, Ray Perrault, Gillian Hadfield, and more.
Five key elements of Canada’s new Online Harms Act
Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. Below, Schwartz Reisman researchers take us on a tour through key aspects of the bill, including its taxonomy of harms, new expectations and requirements for social media platforms, and new kinds of protections for children and youth online.
The terminology of AI regulation: Preventing “harm” and mitigating “risk”
We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “harm,” “risk,” “safety,” and “trust”? SRI experts take us through the implications of the words we use in the rules we create.
Rethinking AI regulation: CIFAR policy brief explores paths forward for regulating in a new world
What’s missing from current efforts to regulate artificial intelligence? SRI researchers author a new CIFAR AI Insights Policy Brief on bracing for large-scale economic, social, and legal change—and how policymakers can adapt governance infrastructure to an economy transformed by AI.
Redefining AI governance: A global push for safer technology
SRI policy researchers David Baldridge and Jamie Amarat Sandhu trace the landscape of recent global AI safety initiatives—from Bletchley to Hiroshima and beyond—to see how governments and public policy experts are envisioning new ways of governing AI as rapid advancements in the technology continue to present challenges to policymakers.
Geoffrey Hinton fields questions from scholars, students during academic talk on responsible AI
U of T University Professor emeritus and “godfather of AI” Geoffrey Hinton delivered a lecture at Convocation Hall discussing whether large language models understand what they are doing and the existential risks posed by unfettered development of the technology he helped create.
To guarantee our rights, Canada’s privacy legislation must protect our biometric data
Amidst today’s broad social impacts of data, we must pay specific attention to the risks posed by facial recognition technology, writes Daniel Konikoff, who argues that Bill C-27’s failure to classify biometric data as sensitive suggests that the bill has an unstable grasp on our tricky technological present.
Gillian Hadfield named one of seven AI2050 senior fellows by Schmidt Futures
Seven new senior fellows, including SRI Director Gillian Hadfield, have been selected by Schmidt Futures to tackle hard problems in artificial intelligence through multidisciplinary research, with up to US$7 million in support.
Unlocking AI’s insights: SRI’s “Artificial Intelligence is Here” course goes public
What do we need to do to ensure that artificial intelligence is built for public benefit? A recent course developed by the Schwartz Reisman Institute explains what AI is, where it’s headed, and what the public needs to know about it.
Uncovering gaps in Canada’s Voluntary Code of Conduct for generative AI
Want to learn more about Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems? SRI policy researchers David Baldridge and Jamie Sandhu comment on the Code’s characteristics and shortcomings following its recent release after a summer of significant developments in generative AI.