A possible future for expanding cognition: Ted Chiang shares thoughts on being a cyborg
Acclaimed science fiction author Ted Chiang reflects on questions such as: What is the relationship between technology and human cognition? How have writing and language been deployed as technologies throughout human history? And what does the future of computers hold? Will it give rise to a new kind of cognitive technology?
Automated decision-making in courts of law: A conversation between Nathalie Smuha and Abdi Aidid
Can algorithmic decision-making help clear backlogs in the courts, and is this a justified use of the technology? Do automated systems make “better” decisions than human judges, and what do we mean by “better”? Should legal professionals be involved in the design of automated systems, and if so, how? Nathalie Smuha and Abdi Aidid discuss these and related questions.
Five key elements of Canada’s new Online Harms Act
Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. Below, Schwartz Reisman researchers take us on a tour through key aspects of the bill, including its taxonomy of harms, new expectations and requirements for social media platforms, and new kinds of protections for children and youth online.
The terminology of AI regulation: Preventing “harm” and mitigating “risk”
We hear certain terminology used frequently in efforts to regulate artificial intelligence. But what do we mean when we talk about “harm,” “risk,” “safety,” and “trust”? SRI experts take us through the implications of the words we use in the rules we create.
What are LLMs and generative AI? A beginner’s guide to the technology turning heads
What is generative AI? How do large language models work? SRI Policy Researcher Jamie Sandhu lays the groundwork for understanding LLMs and other generative AI tools as they increasingly permeate our daily interactions.
A new generation reflects on data and human rights
Undergraduate students at the University of Toronto reflect on what they learned from attending a book launch event on data and human rights—and how they see the future unfolding in the digital age.
Redefining AI governance: A global push for safer technology
SRI policy researchers David Baldridge and Jamie Amarat Sandhu trace the landscape of recent global AI safety initiatives—from Bletchley to Hiroshima and beyond—to see how governments and public policy experts are envisioning new ways of governing AI as rapid advancements in the technology continue to present challenges to policymakers.
To guarantee our rights, Canada’s privacy legislation must protect our biometric data
Amid the broad social impacts of data today, we must pay particular attention to the risks posed by facial recognition technology, writes Daniel Konikoff, who argues that Bill C-27’s failure to classify biometric data as sensitive suggests the bill has a tenuous grasp on our tricky technological present.
Uncovering gaps in Canada’s Voluntary Code of Conduct for generative AI
Want to learn more about Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems? SRI Policy Researchers David Baldridge and Jamie Sandhu assess the Code’s characteristics and shortcomings following its release after a summer of significant developments in generative AI.
Regulatory gaps and democratic oversight: On AI and self-regulation
There are economic and political incentives for AI companies to create their own set of rules. Alyssa Wong explores the benefits and drawbacks of self-regulation in the tech industry, and highlights the ultimate need for democratic oversight to ensure accountability, transparency, and consideration of public interests.
Exploring user interaction challenges with large language models
We use AI assistants and large language models throughout our daily lives. But what shapes the interaction between person and machine? SRI Graduate Affiliate Davide Gentile writes about the virtues and pitfalls of user experience, highlighting ways in which human-computer interaction could be made clearer, more efficient, more trustworthy, and, overall, a better experience for everyone.
Why Geoffrey Hinton is worried about the future of AI
University of Toronto Professor Emeritus Geoffrey Hinton—the computer scientist known as “the Godfather of AI”—explains why, after a lifetime spent developing a type of artificial intelligence known as deep learning, he is suddenly warning about existential threats to humanity.