Rethinking AI regulation: CIFAR policy brief explores paths forward for regulating in a new world

 

What’s missing from current efforts to regulate artificial intelligence? SRI researchers author a new policy brief for the CIFAR AI Insights Policy Briefs series on bracing for large-scale economic, social, and legal change—and how policymakers can adapt governance infrastructure to an economy transformed by AI.


As artificial intelligence (AI) continues to advance rapidly, legislators and public policy experts around the world are scrambling to regulate this powerful technology.

But what is the focus of current regulatory efforts—and what is missing? 

Published as part of the CIFAR AI Insights Policy Briefs series, a new report by Schwartz Reisman Institute (SRI) researchers examines the current state of AI regulation and sheds light on key issues it overlooks.

SRI Director and Chair Gillian Hadfield, Policy Researcher Jamie Amarat Sandhu, and Graduate Affiliate Noam Kolt are the co-authors of “Regulatory Transformation in the Age of AI.” The brief begins by noting that current efforts to regulate AI focus primarily on reducing harms and mitigating risks, such as algorithmic bias, misinformation, and accidents caused by autonomous vehicles.

“These are important efforts,” the authors explain, “but their focus is incomplete and obscures the bigger picture.”  

Because AI is what economists often call a “general-purpose technology” (that is, a technology that can affect an entire economy, often at a national or global level), the “bigger picture” here is that the world must brace for nothing short of unprecedented, large-scale economic, social, and legal change.  

What the authors call the current “harms paradigm” of regulating AI is necessary but incomplete. 

Drawing on Canadian case studies in healthcare, financial services, and nuclear energy, the policy brief illustrates how AI could upend existing regulatory systems and challenge the conventional targets and tools of regulation. 

To help policymakers navigate these challenges and adapt governance infrastructure to an economy transformed by AI, the brief’s authors propose a new tool—something they call “regulatory impacts analysis” (RIA). RIA is a novel framework and procedure for analyzing the impact of AI on regulatory systems. It works by taking policymakers through a questionnaire that helps them assess the likely impact of AI on regulation, and then offers practical guidance and tools for adapting governance institutions to the new and changing conditions arising from AI. 

The “harms paradigm”—what’s missing?  

Globally, many efforts to regulate AI focus on harms from the technology. 

The EU AI Act, Canada’s Artificial Intelligence and Data Act (AIDA), and the US’s National Institute of Standards and Technology (NIST) AI Risk Management Framework all largely focus on the harms of AI.       

The brief’s authors point out that these initiatives, while important and wide-ranging, bear out earlier critiques, voiced, for example, in Hadfield and Jack Clark’s work on regulatory markets. As policymakers entrench the harms paradigm, they “miss the risk that AI disrupts our ability to achieve regulatory goals across other products and sectors,” say the authors.

Put another way: AI itself may be affecting our existing modes of regulation. As a general-purpose technology, it may well be disrupting conventional regulatory processes and rendering them inefficient or obsolete. This, in turn, would make it difficult, or even impossible, to regulate things other than AI.

For example, our current car safety regulations were not developed to address the advent of autonomous vehicles. In this way, the introduction of AI into vehicles has made some existing car safety regulations inefficient or irrelevant. 

So, what can regulators do differently? How can they adapt to the far-reaching changes introduced by AI? 

Through three case studies in healthcare, financial services, and nuclear energy, the brief’s authors illustrate some of the ways in which the targets and tools of regulation could be reinvented for a world increasingly shaped by AI.

Case studies in regulatory challenges 

In healthcare, regulation has typically focused on human actors and organizations, including doctors, nurses, pharmacists, and health institutions.  

But, as the brief’s authors highlight, tools like Google's Med-PaLM “arguably shift the regulatory focus from doctors and traditional medical processes to software engineers and AI products and services.”  

When regulatory targets are human professionals, educational and licensing requirements are appropriate regulatory tools. Now that we need to govern AI systems and their developers, these conventional regulatory tools are no longer sufficient. 

A similar challenge faces regulations for medical devices, many of which are discrete physical devices, such as an ultrasound machine. But AI systems are different: they are not necessarily standalone devices or products. As the brief’s authors say, AI systems are often “dynamic tools that are highly sensitive to the contexts in which they are deployed.”

The financial sector raises similar concerns. Ordinarily, the regulation of financial services focuses on governing humans and human activity such as fraud, market manipulation, and unfair practices. But AI changes the picture. Current regulation cannot simply be transferred across to AI. For example, as algorithmic trading tools account for a growing fraction of market activity, regulatory tools focused on licensing human professionals and institutions will miss the mark.  

What is “regulatory impacts analysis”? 

Critics note that to date there is no accepted framework or procedure for evaluating the impact of AI on regulatory regimes or systems. See, for example, work by Inioluwa Deborah Raji and collaborators on auditing algorithmic auditors and AI’s sometimes unreliable functionality as a policy challenge. 

The CIFAR policy brief’s authors propose regulatory impacts analysis (RIA) to assist policymakers in understanding and anticipating the impact of AI on regulation. Specifically, RIA is designed to (1) assess the likely impact of AI on the targets and tools of regulation, and (2) support policymakers in adapting governance institutions to the new and changing conditions arising from AI. 

The authors offer a sample questionnaire asking policymakers to identify key regulatory targets and tools in their domain and assess potential gaps as AI is “deployed more widely or relied upon to a greater extent.” 

To conclude, the authors show how RIA could operate in practice by examining a concrete example: the regulation of nuclear energy and materials in Canada. Using RIA, the Canadian Nuclear Safety Commission could better navigate the sweeping changes heralded by AI. 

Disruption and adaptation 

As AI plays an ever-growing role in the global economy, the brief’s authors warn that regulators cannot simply “take a leaf out of their traditional playbook.”  

It’s increasingly apparent that AI presents complex questions that go beyond the prevailing harms paradigm. Tools like RIA, supported by flexible and adaptive regulators, can assist society in harnessing the tremendous benefits of AI, tackling the associated risks, and bracing for the technology’s transformative impact.  
