Can a market-based regulatory framework help govern AI? New report weighs in


In April 2024, the Schwartz Reisman Institute for Technology and Society (SRI) hosted a workshop that brought together 33 high-level experts to explore the viability of regulatory markets—an AI governance proposal put forward by former SRI director Gillian K. Hadfield and co-author Jack Clark. Over the course of the workshop, participants identified key challenges and worked through practical steps to move from theory to operationalization, laying the groundwork for a clear roadmap toward future governance. Their findings are captured in a new report (PDF) published today by SRI.


Artificial intelligence (AI) technologies are becoming widely accessible and embedded in daily life. They raise well-established concerns such as bias and privacy, alongside emerging risks tied to national security and social stability. As these technologies advance, pressure to develop more effective governance approaches is likely to grow.

However, traditional regulatory frameworks and government oversight struggle to keep pace with the speed and complexity of AI. This often leaves a regulatory gap in which the private sector attempts to self-regulate, despite lacking clear guidance on how to adequately ensure public safety. Further, although the private sector has greater financial and human capital at its disposal, it is not bound by the same commitments to serve the public good that governments are.

In short, the pace and scale of AI advancement demand a more adaptive, forward-looking approach to governance, one that can combine the strengths of both the public and private sectors.

Regulatory markets, a market-based framework for AI governance proposed by Gillian K. Hadfield, inaugural director of SRI, and Jack Clark, co-founder and head of policy at Anthropic, offer a promising solution.

What are regulatory markets?

The regulatory markets model involves three parties: governments license and oversee private regulatory service providers, who in turn monitor the companies developing and deploying AI systems. The goal is to create a competitive, scalable system that offers meaningful oversight without stifling innovation.

This model aims to address the core deficits highlighted above: a technical deficit, in which public regulators may lack the expertise to assess advanced systems, and a democratic deficit, in which major decisions about societal impact remain concentrated in private firms that have no formal obligation to protect the public interest or ensure accountability to society at large.

Addressing both deficits is essential to building public trust, mitigating risks, and ensuring that AI development aligns with broader societal values.

Co-designing regulatory markets workshop

On April 15, 2024, SRI hosted a workshop that brought together 33 experts from a range of sectors across North America to explore what it will take to operationalize regulatory markets for AI governance. Through design-thinking exercises, foresight methods, and open discussion, participants examined key opportunities and challenges shaping this approach.

The resulting report, Co-Designing Regulatory Markets: Road-mapping the Future of AI Governance, translates workshop insights into concrete milestones, strategies, and recommendations. 

It highlights that while the foundations of regulatory markets are in place, three areas need further development to move toward implementation:

  1. Government capacity. Governments need clear goals for safe AI, stronger technical expertise, and more responsive processes to oversee private regulatory service providers (RSPs). 

  2. Market infrastructure. Licensing systems, insurance mechanisms, and clear guidance are critical to authorize RSPs, manage risk, and build public trust.

  3. Implementation strategy. Pilot projects can test the approach in real-world settings and help refine licensing and evaluation tools.

To accelerate progress, the report provides practical steps to address these challenges, highlights opportunities to build on existing infrastructure and mechanisms, and emphasizes the need for ongoing collaboration across the AI ecosystem.

Increasing traction at home and abroad 

The concept of regulatory markets is no longer just theoretical.

In Canada, key players are independently applying core elements of the regulatory markets approach. TELUS Communications, in partnership with Armilla AI, conducted a bilingual, third-party evaluation of its generative AI customer service tool—assessing performance, robustness, bias, and fairness. The project highlights Canada’s growing capacity to carry out functions essential to a well-functioning regulatory market.

Yet turning these ideas into reality requires more than regulatory vision. It calls for bold, coordinated steps toward implementation. The question now is not whether this model can work, but how to make it work in the real world—a question robustly addressed in this latest report.

Ideological arguments aside, markets are good for solving problems, and when properly regulated, they are effective at attracting resources where they’re needed. … We need to be active and smart about how we regulate AI, and how we attract innovation and investment into its regulation. I believe [regulatory markets] is the model that puts us on the path to achieving that.
— Gillian K. Hadfield

Gillian Hadfield presenting at the SRI workshop “Co-designing regulatory markets” on April 15, 2024.
(Photo by Lisa Sakulensky)

This perspective reflects the spirit of the workshop and the report itself: a shift from traditional regulation toward practical pathways for building regulatory markets that can meet the moment and shape a safer, more sustainable future for AI governance.

 

About the author

Jamie A. Sandhu is a policy researcher at the Schwartz Reisman Institute for Technology and Society at the University of Toronto. His experience spans the United Nations, various European organizations, and the Government of Canada, and he specializes in geopolitics, international security, technology governance, and the use of technology to improve governance processes. Jamie works to shape policy and regulation that balances industry needs, institutional integrity, socioeconomic mechanisms, and societal well-being, and has a track record of guiding decision-makers through cross-sector socioeconomic challenges arising from technological advancement and of bridging knowledge gaps among stakeholders to build shared goals and common understanding. He holds a BA in international relations from the University of British Columbia and an MSc in politics and technology from the Technical University of Munich. His current research focuses on international cooperation on AI and on advancing AI safety through a socio-technical approach to AI governance.


 