Redefining AI governance: A global push for safer technology


Rapid advancements in AI have presented significant challenges to policymakers working to safeguard society. SRI policy researchers David Baldridge and Jamie Amarat Sandhu trace the landscape of recent global AI safety initiatives—from Bletchley to Hiroshima and beyond—to see how governments and public policy experts are envisioning new ways of governing AI.


One year ago, the release of OpenAI’s ChatGPT ushered in a new era of artificial intelligence (AI), challenging policymakers to safeguard society in a rapidly shifting landscape. Legislative plans already in the works when ChatGPT burst onto the scene, like the EU AI Act and Canada’s Artificial Intelligence and Data Act (AIDA), faced new upheavals. Expert commentators, including researchers from the Schwartz Reisman Institute, called for international policy synergy and a new way of governing AI.

In response, global actors and organizations have introduced new initiatives and commitments to promote safe and secure AI use. Though concrete policy progress in this area remains distant, a renewed focus on so-called “AI safety” is emerging. Once relegated to specialized areas of technical research, AI safety has now entered the popular lexicon of stakeholders working on AI’s impact on society, from policymakers and legislators to ethicists, educators, sociologists, and more.

In what follows, we trace recent developments in and references to AI safety.  

Want to know more about AI safety? Read “Key Concepts in AI Safety: An Overview” from Georgetown University’s Center for Security and Emerging Technology (CSET).

AI safety in Canada and the U.S.

Public policy on AI in North America has undergone significant developments, shifting from a previously ambivalent stance to a more committed approach to responsible and secure AI. For example, U.S. President Joe Biden’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023, addresses numerous AI policy issues with a focus on AI safety. In parallel, the Canadian government’s AIDA has been the subject of proposed amendments designed to sharpen its focus on AI safety.

Both policies deploy regulatory tools designed to balance public safety with the benefits of AI innovation. The U.S. Order, in particular, includes testing standards and requirements, acknowledges the diverse risks AI poses (from cybersecurity threats to weapons of mass destruction), and aims to control potential fraud and deception by generative AI technologies. Crucially, it mandates that those training AI models notify the government during training, and that those stress-testing AI models to expose their weaknesses share the results of their tests.

Canada’s legislative response to AI, through AIDA, is currently under review and is expected to undergo amendments that also align with the broader AI safety theme. Although not explicitly labeled as addressing AI safety, the proposed amendments fall within its scope. Among other requirements, the Canadian government has indicated that the amendments will likely mandate risk and impact assessments, rigorous testing, and clear identification of AI-generated content for certain AI systems.

These policy efforts in the U.S. and Canada illustrate the expanding scope of AI governance aimed at fostering safer and more secure AI use. They also lend substance to the emerging term “AI safety,” which is gaining prominence in global conversations.

Global AI governance and safety

In the latter part of this year, an unmistakably consistent narrative focused on AI safety has also emerged in international collaboration on AI governance.

Notable examples include the G7 Leaders’ Statement on the Hiroshima AI Process (October 30, 2023), the International Dialogues on AI Safety at Ditchley Park (October 31, 2023), and the AI Safety Summit at Bletchley Park (November 1-2, 2023). These events, while unique in their approaches, collectively signal a global movement toward developing and deploying technological innovation in ways that ensure public safety in the age of AI.

These gatherings each offered distinct perspectives on AI safety, but often mirrored the work of members of the SRI research community. 

For example, the Hiroshima process emphasized creating an AI ecosystem that is safe, secure, and trustworthy, and that crucially extends beyond the developed world, aiming to ensure AI’s benefits are realized globally, including in developing and emerging economies. The outcomes of the Hiroshima meetings highlight the need to close digital divides and achieve digital inclusion, drawing parallels to recent work by SRI Faculty Affiliate Wendy H. Wong on improving digital literacy and on recognizing that the challenges facing us are not individual but collective.

The Ditchley Park meetings likewise showcased what can be achieved through a rich global exchange of ideas among scientists, policymakers, and industry leaders. Discussions centered on advancing AI in a manner that prioritizes human-centric values and ethical standards, and on the need to address uncontrolled AI development. Notably, a proposal by SRI Director and Chair Gillian Hadfield and collaborators to create mandatory national registries for AI models was adopted as a recommendation by stakeholders. Other recommendations called for leading AI developers to allocate a portion of their budgets to AI safety efforts, and for government agencies to increase funding for academic and non-profit research on AI safety and governance. The Ditchley dialogues underscored the importance of interdisciplinary research in AI safety, stressing the need for AI systems that prioritize human wellbeing and societal prosperity.

An especially pivotal moment in the global landscape of AI governance was the Bletchley summit, which brought together leading AI experts, industry leaders, and representatives of 28 countries, including China, as well as the EU, to focus on actionable strategies to mitigate AI’s risks and ensure safety for all. The summit highlighted the potential dangers of regulatory inaction and the challenges posed by so-called “frontier AI,” defined by Hadfield and collaborators as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.”

This category includes highly capable AI models that could exceed current technological benchmarks, and it echoes concerns voiced in work by SRI Graduate Affiliate Noam Kolt and in interviews with Hadfield. In response, the Bletchley summit emphasized the importance of innovative regulatory frameworks, such as the “regulatory markets” proposed by Hadfield and Jack Clark. It also called for a more concerted effort toward international regulatory regimes, as suggested by SRI researchers and others, to ensure compliance with safety standards in the development and deployment of advanced AI models.

Together, these global meetings and conversations underscore the increasing importance of a cohesive international AI policy landscape that places safety at its core. They suggest a roadmap for future AI policy and for the development of AI systems that are both innovative and safe. The ideal result would be a more globally integrated and comprehensive approach to AI governance, akin to the framework suggested by Hadfield and colleagues: an international regulatory regime for advanced AI development and deployment that safeguards against collective risks while promoting shared progress.

While this goal seemed remote until recently, it is becoming more realistic as its urgency grows.

Where do we go from here?

International coordination and international agreement are two distinct phenomena: coordination involves practical collaboration among nations on shared objectives, while agreement entails formal consensus on governing mechanisms. Nora von Ingersleben-Seip underscores this distinction, noting that countries have coordinated successfully in adopting AI technical standards but have notably failed to agree on ethical standards, owing to divergent values among countries, a trend likely to persist without innovative approaches. Crucially, this disconnect could also undermine AI safety.

After a year of unprecedented development in AI capabilities and concurrent international efforts to address their pressing impacts and risks, 2023 appears to be closing with AI safety at the forefront of global concerns. Many scholars contend that, going forward, clear and concrete AI safety standards will likely be the main substantive requirement of any regulatory approach.

Nations are progressing independently in this area, albeit along different trajectories, but the question remains: What does 2024 hold for AI safety? Can the coordination recently achieved be translated into some form of agreement, ensuring a future of safe AI for all?


About the authors

David Baldridge is a policy researcher at the Schwartz Reisman Institute for Technology and Society. A recent graduate of the JD program at the University of Toronto’s Faculty of Law, he has previously worked for the Canadian Civil Liberties Association and the David Asper Centre for Constitutional Rights. His interests include the constitutional dimensions of surveillance and AI regulation, as well as the political economy of privacy and information governance.

Jamie Amarat Sandhu is a policy researcher at the Schwartz Reisman Institute for Technology and Society. He specializes in the governance of emerging technologies and global affairs, with a track record of providing strategic guidance to decision-makers and addressing cross-sector socio-economic challenges arising from advances in science and technology at both the international and domestic levels. He holds an MSc in Politics and Technology from the Technical University of Munich’s School of Social Science and Technology and a BA in International Relations from the University of British Columbia.

