What might the Canadian AI Safety Institute look like? Reflections on an emerging national AI safety regime

 

Examining the early approaches of AI safety institutes in the UK, the US, and the EU can help us consider what scope, expertise, and authority the recently announced Canadian AI Safety Institute will likely need in order to achieve its full potential.


In April of 2024, the Government of Canada pledged $2.4 billion toward artificial intelligence (AI) in its annual budget, including $50 million earmarked for a new AI Safety Institute.

Institutes dedicated to advancing AI safety are increasingly being established in jurisdictions around the world. These AI safety institutes (AISIs) aim to respond to both the current and future risks associated with advances in AI, an ongoing challenge for regulators, policymakers, and developers alike. However, the specifics of the Canadian AI Safety Institute are still in development.

This post explores potential futures for the anticipated institute by examining the early approaches of its counterparts in the UK, the US, and the EU. Drawing on lessons from these early AISIs, it considers what scope, expertise, and authority the Canadian AISI will likely need in order to achieve its full potential.


The emergence of AI safety institutes

Over the past year, there has been a growing global movement towards AI safety, with governments recognizing its importance and launching a variety of AI safety initiatives. The UK and the US were the first countries to establish AI safety institutes, in fall 2023, announcing them at the first global AI Safety Summit at Bletchley Park in the UK. For the US, this announcement came right on the heels of President Joe Biden’s Executive Order on AI.

These developments were soon accompanied by larger multilateral engagements, such as the second global AI Safety Summit in Seoul in May 2024, where several countries as well as the EU signed the Seoul Statement of Intent, committing to an international network of AISIs. This period also saw the emergence of the EU’s AI Office, which includes a safety unit dedicated to fulfilling the functions of an AISI.

The UK AISI is notable for its comprehensive and interdisciplinary approach, integrating technical, social, and policy perspectives in its research on the potential harms posed by AI systems. The institute has made significant strides in international engagement through partnerships, funding opportunities, and knowledge exchange. It steers clear of direct involvement in regulation and compliance, instead focusing on building and running evaluations of publicly available AI models. It shares its insights through initiatives such as its open-source framework for large language model evaluations.

The US AISI, although not as far along as its UK counterpart, has a clearly defined direction. The launch of its industry-wide consortium and the recent release of guidance documentation and open-source tools like Dioptra, a software test platform, demonstrate its commitment to developing AI safety research fundamentals and assessing the capabilities of AI systems that may cause harm. The institute is likely to follow a similar path to the UK's, as both countries have taken a similar approach to AI regulation thus far. Unlike the UK, however, the US institute’s ability to accomplish its goals has been put into question as a result of resource constraints.

While both the UK and US AISIs are unlikely to involve any enforcement mechanisms (as both countries have thus far chosen not to enact federal AI regulation), the EU AI Office has been tasked with shaping global standards on model evaluations as well as monitoring, supervising, and enforcing the requirements of the recently enacted EU AI Act. With its greater authority, the EU AI Office is expected to investigate incidents of non-compliance; require model providers to supply documentation on training, testing processes, and evaluation results; and even restrict, recall, or withdraw models from the market if there is a serious and substantiated concern of systemic risk. Despite its strong regulatory role, the EU AI Office has also outlined plans to take on research functions similar to its UK and US counterparts.

To date, the roles of these AISIs seem to be in line with the regulatory approaches of their respective countries. Since Canada’s proposed AI legislation, the Artificial Intelligence and Data Act (AIDA), follows an impact-based approach similar to that of the EU, it might follow in the EU AI Office’s footsteps by combining research and regulatory functions. 


How can the Canadian AI Safety Institute ensure it delivers on its mandate?

I) Incentivize participation

If the Canadian AISI is to conduct model evaluations, it is necessary for large AI companies to collaborate with the institute and share their models. Here, Canada can learn from the early efforts of other AI safety institutes. For example, the UK AISI has recently come under fire because its voluntary approach to securing companies’ collaboration has led to very limited buy-in. If the Canadian AISI is to rely on voluntary commitments, it may want to consider providing incentives for private enterprises to participate in model evaluations. Canada could also learn from and adapt some of the UK’s approaches, such as the Fast Grants Programme.


II) Strategic partnerships

If the expectation is for the Canadian AISI to do more than convey findings for policy and regulatory development, it would benefit from greater authority and reach than it would be afforded if integrated into a single governmental department. A phased, maturity-based approach might involve building out a network of strategic collaborations between the Canadian AISI and various federal and provincial government departments. For example, the Canadian AISI may wish to develop a partnership with the AI and Data Governance Standardization Collaborative under the Standards Council of Canada, taking inspiration from the US AISI, which is housed under the National Institute of Standards and Technology (NIST) and can draw on existing processes and expertise to develop trusted standards for AI.
Another benefit of far-reaching partnerships with different government departments would be the Canadian AISI's ability to align on critical areas of AI policy where accountabilities overlap, and to avoid the data silos that exist within the Canadian government. Individual departments and agencies possess large amounts of data, yet it is extremely difficult for that data to be shared with or accessed by others. Given the importance of intersectoral expertise and collaboration for the Canadian AISI to fulfill its function, bypassing this problem through a special provisions agreement could be a crucial factor in ensuring its effectiveness.

Finally, strong strategic partnerships throughout government would allow improved alignment and coordination on related AI policy. Having an AISI that is engaged across various regulatory mechanisms and sectors would bring greater cohesion to the policy and regulatory landscape. 


III) Enforcement powers

Granting greater authority to the Canadian AISI could also mean following the example of the EU AI Office and giving the institute certain enforcement powers, such as pre-market approval powers or the ability to compel companies to provide access to their AI models when they refuse to collaborate, a suggestion that has been posed to the UK AISI. Pre-market approval powers could allow the Canadian AISI to block the release of any AI systems tested during the pre-deployment phase that do not meet current safety standards or that pose excessive risks. The ability to compel access to AI models could fulfill a similar function in ensuring ongoing safety after deployment. Again, implementing this type of authority would likely require an iterative approach, as Canada does not currently have the legislation necessary to achieve it. Part of this iteration would likely involve new regulatory mechanisms, such as a national AI registry that allows government to better monitor large AI models and their known capabilities.

Building a strong foundation for AI safety in Canada

Ultimately, Canada can learn a lot from the current AI safety landscape when finalizing the design of the proposed new Canadian AISI. It is in an advantageous position, given that it can build on the successes of and criticisms faced by its counterparts to start strong on the delivery of its mandate.

To help ensure the Canadian AISI can deliver on its mandate, Canada may want to incentivize participation in its model evaluation or testing programs, ensure the institute is well integrated across government departments through strategic partnerships, and explore granting enforcement powers through legislation such as AIDA to underpin any regulatory functions the AISI may be expected to undertake. These suggestions could help guide early strategic decisions on the structure, scope, and authority of Canada’s AISI.

About the author

Sarah Rosa is a summer research assistant at the Schwartz Reisman Institute for Technology and Society. She is a second-year law student at the University of Toronto Faculty of Law. She has previously worked as a compliance officer for a network of higher education institutions and the Investor Protection Clinic. Her interests include AI governance and regulation, as well as privacy and data protection. 
