What’s Next After AIDA?
Canada’s first attempt at comprehensive AI regulation, Bill C-27 – which introduced the Artificial Intelligence and Data Act (AIDA) – was halted in January 2025. However, this doesn’t mean AI governance is stalled. Provinces like Ontario are advancing their own AI regulations, such as Bill 194, and federal Treasury Board instruments (along with sector-specific bodies) continue to shape the landscape.
Introduction
Canada’s first attempt at comprehensive artificial intelligence (AI) regulation halted on January 6, 2025, when Prime Minister Justin Trudeau’s resignation and the prorogation of Parliament caused Bill C-27 to die on the order paper. The bill, which introduced the proposed Artificial Intelligence and Data Act (AIDA), had been making its way through Parliament since June 2022, facing significant criticism even after the government attempted to assuage concerns by releasing proposed amendments in late 2023.
Canada has taken a proactive approach to the safe governance of AI, balancing innovation with ethical considerations and risk mitigation. It was the first country to implement a national AI strategy, in 2017; among the earliest to adopt AI-specific rules for government use, with its 2019 Directive on Automated Decision-Making; a founding member of the Global Partnership on AI (GPAI) in 2020; and, most recently, it launched its AI Safety Institute (CAISI) in November 2024. AIDA was just one of many initiatives the country has undertaken over the past several years; however, it was among the most significant of its efforts to ensure the safe and responsible use of AI within its borders.
As such, in the wake of AIDA’s death and with a federal election on the horizon, a key question has emerged: what’s next for Canada after AIDA?
Looking beyond federal legislation
Although AIDA’s failure and the pending federal election have introduced uncertainties about Canadian AI regulation, it is important to note that federal legislation is only one piece (albeit an important one) of the AI governance landscape. AI governance has not ground to an abrupt halt with the death of AIDA. In fact, given that Canada is highly unlikely to pass successful federal AI regulation for the next few years, it will be imperative to turn to other regulatory efforts, at least in the interim, in order to sustain a safe and competitive AI industry.
Canada already has several regulatory tools shaping AI governance at the federal, provincial and sectoral levels. These and many other existing measures will be key to maintaining the safe development and use of AI in the coming years.
Treasury Board policy instruments
One avenue for Canadian AI regulation, as we await answers on when (and if) new federal legislation will be introduced, is Treasury Board policy instruments. These are official rules or guidelines issued by the Treasury Board of Canada to direct how federal ministries, departments, and agencies, such as the Canada Revenue Agency, Health Canada, the Federal Courts, and Elections Canada, as well as other bodies responsible for critical public services, should operate.
Treasury Board policy instruments can take multiple forms, such as policies, directives, standards, or guidelines. While these instruments are not strictly hard law, federal government departments are expected to follow them and can face various consequences if they do not, including budget restrictions, external audits, increased oversight by the Treasury Board, and other corrective actions. These tools have faced criticism for not offering sufficiently robust oversight of government. While such criticisms are fair, Treasury Board policy instruments do provide a mechanism for controlling government action in the absence of legislation.
The Directive on Automated Decision-Making is perhaps the most well-known treasury board policy instrument related to AI. It governs the use of automated decision systems by federal institutions, aiming to minimize risks and ensure transparency. Additional policy instruments in this suite include the Algorithmic Impact Assessment Tool to determine the impact level of an automated decision system, and the List of Interested AI Suppliers—pre-qualified suppliers that the Government of Canada can use for responsible and effective AI services. These tools demonstrate how Treasury Board instruments influence key government functions—particularly procurement, shaping how and what governments purchase to support public services to Canadians.
Provincial regulation
Provincial regulation is a key piece of AI governance in Canada. Under the constitutional division of powers, provinces have a great deal of power over many of the areas where we might expect to see particularly large social impacts as a result of AI use, such as education, hospitals, and justice—areas where AIDA, being limited to international and interprovincial trade, would not have applied.
For example, Ontario’s Bill 194 regulates the public sector, including hospitals, educational institutions, law enforcement, and ministries of the government of Ontario (such as the Ministry of Children, Community and Social Services; the Ministry of Health; and the Ministry of the Attorney General). The bill imposes obligations on public sector entities that use AI, including publishing information about that use, developing and implementing accountability frameworks, and managing risk. In practical terms, this means Ontarians can expect heightened protections, such as greater transparency from police when they use AI to assist investigations.
Bill 194 is a significant step forward, and it illustrates one of the benefits of provincial regulation: it is often faster than federal regulation. The bill was introduced in May 2024 and received Royal Assent in November 2024, just six months later.
While other provinces have not passed AI-specific regulation, there is clear movement towards improving the safety of these technologies, particularly with regard to their use by government entities. British Columbia, for example, has published draft principles for the responsible use of AI, while Quebec’s Ministère de la Cybersécurité et du Numérique (MCN) has adopted a statement of principles for the responsible use of AI by public bodies under section 21 of G-1.03.
Putting similar regulation in place across Canada’s provinces and territories would be an ideal step towards ensuring that all Canadians enjoy a similar level of protection against the potential risks posed by the use of AI in the public sector.
Regulatory bodies
At a more sector-specific level, existing regulatory bodies in Canada have made useful strides in governing the use of AI within their respective jurisdictions. For example, Canadian law societies—including those of Alberta, British Columbia and Ontario—have issued guidance for lawyers using generative AI in their practice. Ontario’s guidance took the form of a white paper outlining the Law Society of Ontario’s (LSO) expectations for how licensees can responsibly implement these technologies. Violating LSO guidelines can result in disciplinary actions, such as warnings, fines, temporary suspension of a lawyer’s license, or even disbarment.
Meanwhile, the Office of the Superintendent of Financial Institutions (OSFI) has released Draft Guideline E-23 (Model Risk Management), a principles-based guideline that sets out its expectations for model risk management by federally regulated financial institutions. The guideline has been updated through an ongoing public consultation process to address financial institutions’ increasing use of AI to support their decision-making. In practical terms, this includes things like your bank deciding whether to approve a loan request, or whether to flag certain account activity as fraudulent.
Regulatory bodies have the power to develop new governance measures in line with the scope of authority they have been granted by their enabling legislation or mandate. In the absence of federal AI legislation, existing regulatory bodies should take inspiration from OSFI, provincial law societies, and other regulators putting measures in place to govern the use of these technologies within their scope of authority. Ministers responsible for regulatory bodies can simplify this process by issuing directives to clarify each body’s role in overseeing AI.
Conclusion
Federal legislation will likely remain an important piece of the Canadian AI governance landscape. Federal efforts are particularly valuable for centralization and for setting clear requirements across the country. Further, although many commentators point out that existing laws can address some of the harms caused by AI (for example, turning to the Canadian Charter of Rights and Freedoms or the Canadian Human Rights Act to address algorithmic discrimination), AI-specific laws such as AIDA and Bill 194 have the advantage of addressing risks at an earlier stage of the AI pipeline by setting rules and benchmarks for how products and systems are developed.
However, there is no guarantee that Canada will choose to pursue federal legislation again, particularly in an international landscape where key players such as the US, UK, and China are opting for different AI governance strategies, nor is there a timeline for when such legislation would be put in place. Thankfully, federal legislation is far from the only tool in our AI toolbelt. As outlined above, federal Treasury Board policy instruments, provincial regulation, and existing regulatory bodies only scratch the surface of the mechanisms available to Canada as it grapples with creating safe and effective AI governance.
So, what’s next after AIDA? Done right, what Canada can achieve next is a robust ecosystem of interconnected AI governance efforts that will increasingly pave the way for both safety and innovation when it comes to these transformational technologies—even in the absence of federal legislation.