AI regulation in Canada is moving forward. Here’s what needs to come next.

 

Canada took an important step towards effectively regulating artificial intelligence when Parliament completed its second reading of the Artificial Intelligence and Data Act (AIDA) in April 2023. In a new op-ed published in The Hill Times, Gillian Hadfield, Maggie Arai, and Isaac Gazendam note there is still much to do to ensure that this agile regulatory framework is put to effective use.


Canada took an important step towards effectively regulating artificial intelligence (AI) when Parliament completed its second reading of the Artificial Intelligence and Data Act (AIDA) on April 24, 2023. But there is still much to do to ensure the agile regulatory framework offered by AIDA is used effectively while not stifling industry’s use of this transformative technology.

Regulatory mechanisms do not move quickly. Advanced technologies do. AIDA must be dynamic and capable of responding to rapid advancements in technology. That means leaving specifics to regulation, as opposed to including them in legislation. It will also mean setting—and sticking to—ambitious timelines for implementing regulations.

What’s the difference between legislation and regulation?

While legislation provides a legal framework for what can or cannot be done, it’s up to regulation to provide the details of how, specifically, to carry out or avoid those prescribed activities. The broad parameters of legislation are set by politicians who are directly accountable to the electorate. Regulation, on the other hand, is developed and enacted through a less political process by expert agencies granted the power to do so by legislation. As such, regulations can be much more responsive and agile than legislation.

AIDA is very general legislation. It states the domains to which it applies and creates a foundation for the directions the government wants to take. It then promises to fill in the details—to lay bricks on the foundation, so to speak—with regulations. For example, AIDA states that “high-impact” AI systems must undergo certain in-depth assessments. However, it doesn’t define “high-impact,” nor does it say what those assessments should be.

AIDA is very general. That’s good for innovation.

The decision to leave specifics to regulation is a sound strategy. Setting out specifics in regulation, rather than solidifying them in legislation, keeps the law agile.

However, any benefits from unspecific legislation will be largely undone if supporting regulation is developed on the two-year timeline currently anticipated by the government. Not only does this leave organizations using AI uncertain about whether their activities are permissible, it suggests that future updates to these regulations will also move slowly, fuelling the current skepticism about AIDA.

 

Gillian Hadfield is a professor of law and strategic management and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, where Maggie Arai and Isaac Gazendam are both policy researchers.

 

How can Canada ensure our regulatory efforts truly remain agile?

Part of the anticipated two-year timeline allows for consultations, feedback, and revisions. While this is important, Canada can put itself leagues ahead by simply selecting a few low-barrier initiatives to push through quickly, offering increased certainty and stability to industry and investors. One potential course of action is creating so-called “safe harbours.”

What are “safe harbours”?

Safe harbours allow organizations to continue using and innovating with AI without facing uncertainty about legal repercussions. They set out specific guidelines for acceptable AI use and make it clear that, for the time being, organizations following these guidelines or meeting certain minimum standards will not be held responsible.

For example, an organization could deploy an algorithmic decision-making system and be protected from liability if a certified auditor examines the system and declares it fair. Early regulations could create the apparatus for certifying auditors and implementing safe harbours, with the added benefit of driving the development of the AI audit space.

Stakeholders worry that the bare-bones formulation of AIDA cannot solve the uncertainty currently being faced by industry and investors. However, the combination of AIDA and safe harbours would help mitigate this uncertainty. Indeed, it would do so in a way that holding off on AI legislation (as many are pushing for) could never accomplish.

Consider algorithmic discrimination. Without a safe harbour, a company developing an AI model that, for example, decides which applicants receive loans faces legal uncertainty as to what is expected of it in order to mitigate potential algorithmic bias. If Canada were to hold off on AIDA, or even follow its current two-year regulatory trajectory, the company’s options are to refrain from using the model or to move forward and risk repercussions. However, with safe harbours in place and the promise of regulation to follow, the company could, for example, place its product on the market after undergoing an impact assessment and performance test by certified organizations.

Not everything need be solved by inventing something new, either. The government could leverage existing regimes to address many concerns about AI right now. Consider the recent white paper published by the Schwartz Reisman Institute for Technology and Society examining how existing consumer protection laws can be harnessed to control the use of AI in the financial services sector.

With the framework offered by AIDA and regulatory safe harbours, broad uncertainty for industry could be effectively mitigated, allowing for safe, trustworthy AI to flourish. Rather than stifling innovation, effective AI regulation has the ability to unlock even greater potential while ensuring that citizens remain protected from the risks posed by rapidly advancing technologies.

This op-ed originally appeared in The Hill Times.
