Uncovering gaps in Canada’s Voluntary Code of Conduct for generative AI

 

The revised version of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems was released after a summer of significant developments concerning generative AI. SRI Policy Researchers David Baldridge and Jamie Amarat Sandhu comment on the Code’s characteristics and shortcomings.


On September 27, 2023, François-Philippe Champagne, Canada’s Minister of Innovation, Science and Industry, unveiled Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The announcement coincided with the launch of the House of Commons Standing Committee on Industry, Science, and Technology’s review of Bill C-27, which includes the proposed Artificial Intelligence and Data Act (AIDA). This timing signals the government’s aim to respond comprehensively to AI, yet its response to generative AI falls short in several areas.

Initially developed in consultation with stakeholders as a draft document, the revised version of the Code announced by the Minister came swiftly after a summer of significant developments concerning generative AI. The Code’s voluntary measures are organized under six core principles (Accountability, Safety, Fairness and Equity, Transparency, Human Oversight and Monitoring, and Validity and Robustness) for signatory firms developing or managing generative AI systems with general-purpose capabilities. The Code also imposes additional measures on systems that are made widely available, in order to mitigate potential misuse. Critics have pointed out that, in many respects, the Code bears a striking resemblance to the voluntary commitments by major AI companies announced by the Biden Administration in July 2023. Canada’s six principles closely mirror the three core principles of the US commitments; indeed, two of those principles, safety and security, appear in both documents. What sets the Code apart, however, is that it was developed in consultation with a wide range of stakeholders from diverse sectors, whereas the US primarily engaged leading AI companies within its borders in drafting its commitments.

Although not significantly revised from its initial draft, the Code of Conduct seeks to establish a more robust and proactive approach across various dimensions of generative AI. The list below summarizes, for each of the Code’s core principles, which measures were added, removed, and preserved between the initial draft and the version officially released by the Minister. The most significant change is that the measures no longer include specific requirements for meaningful explainability and human oversight. They also introduce new obligations for signatories, such as implementing comprehensive risk management frameworks, sharing best practices and information, and benchmarking against recognized standards.

Accountability

Measures added:

  • Implement a comprehensive risk management framework proportionate to the nature and risk profile of activities

  • Share information and best practices on risk management with firms playing complementary roles in the ecosystem

Measures removed:

  • Develop policies, procedures, and training to ensure that roles and responsibilities are clearly defined

Measures preserved:

  • Ensure that multiple lines of defense are in place

Safety

Measures added:

  • Perform a comprehensive assessment of reasonably foreseeable potential adverse impacts

  • Implement proportionate measures to mitigate risks of harm

  • Make guidance available to downstream developers and managers on appropriate system usage

Measures removed:

  • Identify the ways that the system may attract malicious use

  • Identify the ways that the system may attract harmful or inappropriate use

Fairness and Equity

Measures added:

  • Implement diverse testing methods and measures to assess and mitigate risk of biased output prior to release [NB: the only addition is the phrase “diverse testing methods.”]

Measures removed:

  • Implement measures to assess and mitigate risks of biased output

Measures preserved:

  • Assess and curate datasets used for training

Transparency

Measures added:

  • Publish information on the capabilities and limitations of the system

  • Publish a description of the types of training data used to develop the system

Measures removed:

  • Provide a meaningful explanation of the process used to develop the system

Measures preserved:

  • Develop and implement a reliable and freely available method to detect content generated by the system

  • Ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems

Human Oversight and Monitoring

Measures added:

  • Monitor the operation of the system for harmful uses or impacts after it is made available, including through the use of third-party feedback channels

  • Maintain a database of reported incidents after deployment, and provide updates as needed to ensure effective mitigation measures

Measures removed:

  • Provide human oversight in the deployment and operations of their system

  • Implement mechanisms to allow adverse impacts to be identified and reported

Validity and Robustness

Measures added:

  • Employ adversarial testing (i.e. “red-teaming”) to identify vulnerabilities

  • Perform an assessment of cybersecurity risks and implement proportionate measures to mitigate risks

  • Perform benchmarking to measure the model’s performance against recognized standards

Measures removed:

  • Employ appropriate cybersecurity measures to prevent or identify adversarial attacks

Measures preserved:

  • Use a wide variety of testing methods across a spectrum of tasks and contexts prior to deployment

“The most significant change to the Voluntary Code is that the measures no longer include specific requirements for meaningful explainability and human oversight.”

While updated from the originally circulated draft, the current version leaves previously identified gaps unaddressed and creates new ones. Many stakeholders in the AI community have ongoing concerns about the Code’s treatment of standards, intellectual property, procurement, and data governance, as well as its overall narrow scope. Notably, the Code points to no specific industry or government standards for data quality, leaving the measures open to varied interpretations among stakeholders. Likewise, the absence of a framework for regular reporting of datasets to government does little to enhance accountability and public oversight. By addressing data quality through standardized expectations, policymakers would set clearer benchmarks for reliable, accurate, and representative data. This, in turn, would meaningfully strengthen the core principles the Code outlines.

The same can be said of copyright. The Code overlooks the need to acknowledge copyrighted content used in AI training, which could give rise to intellectual property disputes, as has already happened in other jurisdictions. The transparency section, while ensuring users are informed when they are interacting with an AI system, misses an opportunity by not giving users options to opt out once these systems are deployed. Furthermore, in the section on human oversight and monitoring, the Code should clarify the mechanisms for identifying and reporting adverse impacts, including defined recourse and redress procedures. It also fails to promote a collaborative approach, such as obligating larger corporations to support small and medium-sized enterprises, which would foster a fairer playing field and better adherence to the measures.

“The transparency section of the Voluntary Code, while ensuring users are informed when they are interacting with an AI system, misses an opportunity by not giving users options to opt out once these systems are deployed.”

These incomplete measures amplify concerns about responsibility for testing, auditing, and risk management, which appears to rest mainly with developers and operators. Left unclarified, this arrangement raises questions about the adequacy of testing and assessment methodologies, and about the potential for ethical conflicts and regulatory capture. Lastly, and in many ways most importantly, the Code lacks provisions for user education and training, impeding users’ ability to identify and report issues and contributing to knowledge gaps in our increasingly AI-driven economy.

While this Voluntary Code for Generative AI has significant shortcomings as currently drafted, it's important to recognize that the Canadian government plans to introduce further changes to AI regulations in the coming months. AIDA, the government’s attempt to comprehensively regulate AI, is anticipated to become law in early 2024, and its provisions will also apply to generative AI. As we navigate this ever-evolving landscape of AI governance in Canada, the key lies in bridging the gaps in our regulations to ensure powerful technologies like generative AI make the world better—for everyone.



About the authors

David Baldridge is a policy researcher at the Schwartz Reisman Institute for Technology and Society. A recent graduate of the JD program at the University of Toronto’s Faculty of Law, he has previously worked for the Canadian Civil Liberties Association and the David Asper Centre for Constitutional Rights. His interests include the constitutional dimensions of surveillance and AI regulation, as well as the political economy of privacy and information governance.

Jamie Amarat Sandhu is a policy researcher at the Schwartz Reisman Institute for Technology and Society. He specializes in the governance of emerging technologies and global affairs, with a track record of providing strategic guidance to decision-makers on cross-sector socio-economic challenges arising from advances in science and technology at both the international and domestic levels. He holds an MSc in Politics and Technology from the Technical University of Munich’s School of Social Science and Technology and a BA in International Relations from the University of British Columbia.

