SRI and Vector Institute consult on Ontario’s Trustworthy Artificial Intelligence Framework

 
SRI Director Gillian Hadfield and Vector Institute President and CEO Garth Gibson responded to the Ontario government’s proposed Trustworthy Artificial Intelligence (AI) Framework by articulating how to achieve fair and trustworthy AI while supporting robust investment in innovative AI technologies.


The use of artificial intelligence (AI) in both the private and public sectors is rapidly increasing, but the creation of rules to govern AI is still in its early stages across the globe; in fact, very few jurisdictions have yet implemented concrete laws or regulations governing AI. Such rules are important to ensure that AI is safe and trustworthy, fosters economic growth, and improves our lives.

The reason for these regulatory gaps is that AI technologies present new and fundamental challenges to regulators. They are complex and change very quickly. Conventional regulatory approaches—convening experts to identify risks, drafting legislation and regulations to set out legal requirements, relying on government regulators and courts to respond to problems and violations—struggle to keep up. That is a problem both for citizens, who may be exposed to unwarranted risks, and for businesses, which face an uncertain and unstable regulatory environment.

Here in Ontario, the provincial government is setting out to be one of the early actors in creating a framework to ensure that AI used by the public sector is accountable, safe, and rights-based. Ontario’s Trustworthy Artificial Intelligence Framework recently began with an initial consultation phase, soliciting ideas to improve the public’s trust in AI.

As part of the consultation phase, Gillian Hadfield, director of the Schwartz Reisman Institute for Technology and Society (SRI), and Garth Gibson, president and CEO of the Vector Institute, submitted a response to the Ontario Digital Service—the arm of the provincial government spearheading the new framework. The joint SRI and Vector submission advises on how to build a digital economy powered by trustworthy AI, and how to achieve fairness in the use of AI while supporting robust investment in AI technologies.

READ THE SRI + VECTOR RESPONSE TO ONTARIO’S TRUSTWORTHY AI FRAMEWORK (PDF)

SRI Director Gillian Hadfield and Vector Institute President and CEO Garth Gibson.

Key among Hadfield’s and Gibson’s assertions is that “the technical complexity, breadth of applications, and rapid change in techniques related to AI systems makes its governance unlike that of traditional information and software systems.”

Essentially, conventional regulatory frameworks may be ineffectual—or even counterproductive—when applied to an unprecedented and rapidly evolving technology like AI.

So, how should we tackle this complex 21st-century problem?

Among other things, Hadfield and Gibson advocate for:

  • A culture of learning and continuous improvement: Because of the dynamic nature of AI and the nascent state of AI governance, a flexible, iterative, and perpetually stress-tested approach is necessary to ensure that governance keeps up with rapid and perhaps unforeseen changes in AI.

  • Harnessing private sector innovation: Traditional public sector governance initiatives alone already struggle to keep up with the scope and rapid evolution of AI around the world. Establishing partnerships between the public and private sectors to foster technology investments in safe AI techniques and to build flexible and responsive certification regimes is key to achieving the goal of trustworthy AI.

Learn more about regulatory technology in Gillian Hadfield’s Rules for a Flat World. Available at Indigo and Audible.

Hadfield and Gibson note that Ontario has a unique opportunity to “set the pace globally in developing the legal and economic environment needed to grow a sector that creates regulatory technologies and tools to improve the safety and security of AI systems that verify regulatory compliance.”

One possible direction for policy is Hadfield’s work on “regulatory markets,” which proposes a novel three-party regulation model in which governments license and oversee an industry of private regulators that monitor, and keep in check, companies developing and using AI. SRI is also partnering with Toronto’s Creative Destruction Lab to help build a pipeline of “AI complements”—regulatory technologies that support trustworthy and compliant AI development (e.g., automated removal of personal information from training data, or automated validation of fair use of machine learning).
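To make these “AI complements” concrete, here is a minimal sketch, in Python, of what automated removal of personal information from training data could look like. The patterns, names, and placeholder scheme are illustrative assumptions, not a description of any tool in the SRI and Vector submission:

    import re

    # Hypothetical illustration: scrub common personal identifiers from
    # free-text training records before they reach a model. A production
    # regulatory technology would be far more thorough (named-entity
    # recognition, audit trails, certified coverage), but the shape is similar.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
        "SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian SIN format
    }

    def scrub_record(text: str) -> str:
        """Replace each detected identifier with a typed placeholder."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(scrub_record("Reach Jane at jane.doe@example.com or 416-555-0199."))
    # -> Reach Jane at [EMAIL] or [PHONE].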

Hadfield and Gibson’s recommendations to the Ontario government fall into four categories:

1.  Begin with pilots to identify what to count as “AI.”

The definition of what counts as AI is challenging. Governments already use algorithms across many domains, most of them simple and familiar approaches that do not raise the challenges of modern machine learning. For this reason, Hadfield and Gibson point out, the city of New York had difficulty settling on a definition of “automated decision systems” when drafting recent legislation to regulate automated decision-making. Instead of attempting a comprehensive definition, one suggestion is to focus initially on pilots in high-stakes government decision-making areas such as policing, health, and criminal justice, and to develop criteria for identifying the elements that warrant regulatory oversight.

2.  Emphasize auditing as a regulatory tool.

Existing regulatory approaches—to privacy law, for example—emphasize up-front impact assessment tools. But AI is much harder to evaluate reliably and comprehensively up front because of its novelty, complexity, and inherent propensity to adapt and change. “It’s important to focus resources on regular and active auditing of deployed systems, rather than on conducting exhaustive, one-time up-front reviews,” write Hadfield and Gibson. Not only will regular auditing ensure the ongoing oversight of AI—a technology that evolves at lightning speed—it will also be a crucial component in developing the right metrics and principles to guide AI policy well into the future.
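To make the contrast concrete, here is a minimal sketch of a recurring post-deployment audit cycle, as opposed to a one-time up-front review. The threshold, data shapes, and function names are hypothetical assumptions for illustration, not part of the SRI and Vector submission:

    ACCURACY_FLOOR = 0.90  # hypothetical threshold a regulator might set

    def audit_deployed_system(decisions):
        """One audit cycle over live decisions from a deployed system.

        `decisions` is a list of (predicted, actual) pairs collected since
        the last audit; the check recurs on live behaviour instead of
        running once at approval time.
        """
        accuracy = sum(p == a for p, a in decisions) / len(decisions)
        status = "FLAG" if accuracy < ACCURACY_FLOOR else "OK"
        return f"{status}: live accuracy {accuracy:.0%}"

    # A system can pass at deployment and drift below the floor months
    # later; that is exactly the failure a one-time review cannot catch.
    print(audit_deployed_system([(1, 1), (0, 0), (1, 1), (0, 0)]))  # OK: 100%
    print(audit_deployed_system([(1, 0), (0, 0), (1, 1), (0, 1)]))  # FLAG: 50%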

3.  Audit for multiple metrics.

“What counts as ‘fair’ machine learning is contested and complex,” write Hadfield and Gibson. We cannot simply evaluate “fairness” from a select few perspectives, definitions, or outcomes. We need to establish a broad spectrum of metrics and indicators, including both qualitative and quantitative ones. 
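As a toy quantitative illustration, the sketch below computes two common fairness metrics for a hypothetical binary classifier across two demographic groups. The data and function names are invented for this example:

    def demographic_parity_gap(preds_a, preds_b):
        """Difference in positive-prediction rates between two groups."""
        def rate(preds):
            return sum(preds) / len(preds)
        return abs(rate(preds_a) - rate(preds_b))

    def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
        """Difference in true-positive rates between two groups."""
        def tpr(preds, labels):
            hits = [p for p, y in zip(preds, labels) if y == 1]
            return sum(hits) / len(hits)
        return abs(tpr(preds_a, labels_a) - tpr(preds_b, labels_b))

    # Invented audit data for two groups, A and B.
    preds_a, labels_a = [1, 1, 0, 0], [1, 0, 1, 0]
    preds_b, labels_b = [1, 1, 0, 0], [1, 1, 0, 0]
    print(demographic_parity_gap(preds_a, preds_b))                     # 0.0
    print(equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b))  # 0.5

Auditing only the first metric would suggest the system is fair; the second reveals a substantial disparity. This is why a broad spectrum of indicators, qualitative as well as quantitative, matters.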

4.  Focus on building an agile, evidence-based risk framework.

The EU’s recently proposed AI governance legislation attempts to create a definitive list of “high risk” AI systems, but Hadfield and Gibson see this as a “significant shortcoming” because definitions of what is “high-risk” AI can vary over time and context. “We believe that risk assessment for AI systems needs to be grounded in real-world experiences,” write Hadfield and Gibson, “and needs to adapt rapidly as the field advances and AI is applied in new areas.”

Hadfield and Gibson say that Ontario’s Trustworthy AI Framework, while still in its early stages, has a tremendous opportunity to show global leadership in the realm of AI governance—and to spark critical private sector innovation in this area.
