SRI partners with the Canada School of Public Service to train public servants on AI

 

How can artificial intelligence improve public services and help create a more sustainable future? Can governments implement AI in ways that ensure fairness and transparency? To explore these questions, the Schwartz Reisman Institute has partnered with the Canada School of Public Service to present an eight-part series designed to explain what AI is, where it’s headed, and what public servants need to know about it.


Artificial intelligence (AI) is constantly evolving and playing an ever-larger role in our lives, transforming every sector of human activity, from healthcare and finance to manufacturing, law, and beyond. We often read about both the great promise of AI and its negative consequences, such as algorithmic bias or risks associated with data production and governance. But what is AI, really, and how can it help lead us to a more sustainable future? What can be done to ensure that AI is built for public benefit, and how can we mitigate the harms it might cause?

To explore these questions, the Schwartz Reisman Institute for Technology and Society (SRI) has partnered with the Canada School of Public Service to present an eight-part event series entitled “Artificial Intelligence is Here,” a unique online course designed to train Canada’s public servants on the risks and opportunities of AI, key concepts and terminology, and how AI might transform government. Each session features a mix of pre-recorded lectures and moderated live panel discussions led by scholars and industry leaders, designed to explain what AI is, where it’s headed, and what public servants need to know about it.

The series is developed by SRI Director Gillian Hadfield and Associate Director Peter Loewen, and features lectures by Hadfield and Loewen, as well as SRI Faculty Affiliate Avi Goldfarb, Policy Lead Phil Dawson, and Janice Stein, founding director of the Munk School of Global Affairs and Public Policy at the University of Toronto. Live panel participants also include SRI Research Lead Wendy H. Wong, Cary Coglianese (University of Pennsylvania), Daniel E. Ho (Stanford University), and Alex Scott (Borealis AI).

The series launched in November 2021 and runs through May 2022, with sessions delivered virtually in both English and French. The lectures are designed for and delivered exclusively to Canadian federal public servants at all levels. More than 1,000 public servants have registered for the course’s events to date.

The need for new regulatory approaches

Among the central objectives of “Artificial Intelligence is Here” is not only to familiarize public servants with the risks and opportunities associated with implementing AI technologies, but to develop greater understanding around the need for new regulatory approaches to govern these tools.

“AI and machine learning are new technologies that are not like anything we’ve seen before,” observes Hadfield in the series’ introductory session, “What is AI?”, which explains key concepts and terms. “The forms of AI that are transforming everything right now are systems that write their own rules. It is not easy to see or understand why an AI system is doing what it is doing, and it is much more challenging to hold humans responsible... That’s why figuring out how to regulate its uses in government, industry, and civil society is such an important challenge.”

Despite the widespread adoption of machine learning techniques by industry, AI regulation is still mostly in its infancy, with ambitious goals that often rest on under-developed principles. Current examples of legislation include Canada’s Directive on Automated Decision-Making, introduced in 2019, and the European Union’s Artificial Intelligence Act—perhaps the most comprehensive regulation to date—which was proposed in April 2021 and is still under review.

As Hadfield observes, increased regulation is essential for overcoming the negative consequences of AI, such as failures around fairness and transparency. Since its impacts are felt throughout society, the development of AI systems cannot be left to computer scientists alone—policymakers must engage with and understand AI at a foundational level. As Hadfield contends, this is especially important with regard to deciding when it is appropriate to build and deploy AI systems.

Hadfield notes that many current AI initiatives are driven by commercial incentives for targeted advertising, and incentives must be developed for the public sector to unlock AI’s true benefit in a balanced way. “If AI is going to help us solve real human problems, we need more AI built to the specs of the public sector,” proposes Hadfield. “We’ll need to get creative to make sure the AI we get is the AI we need.”

 

SRI Director Gillian Hadfield and Associate Director Peter Loewen co-developed the “Artificial Intelligence is Here” series for the Canada School of Public Service.

 

The centrality of consent and judgement

Another major challenge to the use of AI in government is public acceptance. As Loewen observes in the series’ second lecture, “Citizen consent and the use of AI in government,” four key obstacles currently challenge the implementation of automated decision-making systems in public services. First, as Loewen’s research demonstrates, citizens do not support a single set of justifications for the use of algorithms in government. Second, a status quo bias causes citizens to hold a skeptical view of innovation. Third, humans judge the outcomes of algorithmic decisions more harshly than decisions made by other humans. And fourth, apprehension towards the broader effects of automation—especially concerning issues of job security and economic prosperity—can generate increased opposition to AI. As Loewen contends, citizen consent is fundamental to effective government, meaning these obstacles must be factored in for AI to be implemented in ways that meet with public approval.

In the series’ third session, “Deciding when and how to use AI in government,” Loewen delves into concerns around automation replacing human labour, demonstrating a wide range of cases in which AI would not only help governments better serve the public, but do so without replacing human workers. In some contexts, the application of automated systems could help governments expedite decisions that are delayed due to capacity issues, enabling organizations to serve more people with greater speed and consistency. In other contexts, the use of AI could enhance the work of public servants by distinguishing cases that can be decided straightforwardly from those requiring more nuanced consideration, allowing greater focus on the cases that need the most attention. As Loewen proposes, “Isn’t it a potentially better use of resources if we take those who would have previously interacted with every case, and redeploy them to situations which require more judgement, or maybe just more empathy?”

What are the challenges of implementing AI in government?

The complexities of AI technologies and the extensive roles and responsibilities of government mean that any successful combination of the two will need to overcome a wide range of challenges. Among these are issues of biased data inputs in machine learning models, concerns around data privacy and data governance, and questions regarding consent and procedural fairness, explainability and justification, and transparency. As Hadfield notes in her review of AI regulatory principles, current goals are ambitious, and translating them into practice will require attention to a vast array of details. Hadfield also observes that, given the pace and scale of AI advancement, the sector will require innovative new tools that can assess, monitor, and audit AI systems to ensure they are appropriately deployed, effective, fair, responsible, and subject to democratic oversight.

These challenges may seem immense, but the potential benefits at stake are immeasurable. As the 21st century progresses, societies have become more globally linked than at any previous point in history, while concerns around resource allocation, human rights, environmental degradation, and tensions that threaten the fabric of society are greater than ever before. Government decisions play an essential role in human welfare, and greater adoption of AI could enable public services to deliver decisions more efficiently, improve consistency with policy goals, and help ensure methods are procedurally fair and consistent with democratic norms. Perhaps most significantly, it offers an unprecedented opportunity to learn from patterns in data that are otherwise invisible to humans, and, in doing so, to develop policies with greater economic and social benefits for everyone.
