Successful use of AI in government means doubling down on human and democratic values


In a recent op-ed in BetaKit, SRI Associate Director Peter Loewen argues that to unlock the benefits of artificial intelligence (AI) for the public sector, governments must double down on the importance of human and democratic values. Image: Pixabay.


“What is the future of artificial intelligence in government, and what role can it play in the success of democracies in the 21st century?” asks SRI Associate Director Peter Loewen in a recent op-ed published in BetaKit.

AI technologies can play an integral role in supporting the public sector, argues Loewen, but unlocking their benefits requires insights from a source that is “rarely considered when it comes to technological innovation.”

The answer? Doubling down on the importance of human and democratic values.

New tools require new frameworks

Amidst failures in rolling out simple government apps and eroding public trust in Big Tech, recent years have seen growing skepticism that technology will deliver a better world. New systems often add to administrative complexity rather than reducing it, while the recommender algorithms that underlie digital platforms sometimes generate more conflict than cooperation, and can even amplify misleading information.

However, as Loewen points out, there are massive potential benefits to public institutions properly harnessing AI technologies, including improvements in responsiveness, consistency, and opportunities for learning from data. A professor of political science at the University of Toronto, and director of the Munk School of Global Affairs & Public Policy and PEARL Research Lab, Loewen explores in his research the political implications and challenges of integrating new technologies into governance and policy frameworks.

The challenge for governments, Loewen writes, is that these technologies are developing faster than the legislation that governs their use.

“To use AI successfully, we need to think differently about how we craft and implement policy,” writes Loewen. “We need innovative regulatory approaches that match the speed and complexity of the task at hand.”

Peter Loewen

SRI Associate Director Peter Loewen argues public institutions must double down on human and democratic values to successfully implement AI. Photo: Alexis McDonald.

Three key insights to unlock the value of AI

Loewen contends that public institutions have special obligations when it comes to implementing AI tools, and that citizen consent currently presents several core challenges.

“Citizens do not support a single set of justifications for the use of algorithms, and in fact, have a strong bias toward the status quo,” notes Loewen.

“Citizens also judge algorithms more harshly than human decision-makers, and opposition to AI is stronger among those who fear the broader economic effects of technology. In other words, the successful use of AI is tied up in broader debates about what the future of technology and society will be.”

To unlock the benefits of AI, Loewen proposes three essential insights. 

First, Loewen observes the “distributional fact” of technology: the use of AI is likely to be spread out across many tasks and jobs, rather than focused on a few. “Nearly all of us could replace some of the things we do with automation, but important parts would inevitably remain,” proposes Loewen. As a result, automation has the potential to allow us to focus more closely on the larger purposes of our work.

Loewen’s second insight concerns the “values premium”: while AI technologies excel at prediction, they often fail at understanding how humans will interpret the outcomes of a decision. This is significant because the values that underwrite an institution’s decisions are as meaningful as the decisions themselves—a factor that is more important in governments than in the private sector.

“The main protectors of our values and principles will not be machines, but those who put decisions into action,” writes Loewen. “It is here where there will be a premium on values like trust, transparency, and decency.”

Finally, Loewen argues that democracies have an advantage over autocratic systems of government when it comes to implementing AI. While autocracies stifle feedback in seeking control of their citizens, democracies invite engagement and self-criticism. In this context, the inherent shortcomings of AI—including opportunities for bias and challenges around value alignment—will be amplified by the blind spots of autocracies, whereas democracies are more likely to be self-correcting.

“This feature is what will give democracies the advantage as we work out the best ways to employ AI for social good,” writes Loewen. “It is also the right reason for us to advocate for maximum transparency, explainability, and justifiability in the public use of AI—precisely so it can be more easily critiqued and corrected.”

“Government is too often an impersonal organization… The important job of public servants, in this context, is to put a great premium on the values of trust, transparency, and decency… to ensure that AI is enhancing the human element of public service, rather than draining it from the system.” — Peter Loewen

The democratic advantage

Loewen concludes by proposing that the public service is more culturally ready for the adoption of AI than any other organization, because the work of public servants reflects the same processes as the prediction technologies that underlie AI systems.

“The human cannot see all the deliberations that led to the decision, but they can know the process and the values that guided it, and they have an obligation to defend and explain not only the decision, but how it was arrived at. All these elements map onto a well-designed system of human-assisted AI,” writes Loewen.

What is needed next is to develop the necessary systems and norms—in regulation, ethics, and institutional design—to incorporate AI-driven automation in a way that builds trust with citizens through transparency, accountability, and engagement. Doing so will require new forms of education and engagement that ensure everyone is considered, and that the systems we build benefit all.

In other words, as we continue to use new technologies to solve complex challenges, we need to remember who we are and what our values are.
