Moving away from AI ethics as “window-dressing” toward scientifically informed policies
In recent years, AI has become known as a fast-developing, relatively complex, expensive, and powerful tool with many applications. The combination of these qualities has created inevitable hype around AI’s potential benefits and pitfalls. The extreme case is often portrayed as a superintelligent agent, benevolent or malevolent, that can learn to learn: an agent with human-level intelligence but without the human constraints of energy and lifespan. In this imaginary scenario, the limits of human intelligence are suddenly lifted!
Narratives like this single AI out from other tools designed and created by humans, and they have, until recent years at least, diverted attention from what the appropriate uses of AI systems currently in use or under development should be. But as Joanna J. Bryson pointed out in her recent talk at the Schwartz Reisman Institute’s (SRI) weekly seminar, AI is “just another artifact,” an ordinary extension of human-made technologies that is now meshed into our society. Like other tools made by humans, AI therefore needs governance to ensure its use for “good.”
Now, what constitutes “good AI” or “AI for good”?
Existing frameworks, such as the G20’s AI Principles, offer answers to this question, but they can only serve as guidelines or performance indicators for making AI development human-centred, just, transparent, safe, and accountable. It is only through implementable policies that societies can, for our own betterment, demand these guidelines be met, something the SRI is actively working on in collaboration with the Rockefeller Foundation.
Bryson is professor of ethics and technology at the Hertie School of Governance, where her work helps develop scientifically informed policies for responsible AI governance, drawing on existing regulatory tools and on understandings of human society, ethics, and experimental and theoretical psychology. As a global voice in technology policy and digital governance, she recently shared some of her work as part of the SRI Seminar Series.
At the core of Bryson’s argument for better AI governance are three primary assertions:
1. AI is a tool. Much like any other tool, AI is authored: “we author AI, not give birth to it,” says Bryson. Designers choose, for example, how many limbs a robot has and how those limbs are programmed to operate. And with choice comes responsibility. To ensure that the human or company manufacturing an AI remains responsible for the harms it may cause, we might, for example, reject the idea of granting AI the status of legal personhood.
2. Replicating human behaviour often means replicating human biases. In most cases, AI learns by human example, and our overt behaviour as humans is not always our ideal. Ideals such as equality and accessibility are collective plans for improving human behaviour, as well as agreements that remain open to negotiation. As such, they are not always reflected in the overt human behaviour used to design and train AI. (This problem is closely related to ongoing discussions of “the alignment problem” in AI ethics.)
Strikingly, through design we can teach AI systems to promote our stereotypes: the very biases we do not wish to persist. And AI, always searching for regularities, patterns, and biases amid all the variability in the world, then replicates our own prejudices.
A famous example of AI replicating human prejudice is the AI agent that learned word associations from human-written text and reproduced gender-role biases. For instance, the agent judged family-associated words (e.g., home, children, family) to be closer to female names, and career-associated words (e.g., corporation, salary, office) to be closer to male names (Caliskan, Bryson, and Narayanan, 2017), as the sketch below illustrates.
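To make the mechanics concrete, here is a minimal sketch of how such an association can be measured. The tiny three-dimensional vectors below are hand-invented for illustration only; the actual study used pretrained GloVe embeddings with hundreds of dimensions learned from web-scale text, along with a more careful statistic (the Word Embedding Association Test).

```python
import numpy as np

# Hand-invented 3-dimensional "embeddings" for illustration only. The actual
# study (Caliskan, Bryson, and Narayanan, 2017) used pretrained GloVe vectors
# learned from web-scale text and the Word Embedding Association Test (WEAT).
embeddings = {
    "home":     np.array([0.9, 0.1, 0.3]),
    "children": np.array([0.8, 0.2, 0.4]),
    "salary":   np.array([0.1, 0.9, 0.2]),
    "office":   np.array([0.2, 0.8, 0.3]),
    "amy":      np.array([0.7, 0.2, 0.5]),  # stands in for female names
    "john":     np.array([0.2, 0.7, 0.5]),  # stands in for male names
}

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point the same way."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def gender_association(word, female_names, male_names):
    """Positive score: the word sits closer to female names; negative: male."""
    f = np.mean([cosine(embeddings[word], embeddings[n]) for n in female_names])
    m = np.mean([cosine(embeddings[word], embeddings[n]) for n in male_names])
    return f - m

for word in ["home", "children", "salary", "office"]:
    print(f"{word}: {gender_association(word, ['amy'], ['john']):+.3f}")
```

On real embeddings, the study found this pattern at scale, with statistically significant effect sizes, simply because the training text reflected the gendered associations present in human writing.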
3. AI can be a tool for identifying “ideal” human behaviour, thereby increasing public good. Public good lies in our strategies for maximizing sustainability and equality, i.e., how big a pie we have, and how big a slice everyone gets. Individual human strategies for attaining public good can vary from competing to cooperating, depending on culture, resource constraints, and trust in community. But AI can be a powerful tool for identifying the kinds of cooperative strategies we could use to jointly optimize for sustainability and equality; the sketch after this list gives a toy illustration.
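As a toy illustration of that idea, the sketch below brute-forces a tiny public-goods game, scoring each strategy profile by the total “pie” discounted by inequality. All of the numbers and the scoring function are invented for this example; this is not Bryson’s model or any published one.

```python
import itertools
import numpy as np

# A toy public-goods game, invented purely to illustrate searching for
# cooperative strategies that balance pie size against slice sizes.
N_AGENTS = 4
ENDOWMENT = 10.0
MULTIPLIER = 1.6  # the common pool is grown, then shared equally

def payoffs(rates):
    """Each agent keeps what it doesn't contribute, plus an equal share
    of the multiplied common pool."""
    contributions = ENDOWMENT * np.asarray(rates)
    share = MULTIPLIER * contributions.sum() / N_AGENTS
    return (ENDOWMENT - contributions) + share

def social_score(p):
    """Joint objective: size of the pie (sustainability), discounted by
    inequality (the ratio of the smallest slice to the largest)."""
    return p.sum() * (p.min() / p.max())

# Brute-force search over all strategy profiles on a coarse grid.
grid = [0.0, 0.5, 1.0]  # fraction of the endowment each agent contributes
best = max(itertools.product(grid, repeat=N_AGENTS),
           key=lambda rates: social_score(payoffs(rates)))
print("best contribution rates:", best)
print("resulting payoffs:", payoffs(best))
```

In this toy setup, full cooperation maximizes the joint score even though any single agent would earn more by unilaterally defecting; surfacing exactly that kind of tension between individual incentives and collective outcomes is what makes such computational search useful.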
Want to learn more?
Watch the video of Joanna J. Bryson’s talk at the weekly Schwartz Reisman seminar.
Discover the “four conversations” that guide SRI’s research on the societal effects of technology.
About the author
Shabnam Haghzare is a graduate fellow at the Schwartz Reisman Institute and a PhD candidate in the Institute of Biomedical Engineering at the University of Toronto. Learn more about Shabnam on her website.