The terminology of AI regulation: Preventing “harm” and mitigating “risk”

 
Computer generated illustration of a human head with a pen and brain inside, next to a stack of books. Primary colour scheme.

You may hear similar terminology repeated across the spectrum of initiatives designed to regulate artificial intelligence. But what exactly do we mean when we talk about “harm,” “risk,” “safety,” and “trust”? SRI Research Lead Beth Coleman and Policy Researchers David Baldridge and Jamie Amarat Sandhu take us through the implications of the words we use in the rules we create.


Regulatory initiatives designed to address the impacts of artificial intelligence (AI) and ensure public safety are springing up worldwide. But across the wide variety of these initiatives, we tend to hear similar terminology repeated.

“Risk” and “harm,” for example, have become prevalent in discussions concerning AI, and you’ve no doubt encountered “safety” and “trust” as well. In what follows, we offer a brief overview of the terms “risk” and “harm,” and a guide to the ways in which they’re being deployed in AI oversight initiatives worldwide.

In a subsequent article, we’ll take a similar look at “safety” and “trust.” We consider here initiatives from different jurisdictions including the European Union’s (EU) AI Act and Canada's Artificial Intelligence and Data Act (AIDA). The EU AI Act employs risk as a metric to establish corresponding safety requirements. By contrast, AIDA adopts a harms-based approach, classifying systems based on their level of impactful harm.

As this type of language increasingly dominates debates, taking a close look at the distinct meanings and necessary interrelationship of the terms at play is critical to understanding proposed AI regulation. What are the underlying meanings of these terms in the contexts in which they’re being used? What kinds of narratives are we telling ourselves about AI with the words and concepts we choose?

Our aim is to advocate for clarity in (and a more informed and proactive approach to) these discussions in order to better shape the future of AI and our efforts to integrate it responsibly into our society.

The words we use shape the future we create

Ideas about the future often direct the course of innovation, particularly through the use of specific terms. For example, “self-driving” or “autonomous” vehicles, once the stuff of science fiction, are now shaping automotive innovation. And the terms we use to drive innovation are adaptable, too. Consider technology labelled as “smart.” Initially tied to “smartphones,” the term is now applied to a wider range of devices and concepts, including smartwatches and smart cities.

We apply flexible terms that reflect our evolving contexts and visions of future innovation. And the same goes for terms that make up our social infrastructure. “Democracy,” for instance, has evolved and been adapted; its meaning has changed over time since its original conception.

As Schwartz Reisman Institute researchers noted in their recent report, AI is what economists often call a “general-purpose technology” (that is, a technology that can affect an entire economy, often at a national or global level). So, the world must brace for nothing short of unprecedented, large-scale economic, social, and legal change. 

This makes it all the more important to look carefully at the terms we apply to AI technologies. As AI systems become increasingly embedded in various sectors and increasingly imbued with attributes of “intelligence,” the terms “harm” and “risk” may acquire new dimensions and ask us to adjust our understandings.

What do we mean when we say AI causes “harms” and “risks”?

“Harms” are most often seen as the damage that AI systems can cause through human reliance on their capabilities.

We can understand “harm” as extending beyond physical damage to include psychological, ethical, and socio-political dimensions, where the misapplication or misunderstanding of AI capabilities can lead to significant societal disruptions. 

For example, the flow of information on social media platforms like Facebook and Instagram is overwhelmingly curated by AI algorithms. Research has shown this can create echo chambers or spread misinformation, thereby influencing public opinion, changing political landscapes, and impacting societal norms and democratic processes.

This raises significant concerns, particularly when AI is tasked with complex decision-making in high-stakes environments such as healthcare, law, or financial services. In these contexts, overestimating AI’s capabilities—thinking, for example, that its predictions are “accurate” and “just”—could lead to ethical dilemmas, accountability issues, and tangible harms.

The key to regulating AI harms, then, is to evaluate where its use carries the potential for serious, systemic harms and introducing proportionate regulatory mechanisms to prevent said harms. Consider the emerging activity of “red-teaming,” an authorized attack upon a system that aims to expose its weaknesses. This not only evaluates the potential for harm in AI systems, but it’s also a tool increasingly called upon in regulatory approaches

“Risk” is a similarly multifaceted word, encompassing not only technical failures or nefarious uses of AI but also more elusive negative outcomes such as biases in decision-making processes, the erosion of privacy, value-misalignment, and unintended societal consequences like widespread job displacement.

It also implies something taking place at a future time—something that has not yet occurred, but might. A clear example of AI risk in action is Amazon’s use of an AI tool for recruiting. Such a tool, in theory, could mitigate human biases and promote fairer hiring practices. In practice, the tool favoured male candidates over female candidates, reinforcing gender discrimination in the workplace. The use of AI in cases such as this, despite the risk, often reflects a belief that AI systems possess faculties which they perhaps do not. The key to regulating risks is anticipating this mistaken belief and mitigating risk before it turns into tangible harm.

How should regulators think about “harm” and “risk”?

Often, regulators have had to take reactive measures after AI-related harm has occurred. But there’s a growing need to focus on reducing risk and preventing future harmful impacts.

This requires regulators to adopt new perspectives.

In finance, for example, the increasing use of AI for algorithmic trading now accounts for a substantial fraction of market activity. This introduces new kinds of risks, like  “collective risks,” as Teresa Scassa has pointed out, such as the risk of market manipulation and financial crimes through open-source AI tools, which could destabilize financial systems. These scenarios illustrate a) the complex nature of risk in AI, beyond the kind of individual economic harm covered by AIDA; and b) the necessity for a more meaningful response.

We face similar challenges in healthcare, where the adoption of AI for diagnostics and treatment recommendations could lead to mis-diagnoses or biased medical decisions if not closely monitored. Here, the terminology of “risk” becomes pivotal; inadequate regulation risks nothing short of eroding public trust in AI-integrated healthcare and undermining the entire healthcare system's integrity.

Likewise, in the nuclear energy sector, AI systems employed for predictive maintenance or operational optimization might inadvertently increase the risk of catastrophic failures or security breaches if not rigorously regulated.

Across these sectors, and others, the nuanced understanding and articulation of risk in the context of AI is crucial for developing appropriate regulatory responses.

Risk or harm: what should we focus on when regulating AI?

Risks and harms are both important.

But we hope to draw attention to the importance of centering risks because of the currently unpredictable issues that advanced AI systems such as generative AI present. Ensuring an adequate focus on risks can prevent serious harms from materializing, and possibly even guide the path of innovation to promote safety and build trust.

As we’ve seen throughout the history of innovation and social progress, our choice of language shapes our understanding of ideas and concepts—so, deliberate use of the right terminology is perhaps the first step towards effective regulation.

If we over-extend or misuse particular words, we face the danger of diluting their meaning and impact. When evaluating the various challenges raised by AI, policymakers should be mindful of whether they are dealing with a risk, a harm, or something else. Doing so can focus the problem in question and prevent misaligned governance. To stop present harms and future risks, we first need to be clear what it is that we are talking about.

Want to learn more?


About the authors

David Baldridge is a policy researcher at the Schwartz Reisman Institute for Technology and Society. A recent graduate of the JD program at the University of Toronto’s Faculty of Law, he has previously worked for the Canadian Civil Liberties Association and the David Asper Centre for Constitutional Rights. His interests include the constitutional dimensions of surveillance and AI regulation, as well as the political economy of privacy and information governance.

Beth Coleman is a research lead at the Schwartz Reisman Institute for Technology and Society and an associate professor at the Institute of Communication, Culture, Information and Technology and the Faculty of Information at the University of Toronto. She is also a senior visiting researcher with Google Brain and Responsible AI as well as a 2021 Google Artists + Machine Intelligence (AMI) awardee. Working in the disciplines of science and technology studies, generative aesthetics, and Black poesis, Coleman’s research focuses on smart technology and machine learning, urban data, civic engagement, and generative arts. She is the author of Hello Avatar and a founding member of the Trusted Data Sharing group, and her research affiliations have included the Berkman Klein Center for Internet & Society, Harvard University; Microsoft Research New England; Data & Society Institute, New York; and the European Commission Digital Futures. She served as the Founding Director of the U of T Black Research Network, recently released Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds (K Verlag, Berlin), and is currently overseeing SRI’s working group on trust in human-ML interactions.

Jamie Amarat Sandhu
is a policy researcher at the Schwartz Reisman Institute for Technology and Society. His specialization in the governance of emerging technologies and global affairs has earned him a track record of providing strategic guidance to decision-makers and addressing cross-sector socio-economic challenges arising from advancements in science and technology at both the international and domestic levels. This expertise is supported by an MSc in Politics and Technology from the Technical University of Munich's School of Social Science and Technology in Germany, complemented by a BA in International Relations from the University of British Columbia.


Browse stories by tag:

Related Posts

 
Previous
Previous

Five key elements of Canada’s new Online Harms Act

Next
Next

What are LLMs and generative AI? A beginner’s guide to the technology turning heads