When the algorithm is wrong: A new partnership calls out racism in AI systems 

 
Headshot collage of Miron Clay-Gilmore, William Paris, Sergio Tenenbaum and Karina Vold.

The Algorithmic Bias in Canada (ABC) project is confronting racism in AI systems—highlighting how tools like facial recognition and LLMs disproportionately harm Black and racialized communities, and calling for greater public awareness and equitable governance of AI in Canada. The project’s leads include Miron Clay-Gilmore, William Paris, Sergio Tenenbaum and Karina Vold (left to right).


Porcha Woodruff was eight months pregnant when she was falsely accused of robbery and carjacking. Robert Williams spent 30 hours in custody for a crime committed by someone else. And Quran Reid was imprisoned for a theft that occurred in a state he’d never visited.

All of these people are Black—and their cases are examples of what can go wrong with facial recognition technology, powered by artificial intelligence (AI) and used by law enforcement agencies to match video surveillance images with those from databases.

A major problem is that such technology is particularly bad at identifying Black faces, with failure rates as high as 35 per cent in the case of Black women.

And it’s not just facial recognition that’s at fault: programs driven by AI algorithms have been shown to produce erroneous loan, job, insurance or immigration decisions—all leading to even greater discrimination against racialized people.

Like other AI scholars, Karina Vold has long been familiar with this unfortunate reality. In recent years, she also started to notice that many important thinkers weren’t being consulted in efforts to highlight the problem or devise solutions.

“There weren’t a lot of people working in this field who had any background in critical race studies or philosophy of race,” says Vold, an assistant professor in the Department of Philosophy and the Institute for the History & Philosophy of Science & Technology.

“People with an understanding of the historical context in which technologies reproduce societal biases and continue to further patterns of repression.”

That’s why Vold—whose work lies at the intersection of the philosophy of cognitive science, the philosophy of technology, AI and applied ethics—decided to create a space where experts in the effects of algorithmic bias could meet, conduct research and keep the issue at the forefront of public consciousness.

“Large language models such as ChatGPT are being trained on the whole internet—the whole history of human online text. And humans have biases, so it’s not surprising that those are being uncovered as patterns in the text that the system’s trained on.”

SRI Research Lead Karina Vold is a principal investigator on Algorithmic Bias in Canada.


The initiative she’s founded, the Algorithmic Bias in Canada (ABC) project, is an interdisciplinary partnership that hopes to shape more equitable AI systems through academic collaboration, public engagement, and partnerships with industry, government and Indigenous communities. The project’s sponsors include U of T’s Centre for Ethics and Schwartz Reisman Institute for Technology and Society, the Centre for Research in Ethics (CRÉ) in Québec, the Chiefs of Ontario, and TELUS’s Data & Trust Office.

Along with Vold, ABC partners include principal investigators William Paris and Sergio Tenenbaum of the Department of Philosophy; SRI Postdoctoral Affiliate Miron Clay-Gilmore; and many others from the worlds of academia and industry. ABC’s activities are hosted by U of T’s Centre for Ethics, which promotes research, teaching and conversation on ethical issues. Additional collaborators include SRI Faculty Affiliate Ishtiaque Ahmed and SRI Research Lead Beth Coleman.

How does algorithmic bias happen?

“Large language models such as ChatGPT are being trained on the whole internet—the whole history of human online text,” Vold says. “And humans have biases, so it’s not surprising that those are being uncovered as patterns in the text that the system’s trained on.”

By contrast, facial recognition technology isn’t trained on the entire internet of images, but on curated data sets dominated by images of white people. “And when you look at who decides what data set to use, it’s mostly white males,” Vold adds.

AI systems can also discriminate even when race isn’t specified.

“My family’s Mexican-American,” says Vold. “Historically, people with that heritage haven’t been allowed to rent or buy houses in a lot of neighbourhoods—so in Los Angeles, for example, you have a clustering of Mexican Americans within a certain zip code. A zip code is considered enough of a ‘proxy attribute’ for an AI system to predict race.”
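To make the proxy-attribute mechanism concrete, here is a minimal, hypothetical sketch in Python (not drawn from the ABC project or any real dataset): a model is trained to approve loans using only zip code and income, with race never provided as an input, yet because zip code correlates with race in the synthetic data, the model’s decisions end up racially skewed anyway.

```python
# Hypothetical, synthetic illustration of a "proxy attribute":
# race is never given to the model, but zip code stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden group membership (never shown to the model).
race = rng.integers(0, 2, size=n)

# Residential segregation: zip code matches race 90% of the time.
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)

# Historical loan decisions were themselves biased against group 1.
income = rng.normal(50, 10, size=n)
approved = ((income > 48) & ~((race == 1) & (rng.random(n) < 0.5))).astype(int)

# Train only on zip code and income -- race is excluded from the features.
X = np.column_stack([zip_code, income])
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[race == g].mean():.2f}")
# The gap between the two printed rates shows the model reproducing the
# historical bias through the zip-code proxy, without ever seeing race.
```

The point of the sketch is that simply deleting the race column does nothing: any feature strongly correlated with race, such as a zip code, lets a model learn and reproduce the same discriminatory pattern.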

Finally, Vold cautions that Canadian governments and industry are “really leaning into the use of AI” in the delivery of countless public services. She notes that these are very often American systems, conceived in a different country and often with different purposes in mind.

Clay-Gilmore is highly familiar with the American military roots of artificial intelligence. His research explores how artificial intelligence, big data and predictive policing operate within broader regimes of counterinsurgency and state violence.

“I really think there is a crisis of human autonomy that’s occurring, and technological shifts are key to that. As we extend ourselves through technology, where is our real human agency and impact in the world? For philosophers to focus on these questions is important.”


“A lot of AI researchers are not aware of the military origins of the technology,” says Clay-Gilmore, a veteran of the United States Marines who served in the global war on terrorism following the attacks of September 11, 2001. “And I think the commercial applications sometimes blind us as citizens: we lose sight of what this technology was created for.”

Headshot of Miron Clay-Gilmore

SRI Postdoctoral Affiliate Miron Clay-Gilmore examines the racialized applications of AI, big data, and predictive policing.

Early AI research was conducted with surveillance, intelligence gathering and target recognition in mind—functions that remain important to this day.

“Artificial intelligence has changed not only the pace of our society, but the way police investigate crimes and monitor citizens,” Clay-Gilmore says. He points to a string of troubling incidents in recent history.

There is Clearview AI, a facial recognition technology used by law enforcement in the U.S. and Canada, which has been fined and banned in multiple jurisdictions over racial bias and privacy violations. Or Project Green Light, a surveillance initiative in Detroit that was found by the U.S. Department of Justice to be both racially discriminatory and ineffective at reducing violent crime. Or Cambridge Analytica, the former consulting firm involved in a privacy scandal in which the personal data of millions of Facebook users was harvested and sold to influence voter behaviour.

“What people don’t know is that the parent company of Cambridge Analytica was a defense contractor in Britain,” says Clay-Gilmore. “Its goal was subversion, countersubversion, counterinsurgency and propaganda.”

As a philosopher, Clay-Gilmore is concerned that citizens are being too complacent about the encroachment of AI on every aspect of their lives. He recently founded the Clay-Gilmore Institute for Philosophy, Technology & Counterinsurgency to further investigate such questions, and his new podcast, Algorithms of Empire, is available on YouTube.

“I really think there is a crisis of human autonomy that’s occurring, and technological shifts are key to that,” he says. “As we extend ourselves through technology, where is our real human agency and impact in the world? For philosophers to focus on these questions is important.”

Artificial intelligence is changing society, in many ways for the better. But it is far from perfect, and both Vold and Clay-Gilmore believe that some degree of public skepticism is critical to ensure accountability.

“We’re all really busy living our lives and we don’t even know that this is happening,” Vold says. “So it’s really important for Canadians to know about this, and to be aware of how it’s affecting them.”

ABC’s speaker series takes place every second Wednesday from 3:00 to 5:00 PM at the Centre for Ethics, 15 Devonshire Place. The next lecture features Professor Gideon Christian from the University of Calgary, speaking on “Algorithmic Racism in Canada.” Learn more on the ABC website.

This article was originally published on U of T’s Faculty of Arts & Science News, and is reproduced with permission.
