Luke Stark appointed inaugural SRI Scholar-in-Residence

Luke Stark, an assistant professor at Western University, has been appointed as the inaugural Schwartz Reisman Scholar-in-Residence. Stark’s work interrogates the historical, social, and ethical impacts of computing and artificial intelligence technologies, particularly those mediating social and emotional expression.

The Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto is pleased to announce the appointment of Luke Stark as its inaugural Schwartz Reisman Scholar-in-Residence.

Stark is an assistant professor in the Faculty of Information and Media Studies at Western University. Trained as a historian and in media/science and technology studies (STS), his research focuses on the historical and contemporary impacts of digital media technologies, and on developing ways to design, regulate, and otherwise govern artificial intelligence systems in the name of social justice and equality. Prior to joining Western, Stark was a postdoctoral researcher in the Fairness, Accountability, Transparency and Ethics (FATE) Group at Microsoft Research and postdoctoral fellow at Dartmouth College. He was also a fellow and affiliate of the Berkman Klein Center for Internet & Society at Harvard University.

SRI’s Scholar-in-Residence Program enables a local or national scholar to visit the Schwartz Reisman Institute and receive office and financial support for their research. The SRI Scholar-in-Residence will engage with the Institute’s community through participation in regular events and research activities, with the goal of developing collaborations and dialogues across disciplines that advance the Institute’s four strategic objectives: supporting interdisciplinary research on advanced technologies, building the field, fostering regulatory innovation, and promoting the use of AI for social good.

What are AI’s conceptual limits for decision-making?

One of the deeper issues underlying the design of AI systems is how they arrive at decisions. To what extent can a machine learning (ML) algorithm apply broad statistical patterns to discrete individual cases in a manner that is fair and accurate? In what contexts is it reasonable to infer certain outcomes based on generalized or historical data? In each case, the system must use methods of logical inference—such as deductive, inductive, or abductive reasoning—to arrive at its conclusions. But are these forms of reasoning sufficient in every situation, especially in sensitive areas like mental health and medicine, or in matters of public importance such as policing and social assistance?
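The inductive move in question—generalizing from historical group-level data, then projecting that generalization onto a single new case—can be sketched in a few lines of Python. All data, names, and thresholds here are invented for illustration; this is not a model of any particular deployed system.

```python
# A minimal sketch of inductive inference applied to an individual case.
# The data and the "neighbourhood" feature are purely hypothetical.

historical_records = [
    # (group, repaid_loan) -- illustrative, invented records
    ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True),
]

def group_rate(records, group):
    """Inductive step: generalize an outcome rate from observed cases in a group."""
    outcomes = [repaid for g, repaid in records if g == group]
    return sum(outcomes) / len(outcomes)

def infer_for_individual(records, group, threshold=0.5):
    """Apply the group-level generalization to one discrete case -- the move
    whose 'reasonableness' a regulatory framework would need to assess."""
    return group_rate(records, group) >= threshold

print(infer_for_individual(historical_records, "A"))  # True: group rate 2/3
print(infer_for_individual(historical_records, "B"))  # False: group rate 1/3
```

The sketch makes the concern concrete: the decision about the individual is driven entirely by a statistical pattern observed in others who share one attribute, which is precisely the inference whose fairness and accuracy the questions above interrogate.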

In his work as SRI Scholar-in-Residence, Stark will build on his recently published work on the topic to develop detailed conceptual frameworks for assessing and classifying the types of inferences that ML systems deploy in their analyses and decision-making. This project will seek to inform an agenda for the regulation of AI systems based on the “reasonableness” of the inferences they produce.

Stark views the development of a classification system for inferences used by ML systems as a crucial step toward regulations that can ensure AI technologies benefit society. The use cases for ML decisions, and the assumptions implicit in certain automated judgments, may not always meet the necessary criteria for fairness and accuracy, and regulators currently lack a means of assessing this aspect of AI systems. While some scholars have argued that a “right to reasonable inference” should be incorporated into data protection laws, what forms of “reasonableness” this would entail remains undefined. In some cases, Stark’s proposed system may provide grounds to reject certain use cases for ML systems before they are developed in the first place, such as those based on methods that are conjectural rather than empirical—for example, a pseudoscience like phrenology, which Stark has previously analyzed in the context of AI applications.

“Technologies can have totally unexpected and disruptive social effects, with serious ramifications in people’s everyday lives… drawing on the history and philosophy of science and critical digital studies is more vital than ever, given the amount of hype about AI.”

Aligning technical systems with human values

Stark’s project as SRI Scholar-in-Residence will contribute to a growing body of literature on algorithmic governance and the ways human values are expressed through socio-technical systems. The work will be of interest to researchers from a wide range of backgrounds, as well as to regulators and developers of AI systems.

As a researcher committed to interdisciplinary collaboration, Stark is used to working with philosophers, legal scholars, and computer scientists. “It’s absolutely critical to have thinkers from across disciplines ranging from the humanities through to engineering finding ways to work together to tackle these problems,” notes Stark. “No one field can do it alone.”

If successful, the project will contribute to decreasing potential societal harms caused by automated systems. “If the inferences used by certain types of AI can be shown to be conceptually faulty, then in my view no technical fix will do,” observes Stark. “I’m a huge fan of using AI systems in cases where their use is appropriate, but I don’t think we as societies around the world have quite gotten a handle on just how limited those cases are. It’s not just that these technologies can be misused or that they have technical faults, it’s also that we know from historical evidence that the core assumptions built into these systems are often dubious at best.”

As part of his research, Stark will continue to explore interdisciplinary conversations about how machine learning systems intersect with social and behavioural sciences such as psychology, medicine, and social work. In another ongoing project, Stark will attempt to train a machine learning model to decipher the famously illegible handwriting of the anthropologist and cybernetics pioneer Margaret Mead. “All of Mead’s notes from the Macy [cybernetics] meetings are totally illegible to a human reader,” Stark laughs. “Since she was in part responsible for the way these technologies ended up, it would be poetic justice if machine learning could help us glean new insights from her archival notes.” The project will also feed into his broader research as a historian of media, which began when he was an undergraduate at the University of Toronto’s Trinity College. “U of T has a long, illustrious track record in media history,” from Innis and McLuhan up to the present, Stark points out. His book project, a history of how human emotions have been understood (and often misunderstood) by computer scientists, is under contract with the MIT Press.

“Technologies can have totally unexpected and disruptive social effects, with serious ramifications in people’s everyday lives and for the future of equitable, shared prosperity,” notes Stark. “I think drawing on the history and philosophy of science and critical digital studies is more vital than ever, given the amount of hype about AI circulating in the press.”

In addition to his research publications, Stark has published essays for public venues including The Globe and Mail, The National Post, The Boston Globe, and Slate, on topics such as facial recognition technologies, data privacy, and the future of the gig economy. He has also appeared on CBC Radio to discuss the social impacts of AI, as well as on a recent special episode of CBC’s The Nature of Things.

“In my life and work, I want to work to make sure digital technologies like AI support equity, shared prosperity, and protection from material and symbolic injustice,” says Stark. Crediting his parents for instilling a strong sense of social justice in him from a young age, Stark observes that AI systems are invariably used to reinforce existing forms of inequality. “The blows from the inappropriate use of these technologies often fall first on traditionally disadvantaged communities, but eventually they will come for all of us,” says Stark.

On November 22, 2023, Stark will deliver a public lecture on his inference work as part of the SRI Seminar Series, at a special in-person session at the University of Toronto’s St. George Campus. Registration for the event is free and open to the public.
