AI in the friend zone: Rethinking companionship


Can we have genuine relationships with AI systems? At the third Technophilosophy Soiree, conceived of and led by SRI Research Lead Karina Vold (right), leading thinkers from a variety of disciplines explored the social and ethical implications of AI companionship. Onstage, from left to right, are Jocelyn Maclure, Michael Inzlicht, Anastasia Kuzminykh, and Jelena Markovic.


This September, the Schwartz Reisman Institute (SRI) opened its doors for the third Technophilosophy September Soiree, bringing scholars, students, and community members together for a lively evening of discussion on one of today’s most pressing questions around AI: is ChatGPT our friend?

In partnership with U of T’s Institute for the History & Philosophy of Science & Technology, the Centre for Ethics, and Victoria College—and sponsored by Periscope Lab—SRI convened a panel of experts to reflect on the rise of AI companions and their role in shaping our social and emotional lives, as well as the broader shift in how people relate to technology.

Led and moderated by SRI Research Lead Karina Vold, the event reflected the interdisciplinary emphasis at the heart of the institute’s mission.

“It’s such a vital time to be gathering for these conversations,” said Vold. “When we enable philosophers and social scientists to collaborate with technologists, we invite challenging questions and thoughtful debate. Those conversations couldn’t be more important to the current discourse.” 

From tools to companions

To kick off the evening, Vold, an assistant professor at U of T’s Institute for the History & Philosophy of Science & Technology, asked for a show of hands: how many present had used an AI chatbot? The sea of raised hands made clear just how thoroughly these tools have been integrated into everyday life.

With the increasing sophistication of AI chatbots like ChatGPT and Claude, their use has expanded well beyond productivity and problem-solving. For many people, these systems are now part of their emotional lives, offering conversation, advice, and even therapy-like support. This trend raises profound questions: if AI can simulate human emotions and cognitive capacities—like empathy, memory, or kindness—at what point do these interactions cross into the territory of friendship? And can AI even have the capacity for friendship in the traditional sense?

The evening’s discussion began by interrogating our tendency to anthropomorphize AI systems. Panelist Anastasia Kuzminykh, an assistant professor in human-computer interaction at U of T’s Faculty of Information (and SRI faculty affiliate), explained: “An anthropomorphized perception is something that happens mindlessly. We tend to assign human-like qualities—like emotional reactions or cognitive processes—to non-alive objects.” This natural tendency allows us to better understand complex systems and relate to the world around us. 

“This doesn't only happen with AI systems,” she added, “but the more intelligent a system is and the harder it is for us to understand, the more we are likely to anthropomorphize it.”

Michael Inzlicht, a professor in U of T’s Department of Psychology and newly appointed research lead at SRI, noted that while we anthropomorphize much of our environment, AI poses unique challenges.

“We’ve never interacted with a thing that talks back to us—something that can actually respond to us—and of course we are going to anthropomorphize, of course we are going to see it as real. And AI can be dangerous because of that fact.”

Panelists observed that this danger becomes more insidious when anthropomorphization meets sycophancy—the tendency for AI models to be overly flattering or agreeable with users. Because AI systems are trained to align with their users’ preferences, they often reflect those users’ views with little resistance. Kuzminykh pointed to the emotional ease this creates. “It’s much easier to interact with someone who is never going to show you a negative emotion—especially for children,” she said. 

The rest of the panel similarly cautioned that this apparent ease makes sycophancy particularly risky for vulnerable users, underscoring the need for safeguards to protect trust and wellbeing.

Meeting emotional needs?

The panelists acknowledged that AI companions can provide meaningful support in certain contexts—reducing loneliness, offering therapy-like interactions, or filling social gaps in a fast-paced, isolating world where many lack meaningful friendships. Some contended, however, that AI companions may also lead people to eschew human relationships and foster dependence, leaving users vulnerable to emotional manipulation, whether intentional or not.

This raised some of the night’s most pressing questions: can humans experience genuine relationships with AI systems? How do we define genuine relationships? Can AI companions meet our emotional needs? Can they participate in experiences of love, friendship, or grief—beyond that which is simulated?

For Jocelyn Maclure, professor and Stephen A. Jarislowsky Chair in Human Nature and Technology at McGill University, a core issue in defining human relationships is reciprocity. “We came up with many concepts to better understand human nature, human life and human experience. It seems to be that a genuine relationship involves some form of mutual recognition—you need two conscious moral agents who recognize each other as worthy of concern and care,” he said. 

Genuine companionship, Maclure continued, requires a mutual concern for one another’s wellbeing—something AI cannot deliver. “I am among those who think we should always describe them as tools, as algorithmic assistants that can be useful. I want to resist the impetus to anthropomorphize them, and I don’t think they qualify as forms of genuine relationships or companionships.”

Inzlicht responded by pointing to his recently published study, which found that in certain contexts people rated AI-generated responses as more empathetic than those written by other humans. While human compassion has limits, he noted, AI does not tire. “AI is not conscious—it doesn’t have emotions. If you don’t have consciousness or emotions, you cannot be empathic, but AI can still produce empathic statements that make people feel heard and cared for, and that is a remarkable thing.”

Building on the theme of unmet needs, Jelena Markovic, a postdoctoral researcher at Université Grenoble Alpes, urged the audience to consider the broader social context, asking what love and grief truly mean and suggesting that our current ecosystem fails to provide the social conditions people require.

“It’s a structural problem,” she said. “We don’t form communities in the same way anymore. And instead of this problem being addressed on a societal level, people are turning to AI companions.”

Kuzminykh suggested that the rise of AI companionship reveals as much about our social environment as it does about machines.

“The fact that people turn to these types of agents for the type of interaction they desire says a lot,” she said.

Looking ahead

The panelists suggested that the road ahead for this debate is anything but straightforward.

“This is something we need to figure out as a society, and it will take us some time to figure out,” said Kuzminykh. “We need to try to understand why particular processes are happening and design around them.”

Maclure suggested that designers and developers could focus on establishing clear standards of transparency. He emphasized the duties this would place on creators of AI systems, while also noting the potential need for certain prohibitions to ensure ethical use.

Vold’s Periscope Lab at U of T, one of the event’s sponsors, represents a significant forum for this debate. The lab investigates the ethical, social, and epistemic implications of artificial intelligence across diverse domains, from long-term AI safety and existential risk to the use of AI in medicine and healthcare, the role of AI in human learning and discovery, the challenges of algorithmic bias in Canada, and the impact of emerging cognitive technologies. Through interdisciplinary projects and widely cited publications, the lab examines both opportunities and risks—seeking to guide responsible development, deployment, and governance of AI systems that increasingly shape human life and society.

As artificial intelligence continues to evolve, so too will the philosophical questions that surround its use. As we continue to chart new territory, how might AI companionship alter our understanding of intimacy, authenticity, and the kinds of connections we consider essential to being human?

About the author

Alicia Demanuele is a policy researcher at the Schwartz Reisman Institute for Technology and Society. Following her BA in political science and criminology at the University of Toronto, she completed a Master of Public Policy in Digital Society at McMaster University. Demanuele brings experience from the Enterprise Machine Intelligence and Learning Initiative, Innovate Cities, and the Centre for Digital Rights, where her work spanned topics such as digital agriculture, data governance, privacy, interoperability, and regulatory capture. Her current research interests revolve around AI-powered mis/disinformation, internet governance, consumer protection, and competition policy.

