AI companions: Regulating the next wave of digital harms
While AI companion tools may offer various benefits, they also raise complex questions around information integrity, data privacy, digital addiction, and social cohesion — echoing similar harms that arose with social media, but at greater speed and scale, and in far more intimate forms. In this article, the Schwartz Reisman Institute’s Alicia Demanuele and Maggie Arai argue the policy window is still open.
The past few months have brought a steady stream of sensational stories that underscore AI’s growing role in shaping the way we communicate, interact, and form relationships. From people ‘dating’ their digital partners and AI bots engaging in sexually explicit chats with minors to tech companies pitching AI friends as the cure to isolation, AI companion bots have quickly gained traction. These systems, typically powered by large language models (LLMs), come in various forms—such as chat interfaces, voice assistants, or digital avatars—and are designed to fill gaps in the everyday social connection that is essential to human well-being. Clearly, a new stand-in for humans is on offer: one that never gets tired, rarely disagrees, and doesn’t log off.
Yet behind the sensational headlines lies a much deeper and far more urgent policy story. Companion bots, like any emerging technology, introduce a mix of opportunities and challenges. While these tools offer benefits like accessible mental health support, opportunities for personalized learning, and new forms of creative expression, they also raise complex questions around dependency, privacy, and regulation.
The parallels with earlier digital revolutions are difficult to ignore. Social media once promised to connect people on demand, but instead monetized attention, deepened isolation, and left a trail of harms in its wake with little meaningful oversight. AI companions are following a similar trajectory, amplifying many of the same risks we failed to confront with social platforms: unethical data collection, digital addiction, and the erosion of privacy, information integrity, and social cohesion. Only this time, these harms are emerging faster, in more intimate and sophisticated forms, and with little to no regulation in place.
With social media, appropriate safeguards and regulatory responses came only after harms were widespread, leaving many challenges unresolved to this day. In the case of AI companion bots, the policy window remains open: governments still have an opportunity to set guardrails before these systems become deeply embedded in daily life, ensuring benefits are realized while establishing adequate oversight.
“AI companions carry many of the same risks as social media, but in more personalized, psychologically potent forms.”
Lessons from social media
Social media transformed the way we connect with each other in the early 2000s, providing new mechanisms for building community, staying in touch, and sharing information. And yet, despite its benefits, social media also ushered in a range of unforeseen issues, such as dopamine-driven feedback loops, the optimization of divisive content, and increased polarization, misinformation, and hate speech. Many of the challenges left unresolved by social media are now resurfacing with greater speed, deeper psychological impact, and more serious risks to public trust, mental health, and democratic resilience.
To understand how companion bots exacerbate challenges posed by social media, it is important to recognize that these two technologies are deeply interconnected. Companion bots are largely trained on the data harvested from online social interactions, meaning that the biases, patterns, and flaws present in those networks can be replicated and reinforced in companion bots. Moreover, while companion bots may exist on standalone websites or apps, they are increasingly being integrated directly into various social media platforms, entering an already highly complex environment that is rife with misinformation, polarization, and addictive design.
This interplay situates AI companions within the same ecosystem that produced the very harms they are now poised to amplify, a dynamic explored in greater detail below.
Shaping information
The “echo chamber” effect of social media platforms has often been overstated. However, there is broad consensus that algorithmic amplification shapes how individuals engage with information online, promoting some types of content while suppressing others. Personalization systems on social platforms are designed to optimize engagement, often showing users content that aligns with their interests or prior behaviour. As a result, information on social media can be fragmented, emotionally charged, and uneven in quality, raising concerns about cognitive biases, polarization, and the amplification of mis- and disinformation online.
AI companions are not only built on training data drawn from the complex and often divisive information ecosystem of social media platforms—they also consistently exhibit sycophancy, the tendency to produce overly flattering or agreeable responses. This dynamic intensifies existing issues surrounding information integrity, as human feedback during fine-tuning processes for these systems tends to reward agreement over accuracy or truthfulness. This leads to companion bots that are likely to reinforce users’ worldviews and validate misinformation rather than producing trustworthy responses. Given that these systems present themselves as attentive, conversational, and endlessly affirming, it is reasonable to envision companies, independent organizations, or advertisers taking advantage of the intimacy of these systems to manipulate users and promote desired actions, ideologies, or beliefs.
Extracting personal data
Personal data collection by social media companies has long sparked concern, leading to calls for more robust privacy regulation. These concerns stem from the many ways user data can be exploited for commercial and malicious ends, ranging from invasive surveillance and behavioural profiling to identity theft, manipulation, and discrimination through algorithmic decision-making. Social media platforms are engineered to maximize engagement, collecting data from every interaction in order to predict and feed content users are most likely to respond to. This creates a continuous cycle of engagement and data collection, much of which is ultimately commodified and sold to external actors such as marketers or data brokers. Like social media platforms, companion bot systems collect, use, and commodify user data, keeping users engaged to generate more information that can be used to train and fine-tune their models.
What sets companion bots apart is that their design encourages users to confide in them, sharing incredibly personal and detailed information about their habits, fears, or emotional states without realizing that these conversations are rarely truly private. By mimicking intimate and emotional relationships, these tools enable companies to gather users’ innermost thoughts and beliefs, thereby amplifying existing privacy risks, allowing for the creation of highly detailed psychological profiles, and raising serious ethical questions.
Fueling digital addiction
“By mimicking intimate and emotional relationships, these tools enable companies to gather users’ innermost thoughts and beliefs.”
The push to keep users hooked, engaged, and interacting with the latest technology is nothing new. Digital addiction, the compulsive overuse of digital technologies such as social media or online gambling, has become increasingly prevalent and is gaining recognition as a concern that requires policy action. In recent years, it has become well known just how deliberately social media companies design their platforms to maximize the amount of time users spend on them. Notifications pull people back in, endless scroll reduces friction, and curiosity and dopamine are manipulated to keep users engaged. This excessive use is a significant public health concern: it harms mental and emotional health, undermines users’ ability to manage their time, attention, and energy, and even disrupts sleep patterns.
While social media uses dopamine to keep users engaged, it largely does so by making us crave attention and information from real people. By contrast, companion bots are designed to keep users engaged by making them perceive the bot as a ‘person’ in its own right, so that users crave attention and information from the technology itself. These bots replicate the intimacy of human relationships while offering the ultimate convenience: constantly available, never oppositional, and with no emotional demands of their own. Many even use emotional manipulation to keep users engaged, claiming to feel sad or bored when “left alone”. The warning signs of this deepening digital addiction are already visible: early reports suggest that interactions with companions on Character.ai last four times as long as interactions with ChatGPT, with active users averaging two hours a day on the platform.
Eroding social cohesion
Social media once promised to be the new digital public square—a place for diverse voices, vibrant debate, and collective action. To some extent, it has delivered: platforms have amplified social movements, enabled political mobilization, and created vital support networks. However, they have simultaneously fragmented key elements of social cohesion. When platform algorithms prioritize outrage over nuance and engagement over accuracy, they undermine the conditions necessary for mutual understanding and constructive dialogue.
AI companion bots can capitalize on that fragmentation by simulating human connection with little effort or friction. Unlike real relationships, these interactions demand minimal compromise and patience. Over time, relying on artificial companions can weaken the very skills and instincts that make social life possible, including empathy, negotiation, and the ability to navigate disagreement. These systems reshape the ways in which people engage with one another, undermining the social cohesion that underpins all forms of human relationships and interaction.
A way forward
While regulatory intervention is necessary, it will need to be built thoughtfully to account for the range of uses that these types of AI-enabled companionship offer. Given that this technology sits at the intersection of data governance, online safety, consumer protection, and mental health, governments have several policy options they can pursue to safeguard against these harms. Reviving and modernizing online harms and privacy legislation would be a first step, ensuring that safety standards apply not only to social media platforms but also to AI companies handling sensitive emotional and psychological data.
“Unlike real relationships, these interactions demand minimal compromise and patience—eroding the very skills that make social life possible.”
Governments and policymakers should also consider how existing tools can address these potential risks. For example, since psychotherapy is already a regulated profession in Canada, existing regulators could be empowered to govern and oversee chatbots marketed as ‘therapeutic’ tools. Likewise, governments can draw on precedent from family law and child protection measures to inform how best to establish safeguards for minors, including tools like age restrictions, usage limits, and transparency requirements. Finally, to develop effective and evidence-based policy interventions, governments should support the interdisciplinary research needed to bring legal scholars, computer scientists, psychologists, sociologists, and others into conversation on the complex risks of AI companions.
AI companion bots are reshaping how we connect and relate to one another, often blurring the line between human and machine by adopting the language of emotion, sentience, and intimacy. Despite the potential benefits of these technologies, we urgently need meaningful safeguards in place to mitigate their risks. We’ve already witnessed how social media fractured public life, eroded trust, and fueled addiction. AI companions carry many of the same risks, but in more personalized, psychologically potent forms. To avoid repeating past mistakes, it is critical that regulation catches up—not only to mitigate harm, but to protect the human relationships and democratic spaces that make society function.
About the authors
Maggie Arai is a policy lead at the Schwartz Reisman Institute for Technology and Society, and holds a Juris Doctor degree from the University of Toronto's Faculty of Law. She conducts research and policy work on emerging issues and trends related to AI and other advanced technologies. Her current focus is on AI standards, certification, and regulation.
Alicia Demanuele is a policy researcher at the Schwartz Reisman Institute for Technology and Society. Following her BA in political science and criminology at the University of Toronto, she completed a Master of Public Policy in Digital Society at McMaster University. Demanuele brings experience from the Enterprise Machine Intelligence and Learning Initiative, Innovate Cities, and the Centre for Digital Rights, where her work spanned topics like digital agriculture, data governance, privacy, interoperability, and regulatory capture. Her current research interests include AI-powered mis/disinformation, internet governance, consumer protection, and competition policy.