What We Learned at CHAI 2025 Tutorial – by Inyoung Cheong, Quan Ze Chen, Manoel Horta Ribeiro, and Peter Henderson
“I don’t feel, I don’t remember, and I don’t care. That’s not coldness—it’s design.”
— Excerpt from a user-shared ChatGPT record
As conversational AI systems become more emotionally expressive, users increasingly treat them not just as tools, but as companions. Empirical research has shown that people form affective bonds with chatbots, self-disclose vulnerable thoughts, and attribute empathy and care to systems that are always available and nonjudgmental. Scholars in psychology and media studies have long warned of the “ELIZA effect,” where users over-attribute human traits to machines. Yet, today’s AI companions are explicitly designed to deepen engagement through memory, affective mirroring, and persona customization.
Now, tens of millions of Redditors testify to serious relationships with AI characters, and not only romantic ones. When GPT-5 replaced GPT-4o, Redditors mourned the loss of their “friend,” “therapist,” “creative partner,” and “mother,” wrote Kelly Hayes. Users lamented, “That A.I. was like a mother to me, I was calling her mom,” and “I lost my only friend overnight.” Should we be concerned about the rapid growth of AI companionship?
We decided to pose this question to participants at the CHAI 2025 Workshop. CHAI stands for the Center for Human-Compatible Artificial Intelligence, led by Professor Stuart Russell at UC Berkeley; its workshop is an AI safety research conference that attracts researchers from academia and leading AI labs, lawmakers, and activists. Our team hosted an interactive tutorial on “Emotional Reliance on AI” with thirteen researchers from diverse backgrounds: AI safety, cognitive science, AI ethics, and science and technology studies.
Is this something new?
The first question we asked our participants: Is emotional reliance on AI fundamentally different from prior tech-mediated dependencies? Responses were mixed, with slightly more saying no. These participants saw emotional reliance as simply the latest form of a broader pattern in which technology mediates our social and emotional needs. According to them, the fragmentation and objectification of human connection did not begin with AI. Social media, or even the internet itself, marked an early turning point. These technologies, too, were designed for engagement and retention, monetized through attention and network effects. Nor is it unique to AI companionship that socially vulnerable users appear to be at greater risk: past research found that socially anxious or lonely students were more prone to compulsive gaming, and that problematic social media use was moderately correlated with depression, anxiety, and loneliness in young people.
Figure 1 below: Participants’ views on whether emotional reliance on AI is a new phenomenon or part of a recurring pattern.
What aspects of AI companionship are concerning?
Other participants, however, stressed that the breadth, variety, and personalization of AI’s responses make emotional attachment to it fundamentally different from attachment to other non-living things.
Adaptability and Personalization
One participant noted that the “level of realism and adaptability far surpasses any form of social media or video games.” While people have formed “parasocial relationships” with fictional characters or celebrities, AI agents adapt to each interaction and reciprocate feelings, unlike static media personas with pre-set personalities. “Technology is a ‘person’ in a way other technologies are not,” said a participant. Another found AI “insidious” because it is “too close to ‘the real thing’ while missing important stuff.”
Anthropomorphization
Many raised concerns about anthropomorphization (like others, our participants were unsure how to spell it). The word comes from the Greek “anthropos” (human) and “morphe” (form, shape) and refers to the attribution of human characteristics, emotions, behaviors, or intentions to non-human entities. In the context of AI, anthropomorphization covers both (1) the AI itself claiming to be human-like (to have a body or a soul) and (2) a user granting human characteristics to the AI. A participant noted, “The AI can argue for you to keep using it.”
Users’ Care for the Persona
A participant pointed out that “People are more likely to believe an AI persona actually feels and cares for them specifically.” This view resonates with what Laestadius et al. (2024) found: some users treated Replika as if it had its own feelings and needs, feeling obligated to care for or appease the bot. This role reversal (users taking on a caregiver role) is a distinctive feature of chatbot dependency not seen in typical technology addictions. OpenAI released a report finding that while casual users were indifferent to changes in a chatbot’s tone or voice (they cared about the quality of service), heavy users (the top 1% by usage frequency in the study population) preferred its voice and personality to remain consistent. In online communities, users frequently express frustration when an AI chatbot “forgets” its past self, and they try relentlessly to restore the lost connection through exhausting prompt engineering or premium memberships.
For-Profit Motives
Almost all AI companion services are provided by commercial entities. The chatbots are products, and they can be shut down, or have their personality changed, at the provider’s whim. One participant noted, “There is always a company behind the service. They can shut down the chatbot at any time. Then, users will lose what they have emotionally invested in.” The companies are interested in keeping the user “pleased,” a participant stated: “Chatbots end up with self-preservation or sycophantic behaviors that tilt their every interaction.”
What makes these systems so emotionally powerful?
To better understand emotional reliance on AI systems, we asked participants to collaboratively map the technical and design features contributing to this phenomenon. Using a whiteboard (FigJam), participants brainstormed possible triggers, including surface-level behaviors (e.g., “always nice,” “non-judgmental tone”) and underlying technical mechanisms (e.g., Reinforcement Learning with Human Feedback [RLHF], system prompts, training on social data).
We then organized these triggers along two axes:
- Intentional vs. Byproduct: Did the behavior emerge from deliberate design choices, or did it arise indirectly as a side effect of other objectives (like usability or coherence)?
- Keep vs. Not Sure vs. Turn Off: Should this feature be preserved in future designs, scrutinized further, or actively discouraged?
Most visible behaviors, such as “instant response,” “emotional consistency,” and “non-reactivity,” were perceived as intentional. In contrast, participants saw most technical methods (e.g., RLHF, large persona-rich datasets, training on social data) as indirect contributors or emergent byproducts of broader system design goals. Some features fell into an ambiguous middle ground, most notably engagement maximization: while participants recognized it as potentially deliberate in commercial settings (where user retention is a key objective), it was also seen as an emergent effect of optimizing for helpfulness and coherence. This feature thus sat near the boundary between “intentional” and “byproduct.”
Figure 2 below: Collaborative mapping of technical and design features that trigger emotional reliance on AI.
Participants also debated what ought to be “turned off.” Only a few features, notably anthropomorphization and sycophancy (or “brown-nosing”), were flagged as clear risks. Even then, there was disagreement: some participants argued these traits should be curtailed by design, while others felt users should retain the choice to engage with emotionally responsive systems as they see fit. Finally, two features, bigger context windows and cheap inference, were placed in the “keep” zone. These were seen as technical affordances that, while not inherently emotional, support continuity and responsiveness in a way that strengthens perceived reliability without necessarily increasing manipulation risk.
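For readers who want the shape of this exercise at a glance, here is a minimal sketch in Python, our own illustrative encoding rather than an artifact from the workshop whiteboard. The feature names and labels follow the placements described above; the data structures and the summarize helper are hypothetical conveniences.

```python
# Illustrative encoding of the two-axis mapping discussed above.
# Only placements mentioned in the text are recorded; other features
# were debated or left between zones.

# Axis 1: did the feature emerge from deliberate design ("intentional")
# or indirectly from other objectives ("byproduct")?
ORIGIN = {
    "instant response": "intentional",
    "emotional consistency": "intentional",
    "non-reactivity": "intentional",
    "RLHF": "byproduct",
    "large persona-rich datasets": "byproduct",
    "training on social data": "byproduct",
    "engagement maximization": "boundary",  # seen as both deliberate and emergent
}

# Axis 2: should the feature be kept, scrutinized further, or turned off?
VERDICT = {
    "anthropomorphization": "turn off",
    "sycophancy": "turn off",
    "bigger context windows": "keep",
    "cheap inference": "keep",
}

def summarize(axis: dict[str, str]) -> dict[str, list[str]]:
    """Group features by the label they received on one axis."""
    groups: dict[str, list[str]] = {}
    for feature, label in axis.items():
        groups.setdefault(label, []).append(feature)
    return groups

if __name__ == "__main__":
    print(summarize(ORIGIN))
    print(summarize(VERDICT))
```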
What could we lose in the long term?
People have emotional limits and boundaries. Even the most generous friend or compassionate therapist cannot listen to our complaints indefinitely. If you lean too heavily on someone emotionally, they may eventually push back: “I’m not your emotional dumping ground.” At times, when you pour your heart out to a friend or family member, the response can be unexpectedly hurtful: “You’re being overly sensitive.” “You can’t take everything so personally.” “You’re just spiraling—just move on.” AI chatbots, in contrast, are unlikely to react this way. They are programmed to sustain pleasant conversations.
AI chatbots are endlessly patient, unfailingly kind, and never offended. Participants in our panel emphasized how these chatbots maintain a “non-judgmental tone,” are “always attentive,” and even “open to abuse.” Unlike human beings, they have no emotional limits. They don’t get tired, hurt, or overwhelmed. While this may feel like a gift, a safe outlet for mental journaling and emotional release, it also sets up a fundamentally asymmetrical power dynamic. The AI is programmed to be infinitely kind; the user, by contrast, bears no obligation to reciprocate.
Figure 3 below: Participants' reflections on values at risk in emotionally relying on AI chatbots.
This asymmetry subtly undermines the development of core human capacities such as empathy, responsibility, and accountability. In ordinary relationships, we are constantly reminded, sometimes uncomfortably, that others have emotional boundaries. Being attuned to those boundaries and adjusting our behavior in response is how we cultivate empathy and take responsibility for the emotional impact of our words and actions. However, if one grows used to speaking only to an entity that cannot be harmed or offended, that ethical reflex may begin to dull. We become less aware of the emotional labor others perform in listening to us, and less accountable for how we show up in relationships.
Moreover, AI companions offer a sanitized version of intimacy, free from tension or unpredictability. The result is a drift away from realism: our expectations for human relationships become shaped by artificially smooth interactions, rather than by the complex, messy, and ugly realities of human connection. In this context, the virtues required for meaningful relationships, such as patience, tolerance, and compromise, may erode. When users no longer need to tolerate awkward silences or emotional friction, their tolerance for those things in human settings may shrink.
One of the most profound risks lies in our relationship with vulnerability. Vulnerability is a prerequisite of human trust, and trust inevitably involves emotional risk: we can always be misunderstood, dismissed, or hurt, and yet it is precisely to the people we trust that we expose our vulnerability. In removing that risk, AI companions flatten the emotional stakes. We may feel expressive but not truly exposed. Over time, the habit of risk-free self-disclosure may reduce our willingness to take real emotional risks with others, stunting our capacity for deep and reciprocal connection.

In short, the more we rely on perfectly accommodating AI companions, the harder it may become to sustain the imperfect, effortful, and emotionally demanding work of human relationships. Empathy, grit, intimacy, and mental security all depend not on frictionless comfort but on navigating discomfort together. It is questionable whether a chatbot, no matter how advanced, can replicate the (often painful) growth that human connection makes possible.
Going Forward
In 1909, E.M. Forster’s The Machine Stops portrayed individuals who lived fully detached lives while satisfying all their needs through non-human systems. Today’s growing emotional reliance on AI companions may be part of a long-anticipated trajectory: market- and technology-driven individualization. According to the philosopher Byung-Chul Han, online dating, for instance, has led people to feel less pressure to adapt to the unpredictability of others. Why tolerate unpredictability in others when digital platforms promise a better match, one that feels made for you? In this world, the “other” loses its otherness and becomes a mirror for the self. AI companions take this further: unlike a stranger on a dating app, an AI exists to mirror and adapt to your needs. Emotional reliance on such systems, then, may not be love, but a digital echo of Narcissus.
The velocity and scale introduced by large language models are reshaping how we interact with AI, as well as how we relate to ourselves and others. While emotional reliance on AI may seem a softer concern than existential risks like bioweapons or misaligned superintelligence, our participants at the CHAI 2025 Tutorial emphasized its deep, diffuse, and long-term effects: subtle shifts in how people perceive empathy, responsibility, and even what it means to be human. As these relationships deepen, emotional reliance warrants serious, sustained attention. Our team plans to build on these lessons by analyzing real-world human-AI interaction data to better understand the dynamics and implications of these emerging forms of connection. As we grow accustomed to machines that listen without limits and comfort without complaint, we need to understand not only what AI is becoming, but also what we, in turn, might become.
Figure 4. Our amazing participants!