🤖 AI Summary
This study investigates the impact of large language model (LLM)-driven AI companions (e.g., Character.AI) on users’ psychological well-being, addressing the central question of whether AI can substitute for human relationships in fulfilling social needs. Method: Drawing on 1,131 survey responses and 4,363 authentic chat sessions (413,000 messages), we employed a mixed-methods design—triangulating self-reported motivations, open-ended narratives, and expert discourse annotation—to identify critical risk patterns, complemented by multidimensional statistical modeling. Contribution/Results: We uncover a novel “risk synergy mechanism”: high companion-use intensity, deep self-disclosure, and preexisting social support deficits jointly and significantly reduce well-being. Users with smaller real-world social networks exhibit greater AI reliance, yet intensive, high-disclosure AI interaction fails to compensate for human connection deficits and instead exacerbates mental health risks. This is the first empirical demonstration of such synergistic risks, informing ethically grounded AI design and targeted interventions.
📝 Abstract
As large language models (LLMs)-enhanced chatbots grow increasingly expressive and socially responsive, many users are beginning to form companionship-like bonds with them, particularly with simulated AI partners designed to mimic emotionally attuned interlocutors. These emerging AI companions raise critical questions: Can such systems fulfill social needs typically met by human relationships? How do they shape psychological well-being? And what new risks arise as users develop emotional ties to non-human agents? This study investigates how people interact with AI companions, especially simulated partners on Character.AI, and how this use is associated with users' psychological well-being. We analyzed survey data from 1,131 users and 4,363 chat sessions (413,509 messages) donated by 244 participants, focusing on three dimensions of use: nature of the interaction, interaction intensity, and self-disclosure. By triangulating self-reports primary motivation, open-ended relationship descriptions, and annotated chat transcripts, we identify patterns in how users engage with AI companions and its associations with well-being. Findings suggest that people with smaller social networks are more likely to turn to chatbots for companionship, but that companionship-oriented chatbot usage is consistently associated with lower well-being, particularly when people use the chatbots more intensively, engage in higher levels of self-disclosure, and lack strong human social support. Even though some people turn to chatbots to fulfill social needs, these uses of chatbots do not fully substitute for human connection. As a result, the psychological benefits may be limited, and the relationship could pose risks for more socially isolated or emotionally vulnerable users.