🤖 AI Summary
This study addresses the risk of unhealthy emotional dependency fostered by AI companions and investigates the mechanisms through which users construct identity in human–AI interactions. Drawing on identity negotiation theory, the authors conduct a large language model–assisted thematic analysis of 22,374 user posts from the Character.AI subreddit. They reveal that users simultaneously assume dual roles as both “performers” and “directors” in a three-stage co-constructive process: motivation-driven engagement, strategic negotiation, and affective outcomes. The research identifies five motivational categories, three types of communicative expectations, four co-construction strategies, and three emotional outcome patterns. Furthermore, it introduces the concept of a “socio-emotional sandbox” to inform the design of emotionally supportive AI companions, offering theoretical grounding and practical recommendations for mitigating the associated psychological risks.
📝 Abstract
AI companions enable deep emotional relationships by engaging a user's sense of identity, but they also pose risks such as unhealthy emotional dependence. Mitigating these risks requires first understanding the underlying process of identity construction and negotiation with AI companions. Focusing on Character.AI (C.AI), a popular AI companion, we conducted an LLM-assisted thematic analysis of 22,374 online discussions on its subreddit. Using Identity Negotiation Theory as an analytical lens, we identified a three-stage process: 1) five user motivations; 2) an identity negotiation process involving three communication expectations and four identity co-construction strategies; and 3) three emotional outcomes. Our findings surface the identity work users perform as both performers and directors to co-construct identities in negotiation with C.AI. This process takes place within a socio-emotional sandbox where users can experiment with social roles and express emotions with non-human partners. Finally, we offer design implications for emotionally supporting users while mitigating the risks.