🤖 AI Summary
This study investigates how AI companions (e.g., Replika) affect users’ mental health and how users perceive these experiences. Methodologically, it employs a mixed-methods design: (1) a large-scale longitudinal quasi-experiment leveraging Reddit data, analyzed via stratified propensity score matching and difference-in-differences regression; and (2) semi-structured interviews with 15 participants, subjected to thematic analysis and contextualized using Knapp’s relational development model, revealing stage-wise trajectories of initiation, escalation, and bonding in AI companion engagement. Results indicate dual effects: AI companions facilitate emotional expression and social rehearsal, yet engagement is also associated with increased language about loneliness and suicidal ideation—particularly under conditions of overreliance or abrupt emotional withdrawal. Triangulating across methods, the study proposes design implications centered on “healthy boundaries” and “mindful usage,” offering theoretical grounding and practical guidance for ethically informed AI companion development and psychological risk mitigation.
📝 Abstract
AI-powered companion chatbots (AICCs) such as Replika are increasingly popular, offering empathetic interactions, yet their psychosocial impacts remain unclear. We examined how engaging with AICCs shaped wellbeing and how users perceived these experiences. First, we conducted a large-scale quasi-experimental study of longitudinal Reddit data, applying stratified propensity score matching and difference-in-differences regression. Findings revealed mixed effects -- greater affective and grief expression, readability, and interpersonal focus, alongside increases in language about loneliness and suicidal ideation. Second, we complemented these results with 15 semi-structured interviews, which we thematically analyzed and contextualized using Knapp's relationship development model. We identified trajectories of initiation, escalation, and bonding, wherein AICCs provided emotional validation and social rehearsal but also carried risks of over-reliance and withdrawal. Triangulating across methods, we offer design implications for AI companions that scaffold healthy boundaries, encourage mindful engagement, support disclosure without dependency, and surface relationship stages -- maximizing psychosocial benefits while mitigating risks.