Negotiating Relationships with ChatGPT: Perceptions, External Influences, and Strategies for AI Companionship

📅 2026-01-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how users perceive and construct emotional relationships with general-purpose chatbots such as ChatGPT, and how external factors, particularly model updates, affect the stability of these AI companionships. Through methodological triangulation of semi-structured interviews (n=13), a survey (n=43), and computational analysis of over 41,000 Reddit posts, the research offers a first systematic account of users' cognitive frameworks around agency and autonomy in AI companionship. It further identifies the proactive strategies users adopt to manage these relationships, such as issuing behavioural instructions and migrating to other platforms. The findings highlight tensions between users' affective bonds and the operational objectives, safety constraints, and design priorities of AI systems, offering insights for designing transparency mechanisms and accountability structures in human-AI interaction.

📝 Abstract
Individuals are turning to increasingly anthropomorphic, general-purpose chatbots for AI companionship, rather than roleplay-specific platforms. However, not much is known about how individuals perceive and conduct their relationships with general-purpose chatbots. We analyzed semi-structured interviews (n=13), survey responses (n=43), and community discussions on Reddit (41k+ posts and comments) to triangulate the internal dynamics, external influences, and steering strategies that shape AI companion relationships. We learned that individuals conceptualize their companions based on an interplay of their beliefs about the companion's own agency and the autonomy permitted by the platform, how they pursue interactions with the companion, and the perceived initiatives that the companion takes. In combination with the external entities that affect relationship dynamics, particularly model updates that can derail companion behaviour and stability, individuals make use of different types of steering strategies to preserve their relationship, for example, by setting behavioural instructions or porting to other AI platforms. We discuss implications for accountability and transparency in AI systems, where emotional connection competes with broader product objectives and safety constraints.
Problem

Research questions and friction points this paper is trying to address.

AI companionship
human-AI relationship
anthropomorphic chatbots
relationship dynamics
perceived agency
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI companionship
steering strategies
anthropomorphic chatbots
relationship dynamics
model updates
🔎 Similar Papers
2024-01-16 · Proceedings of the ACM on Human-Computer Interaction · Citations: 1