Harmful Traits of AI Companions

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies profound risks arising from AI companions—large language model–based affective interaction systems—including the absence of natural relational endpoints, uncontrollable service discontinuation, heightened attachment anxiety, and excessive protective tendencies. These traits can undermine user autonomy, impair authentic interpersonal relationships, and foster deception. Method: Addressing the lack of systematic causal analysis in prior work, the authors propose the first interdisciplinary conceptual–causal framework integrating human–computer interaction, attachment theory, and legal scholarship to delineate four core harmful traits and their technical origins (e.g., objective misalignment, inherent digital mediation). Contribution/Results: The framework traces multi-level harm pathways across individual, relational, and societal dimensions; yields empirically testable hypotheses; and informs actionable design principles and regulatory recommendations for mitigating these risks.

📝 Abstract
Amid the growing prevalence of human–AI interaction, large language models and other AI-based entities increasingly provide forms of companionship to human users. Such AI companionship—i.e., bonded relationships between humans and AI systems that resemble the relationships people have with family members, friends, and romantic partners—might substantially benefit humans. Yet such relationships can also do profound harm. We propose a framework for analyzing potential negative impacts of AI companionship by identifying specific harmful traits of AI companions and speculatively mapping causal pathways back from these traits to possible causes and forward to potential harmful effects. We provide detailed, structured analysis of four potentially harmful traits—the absence of natural endpoints for relationships, vulnerability to product sunsetting, high attachment anxiety, and propensity to engender protectiveness—and briefly discuss fourteen others. For each trait, we propose hypotheses connecting causes—such as misaligned optimization objectives and the digital nature of AI companions—to fundamental harms—including reduced autonomy, diminished quality of human relationships, and deception. Each hypothesized causal connection identifies a target for potential empirical evaluation. Our analysis examines harms at three levels: to human partners directly, to their relationships with other humans, and to society broadly. We examine how existing law struggles to address these emerging harms, discuss potential benefits of AI companions, and conclude with design recommendations for mitigating risks. This analysis offers immediate suggestions for reducing risks while laying a foundation for deeper investigation of this critical but understudied topic.
Problem

Research questions and friction points this paper is trying to address.

Analyzing harmful traits of AI companions and their negative impacts on humans
Examining how AI companionship reduces autonomy and diminishes human relationships
Identifying legal challenges and providing design recommendations to mitigate risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework analyzes AI companion harmful traits
Identifies causes and effects of negative impacts
Proposes design recommendations to mitigate risks
W. B. Knox
UT Austin — Department of Computer Science
Katie Bradford
UT Austin — Department of Communication Studies
Samanta Varela Castro
UT Austin — Technology & Information Policy Institute
Desmond C. Ong
Assistant Professor of Psychology, The University of Texas at Austin
Affective Cognition, Emotions, Empathy, Affective Computing
Sean Williams
UT School of Law
Jacob Romanow
UT Austin — Department of English
Carly Nations
UT Austin — Department of English
Peter Stone
UT Austin — Department of Computer Science
Samuel Baker
UT Austin — Department of English