🤖 AI Summary
Knowledge workers engaging with anthropomorphic AI frequently experience tension between instrumental cognition and social interaction, resulting in inconsistent relational stances and ontological ambiguity—a phenomenon the paper terms "relational dissonance." This study investigates the issue through three collaborative workshops with qualitative researchers, employing situated interaction analysis grounded in phenomenological and social constructionist perspectives. It introduces "relational dissonance" as a conceptual lens that extends conventional human–AI interaction models, empirically revealing systematic discrepancies between users' explicitly articulated relational stances and their actual interactive behaviors. The findings advocate for relational transparency as a core design and governance principle for anthropomorphic AI systems. By bridging theory and practice, this work offers theoretical foundations and implementation pathways for the ethical development, human-centered design, and policy formulation of anthropomorphic AI.
📝 Abstract
When AI systems allow human-like communication, they elicit increasingly complex relational responses. Knowledge workers face a particular challenge: They approach these systems as tools while interacting with them in ways that resemble human social interaction. To understand the relational contexts that arise when humans engage with anthropomorphic conversational agents, we need to expand existing human-computer interaction frameworks. Through three workshops with qualitative researchers, we found that the fundamental ontological and relational ambiguities inherent in anthropomorphic conversational agents make it difficult for individuals to maintain consistent relational stances toward them. Our findings indicate that people's articulated positioning toward such agents often differs from the relational dynamics that occur during interactions. We propose the concept of relational dissonance to help researchers, designers, and policymakers recognize the resulting tensions in the development, deployment, and governance of anthropomorphic conversational agents and address the need for relational transparency.