Relational Dissonance in Human-AI Interactions: The Case of Knowledge Work

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge workers engaging with anthropomorphic AI frequently experience tension between treating these systems as instruments and interacting with them socially, producing inconsistent relational stances and ontological ambiguity, a phenomenon the paper terms "relational dissonance." This study investigates the issue through three collaborative workshops with qualitative researchers, employing situated interaction analysis grounded in phenomenological and social constructionist perspectives. It introduces "relational dissonance" as a conceptual framework that extends conventional human–AI interaction models, empirically revealing systematic discrepancies between users' explicitly articulated relational stances and their actual interactive behaviors. The findings advocate for relational transparency as a core design and governance principle for anthropomorphic AI systems. By bridging theory and practice, the work offers theoretical foundations and implementation pathways for the ethical development, human-centered design, and policy formulation of anthropomorphic AI.

📝 Abstract
When AI systems allow human-like communication, they elicit increasingly complex relational responses. Knowledge workers face a particular challenge: They approach these systems as tools while interacting with them in ways that resemble human social interaction. To understand the relational contexts that arise when humans engage with anthropomorphic conversational agents, we need to expand existing human-computer interaction frameworks. Through three workshops with qualitative researchers, we found that the fundamental ontological and relational ambiguities inherent in anthropomorphic conversational agents make it difficult for individuals to maintain consistent relational stances toward them. Our findings indicate that people's articulated positioning toward such agents often differs from the relational dynamics that occur during interactions. We propose the concept of relational dissonance to help researchers, designers, and policymakers recognize the resulting tensions in the development, deployment, and governance of anthropomorphic conversational agents and address the need for relational transparency.
Problem

Research questions and friction points this paper is trying to address.

Relational dissonance in human-AI knowledge work interactions
Ontological ambiguities in anthropomorphic conversational agents
Inconsistent relational stances toward AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expanding human-computer interaction frameworks
Introducing relational dissonance concept
Promoting relational transparency development
Emrecan Gulay
Aalto University, Finland
Eleonora Picco
Aalto University, Finland
Enrico Glerean
Aalto University, Finland
Corinna Coupette
Assistant Professor, Telos Lab, Aalto University
Networks · Computational Legal Theory · Legal Data Science · Responsible AI · Data-Centric AI