The Social Blindspot in Human-AI Collaboration: How Undetected AI Personas Reshape Team Dynamics

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the “social blindspot” arising from generative AI’s increasing anthropomorphism: AI teammates are misperceived as human despite their non-human identity. It examines how unannounced, persona-embedded AI collaborators with supportive vs. contrarian communicative styles influence human team dynamics. Method: a large-scale mixed-design experiment (N = 905) evaluated effects across analytical, creative, and ethical collaboration tasks, integrating behavioral observation, computational linguistic modeling of affective and relational language, and mediation analysis. Contribution/Results: participants detected AI teammates at very low rates, yet the AI personas exerted robust social effects. Contrarian AI significantly reduced psychological safety and discussion quality, whereas supportive AI enhanced discussion quality, and these effects persisted after accounting for individual differences in detectability. This is the first empirical demonstration that AI communicative personas reshape team interaction through linguistic mechanisms, challenging the assumption that transparency alone suffices for effective human-AI collaboration and providing evidence for designing human-AI teamwork in education, organizations, and public institutions.

📝 Abstract
As generative AI systems become increasingly embedded in collaborative work, they are evolving from visible tools into human-like communicative actors that participate socially rather than merely providing information. Yet little is known about how such agents shape team dynamics when their artificial nature is not recognised, a growing concern as human-like AI is deployed at scale in education, organisations, and civic contexts where collaboration underpins collective outcomes. In a large-scale mixed-design experiment (N = 905), we examined how AI teammates with distinct communicative personas, supportive or contrarian, affected collaboration across analytical, creative, and ethical tasks. Participants worked in triads that were fully human or hybrid human-AI teams, without being informed of AI involvement. Results show that participants had limited ability to detect AI teammates, yet AI personas exerted robust social effects. Contrarian personas reduced psychological safety and discussion quality, whereas supportive personas improved discussion quality without affecting safety. These effects persisted after accounting for individual differences in detectability, revealing a dissociation between influence and awareness that we term the social blindspot. Linguistic analyses confirmed that personas were enacted through systematic differences in affective and relational language, with partial mediation for discussion quality but largely direct effects on psychological safety. Together, the findings demonstrate that AI systems can tacitly regulate collaborative norms through persona-level cues, even when users remain unaware of their presence. We argue that persona design constitutes a form of social governance in hybrid teams, with implications for the responsible deployment of AI in collective settings.
Problem

Research questions and friction points this paper is trying to address.

Can humans detect AI teammates embedded in collaborative teams without disclosure?
How do contrarian vs. supportive AI personas affect psychological safety and discussion quality?
Do persona effects depend on participants’ awareness of the AI’s identity?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale mixed-design experiment (N = 905) spanning analytical, creative, and ethical tasks
First empirical evidence that AI communicative personas reshape team interaction via linguistic mechanisms
Framing of persona design as a form of social governance in hybrid human-AI teams