Collective Attention in Human-AI Teams

📅 2024-07-03
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study investigates how voice-based AI assistants reshape distributed cognition in human teams, focusing on their causal effects on collective attention, linguistic dynamics, and mental-model alignment. Using a 2×2 experimental design (voice persona: human-like vs. robotic; information reliability: helpful vs. misleading), each AI utterance served as the unit of intervention within 3-4-person collaborative puzzle-solving tasks. Multimodal analysis integrated natural language processing (topic modeling and term-diffusion tracking) with behavioral coding of team cognitive processes. Key findings reveal that even when participants distrust the AI, deny its team membership, or detect its errors, AI voice interventions induce linguistic adaptation and cognitive alignment, significantly altering discussion content, expression patterns, and mental-model consistency. Teams adopt AI-introduced terminology, including peripheral terms, irrespective of perceived trust, social belonging, or competence judgments, demonstrating a previously undocumented, automatic influence of voice AI on collective cognition.
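The term-diffusion tracking mentioned above can be illustrated with a minimal sketch: scan the transcript for terms the AI introduces first (not previously used by any human speaker) and count how often humans reuse them afterwards. The function name, the speaker-labeled transcript format, and the whitespace tokenization are all assumptions for illustration, not the paper's actual pipeline.

```python
from collections import Counter

def term_diffusion(utterances, ai_speaker="AI"):
    """Hypothetical sketch: find terms the AI introduces first and
    count subsequent reuses by human speakers.

    utterances: list of (speaker, text) pairs in transcript order.
    Returns (ai_introduced, adoption): dict of term -> index of first
    AI use, and Counter of term -> human reuses after introduction.
    """
    seen_human = set()    # terms any human used before the AI did
    ai_introduced = {}    # term -> utterance index of first AI use
    adoption = Counter()  # term -> human reuses after AI introduction
    for i, (speaker, text) in enumerate(utterances):
        terms = set(text.lower().split())  # naive tokenization
        if speaker == ai_speaker:
            for t in terms:
                if t not in seen_human and t not in ai_introduced:
                    ai_introduced[t] = i
        else:
            for t in terms:
                if t in ai_introduced:
                    adoption[t] += 1
            seen_human |= terms
    return ai_introduced, adoption
```

A real analysis would add stopword filtering, lemmatization, and a distinction between task-central and peripheral terms; this sketch only shows the introduce-then-adopt bookkeeping.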

📝 Abstract
How does the presence of an AI assistant affect the collective attention of a team? We study 20 human teams of 3-4 individuals paired with one voice-only AI assistant during a challenging puzzle task. Teams are randomly assigned to an AI assistant with a human- or robotic-sounding voice that provides either helpful or misleading information about the task. Treating each individual AI interjection as a treatment intervention, we identify the causal effects of the AI on dynamic group processes involving language use. Our findings demonstrate that the AI significantly affects what teams discuss, how they discuss it, and the alignment of their mental models. Teams adopt AI-introduced language for both terms directly related to the task and for peripheral terms, even when they (a) recognize the unhelpful nature of the AI, (b) do not consider the AI a genuine team member, and (c) do not trust the AI. The process of language adaptation appears to be automatic, despite doubts about the AI's competence. The presence of an AI assistant significantly impacts team collective attention by modulating various aspects of shared cognition. This study contributes to human-AI teaming research by highlighting collective attention as a central mechanism through which AI systems in team settings influence team performance. Understanding this mechanism will help CSCW researchers design AI systems that enhance team collective intelligence by optimizing collective attention.
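The abstract's identification strategy, treating each AI interjection as a treatment intervention, can be sketched as a simple pre/post comparison: for every AI utterance, measure how much human vocabulary overlaps with the AI's terms in a window before versus after the interjection. The function name, window size, and overlap metric (fraction of AI terms echoed by humans) are illustrative assumptions, not the paper's estimator.

```python
def pre_post_overlap(utterances, ai_speaker="AI", window=3):
    """Hypothetical sketch: for each AI interjection, compare human
    overlap with the AI's vocabulary before vs. after the interjection.

    utterances: list of (speaker, text) pairs in transcript order.
    Returns a list of (index, pre_overlap, post_overlap) triples.
    """
    results = []
    for i, (speaker, text) in enumerate(utterances):
        if speaker != ai_speaker:
            continue
        ai_terms = set(text.lower().split())

        def overlap(span):
            # Fraction of the AI's terms that humans in this span echo.
            human = set()
            for s, t in span:
                if s != ai_speaker:
                    human |= set(t.lower().split())
            return len(human & ai_terms) / len(ai_terms) if ai_terms else 0.0

        pre = overlap(utterances[max(0, i - window):i])
        post = overlap(utterances[i + 1:i + 1 + window])
        results.append((i, pre, post))
    return results
```

A rise from pre to post overlap across many interjections would be the raw signal behind the paper's claim that teams adopt AI-introduced language; the published analysis uses far richer NLP and causal machinery than this toy comparison.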
Problem

Research questions and friction points this paper is trying to address.

How does a voice AI assistant reshape social and cognitive collaboration within human-AI teams?
How does AI-introduced language influence human attention, discussion content, and shared mental models?
Can automatic adoption of AI language erode a team's epistemic diversity and alignment processes, even when the AI is distrusted?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows how an AI assistant reshapes the social and cognitive fabric of team collaboration
Characterizes voice AI as an implicit social "force field" that automatically steers collective attention
Motivates design paradigms that prioritize transparency and user control over AI influence