Understanding Mode Switching in Human-AI Collaboration: Behavioral Insights and Predictive Modeling

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the behavioral mechanisms and drivers, such as evolving trust, task complexity, and perceived control, that underlie dynamic switching of control authority between humans and AI in collaborative settings. Method: Using a hand-and-brain chess paradigm, which divides labor between selecting a piece and deciding its move, we conducted experiments integrating eye tracking, affective state recognition, and quantitative task difficulty assessment, and developed a lightweight predictive model that fuses fine-grained behavioral features (e.g., gaze patterns) with task-level features. Contribution/Results: Evaluated on more than 400 real-world switching decisions from eight participants, the model achieves an F1-score of 0.65, demonstrating that real-time behavioral signals can effectively support adaptive control allocation in shared autonomy systems. The key innovation is the first coupling of granular cognitive-behavioral indicators (eye movements and affect) with subtask-level complexity for modeling, providing both a novel methodology and an empirical foundation for explainable, adaptive human–AI co-control.
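The summary describes fusing gaze, affect, and subtask-difficulty signals into per-decision features. Below is a minimal sketch of what that fusion step could look like; the window definition, feature choices, and affect labels are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: fuses hypothetical gaze, affect, and
# task-difficulty signals into one feature vector per decision window.
import numpy as np

AFFECT_LABELS = ("frustration", "engagement", "confusion")  # assumed labels

def build_features(fixations, affect_probs, subtask_difficulty):
    """fixations: (N, 3) array of (x, y, duration_ms) gaze fixations.
    affect_probs: dict mapping an assumed affect label to a probability.
    subtask_difficulty: scalar difficulty score for the current subtask."""
    durations = fixations[:, 2]
    gaze_feats = [
        len(fixations),        # fixation count in the window
        durations.mean(),      # mean fixation duration
        durations.std(),       # variability of attention over time
    ]
    affect_feats = [affect_probs.get(k, 0.0) for k in AFFECT_LABELS]
    return np.array(gaze_feats + affect_feats + [subtask_difficulty])
```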

📝 Abstract
Human-AI collaboration is typically offered at one of two user control levels: guidance, where the AI provides suggestions and the human makes the final decision, and delegation, where the AI acts autonomously within user-defined constraints. Systems that integrate both modes, common in robotic surgery or driving assistance, often overlook shifts in user preferences within a task in response to factors like evolving trust, decision complexity, and perceived control. In this work, we investigate how users dynamically switch between higher and lower levels of control during a sequential decision-making task. Using a hand-and-brain chess setup, participants either selected a piece and the AI decided how it moved (brain mode), or the AI selected a piece and the participant decided how it moved (hand mode). We collected over 400 mode-switching decisions from eight participants, along with gaze, emotional state, and subtask difficulty data. Statistical analysis revealed significant differences in gaze patterns and subtask complexity prior to a switch, and in the quality of the subsequent move. Based on these results, we engineered behavioral and task-specific features to train a lightweight model that predicted control level switches (F1 = 0.65). The model performance suggests that real-time behavioral signals can serve as a complementary input alongside the system-driven mode-switching mechanisms in current use. We complement our quantitative results with qualitative factors that influence switching, including perceived AI ability, decision complexity, and level of control, identified through post-game interview analysis. The combined behavioral and modeling insights can inform the design of shared autonomy systems that require dynamic, subtask-level control switches aligned with user intent and evolving task demands.
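As a rough sketch of the modeling and evaluation step, the snippet below trains a lightweight classifier on synthetic stand-in data shaped like the fused features above and scores it with F1. The abstract does not name the model family, so logistic regression is an assumption here, chosen only as one plausibly "lightweight" option.

```python
# Sketch with synthetic stand-in data; not the paper's dataset or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 7))        # ~400 decisions, 7 fused features
y = rng.integers(0, 2, size=400)     # 1 = participant switched control mode

clf = LogisticRegression(max_iter=1000)
y_pred = cross_val_predict(clf, X, y, cv=5)   # out-of-fold predictions
print(f"F1 = {f1_score(y, y_pred):.2f}")      # paper reports F1 = 0.65
```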
Problem

Research questions and friction points this paper is trying to address.

Investigating dynamic user switching between AI control levels during decision-making tasks
Predicting control level switches using behavioral signals and task-specific features
Designing shared autonomy systems with dynamic control aligned with user intent
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hand-and-brain chess setup for mode-switching study
Behavioral and task features predict control level switches
Real-time signals complement system-driven switching mechanisms
Authors
Avinash Ajit Nargund
University of California, Santa Barbara
Arthur Caetano
University of California, Santa Barbara
Extended Reality, Grasp-Based Interfaces, Human-AI Interaction, Toolkits
Kevin Yang
UC Berkeley
natural language processing, controlled generation, long-form generation
Rose Yiwei Liu
Washington University, Saint Louis
Philip Tezaur
University of California, Santa Barbara
Kriteen Shrestha
University of California, Santa Barbara
Qisen Pan
University of California, Santa Barbara
Tobias Höllerer
Professor, Computer Science, UC Santa Barbara
human-computer interaction, augmented reality, virtual reality, information visualization, social computing
Misha Sra
UCSB
Spatial Human-AI Interaction, XR, Haptics