🤖 AI Summary
This study investigates the behavioral mechanisms and drivers (such as evolving trust, task complexity, and perceived control) that underlie dynamic switching of control authority between humans and AI in collaborative settings.
Method: Using a hand-and-brain division-of-labor paradigm in chess, we conducted experiments integrating eye-tracking, affective state recognition, and quantitative task difficulty assessment. We developed a lightweight predictive model that fuses fine-grained behavioral features (e.g., gaze patterns) with task-level features.
Contribution/Results: Evaluated on real-world switching decisions from eight participants (400+ instances), the model achieves an F1-score of 0.65, suggesting that real-time behavioral signals can usefully support adaptive control allocation in shared autonomy systems. Our key innovation is coupling, for the first time, fine-grained cognitive-behavioral indicators (eye movements and affect) with subtask-level complexity to model control switches. This provides both a methodology and an empirical foundation for explainable, adaptive human–AI co-control.
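To make the feature fusion above concrete, here is a minimal sketch of assembling gaze, affect, and subtask-complexity signals into a single per-decision feature vector. The specific features, names, and inputs (`gaze_fixations`, `affect_scores`, `subtask_complexity`) are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

def assemble_decision_features(gaze_fixations, affect_scores, subtask_complexity):
    """Fuse behavioral and task-level signals for one decision point.

    gaze_fixations: fixation durations (s) in the window before the decision.
    affect_scores: recognized affect intensities, e.g. {"valence": 0.1, "arousal": 0.6}.
    subtask_complexity: scalar difficulty estimate for the current subtask.
    All features below are illustrative, not the study's actual feature set.
    """
    gaze_feats = [
        len(gaze_fixations),              # number of fixations
        float(np.mean(gaze_fixations)),   # mean fixation duration
        float(np.std(gaze_fixations)),    # variability of fixation duration
    ]
    affect_feats = [affect_scores.get(k, 0.0) for k in ("valence", "arousal")]
    return np.array(gaze_feats + affect_feats + [subtask_complexity], dtype=float)
```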
📝 Abstract
Human-AI collaboration is typically offered at one of two levels of user control: guidance, where the AI provides suggestions and the human makes the final decision, and delegation, where the AI acts autonomously within user-defined constraints. Systems that integrate both modes, common in robotic surgery or driving assistance, often overlook shifts in user preferences within a task in response to factors like evolving trust, decision complexity, and perceived control. In this work, we investigate how users dynamically switch between higher and lower levels of control during a sequential decision-making task. Using a hand-and-brain chess setup, participants either selected a piece and the AI decided how it moved (brain mode), or the AI selected a piece and the participant decided how it moved (hand mode). We collected over 400 mode-switching decisions from eight participants, along with gaze, emotional state, and subtask difficulty data. Statistical analysis revealed significant differences in gaze patterns and subtask complexity prior to a switch, as well as in the quality of the subsequent move. Based on these results, we engineered behavioral and task-specific features to train a lightweight model that predicted control level switches ($F1 = 0.65$). The model's performance suggests that real-time behavioral signals can serve as a complementary input alongside the system-driven mode-switching mechanisms currently in use. We complement our quantitative results with qualitative factors that influence switching, including perceived AI ability, decision complexity, and level of control, identified from post-game interview analysis. The combined behavioral and modeling insights can inform the design of shared autonomy systems that require dynamic, subtask-level control switches aligned with user intent and evolving task demands.
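To illustrate the modeling step described in the abstract, the sketch below trains a lightweight classifier on per-decision feature vectors (six-dimensional here, matching the output of the sketch above) and reports an F1-score on a held-out split. The abstract does not name the classifier or the evaluation protocol, so the logistic regression, the 75/25 stratified split, and the synthetic placeholder data are assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# X: one row per mode-switching decision (behavioral + task features);
# y: 1 if the participant switched control level at that point, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))       # placeholder for the ~400 real decision instances
y = rng.integers(0, 2, size=400)    # placeholder switch/no-switch labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("F1 on held-out decisions:", f1_score(y_test, clf.predict(X_test)))
```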