🤖 AI Summary
This study addresses the lack of rigorous evaluation frameworks for human–AI collaboration in physical tasks (e.g., cooking, battlefield medicine), an area hindered by fragmented assessment paradigms and scarce multimodal interaction data. We propose a systematic evaluation framework for AI agents that assist humans in the loop, integrating augmented reality (AR)-based real-time guidance, multimodal perception (vision, audio, motion), and human–AI interaction modeling. Our framework enables deployment of interactive AI guidance systems in realistic environments and is accompanied by a multimodal interaction dataset. Empirical evaluation demonstrates that AI assistance significantly improves task efficiency (+28%) and accuracy (a 41% reduction in error rate), while also enhancing procedural knowledge transfer and long-term skill retention. This work establishes a methodological foundation and provides empirical validation for human–AI collaboration in high-stakes, precision-critical physical domains.
📝 Abstract
Effective human-AI collaboration for physical task completion has significant potential in both everyday activities and professional domains. AI agents equipped with informative guidance can enhance human performance, but evaluating such collaboration remains challenging due to the complexity of human-in-the-loop interactions. In this work, we introduce an evaluation framework and a multimodal dataset of human-AI interactions designed to assess how AI guidance affects procedural task performance, error reduction, and learning outcomes. In addition, we develop an augmented reality (AR)-equipped AI agent that provides interactive guidance in real-world tasks, from cooking to battlefield medicine. Through human studies, we share empirical insights into AI-assisted human performance and demonstrate that such collaboration improves task completion.
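As a rough illustration of how relative gains like those reported above could be derived from per-participant task logs, here is a minimal sketch; the trial values, record layout, and helper names (`efficiency_gain`, `error_rate_reduction`) are hypothetical assumptions for illustration, not data or code from the paper.

```python
from statistics import mean

# Hypothetical per-participant logs: (completion_time_s, errors, steps_attempted).
# These numbers are illustrative only, not results from the study.
baseline_trials = [(412, 5, 20), (378, 4, 20), (455, 6, 20)]
assisted_trials = [(301, 3, 20), (289, 2, 20), (330, 3, 20)]

def efficiency_gain(baseline, assisted):
    """Relative reduction in mean completion time (higher = faster with AI guidance)."""
    t_base = mean(t for t, _, _ in baseline)
    t_ai = mean(t for t, _, _ in assisted)
    return (t_base - t_ai) / t_base

def error_rate_reduction(baseline, assisted):
    """Relative drop in errors per attempted procedural step."""
    r_base = mean(e / s for _, e, s in baseline)
    r_ai = mean(e / s for _, e, s in assisted)
    return (r_base - r_ai) / r_base

print(f"Efficiency gain: {efficiency_gain(baseline_trials, assisted_trials):.0%}")
print(f"Error-rate reduction: {error_rate_reduction(baseline_trials, assisted_trials):.0%}")
```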