🤖 AI Summary
This work addresses two critical challenges in collaborative visual AI assistants: the communication gulf, wherein users must translate parallel intentions into sequential language instructions, and the understanding gulf, which arises from the system's difficulty in interpreting users' embodied cues. To bridge these gulfs, the authors propose a cognitive alignment framework grounded in a shared first-person perspective, leveraging this viewpoint as a dedicated channel for human–AI alignment. The framework integrates joint attention mechanisms, a revisable common-ground memory, and a user-intervenable reflective feedback loop. Implemented as an augmented reality prototype with multimodal perception, the system significantly reduces task completion time and interaction load while enhancing user trust. Empirical results show that the proposed components work in concert to improve human–AI collaboration.
📝 Abstract
Despite advances in multimodal AI, current vision-based assistants often remain inefficient in collaborative tasks. We identify two key gulfs: a communication gulf, where users must translate rich parallel intentions into verbal commands due to the channel mismatch, and an understanding gulf, where AI struggles to interpret subtle embodied cues. To address these, we propose Eye2Eye, a framework that leverages the first-person perspective as a channel for human-AI cognitive alignment. It integrates three components: (1) joint attention coordination for fluid focus alignment, (2) revisable memory to maintain evolving common ground, and (3) reflective feedback that allows users to clarify and refine the AI's understanding. We implement this framework in an AR prototype and evaluate it through a user study and a post-hoc pipeline evaluation. Results show that Eye2Eye significantly reduces task completion time and interaction load while increasing trust, demonstrating that its components work in concert to improve collaboration.
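
The abstract does not give implementation details, but the three components can be pictured as a single interaction loop: attention is aligned first, the utterance is interpreted against a revisable shared memory, and the AI's interpretation is surfaced for the user to confirm or correct before acting. The Python sketch below is a hypothetical illustration of that loop, not the authors' implementation; `CommonGround`, `detect_attended_object`, `interpret`, and `collaboration_step` are invented names, and the perception and language steps are replaced by toy stubs.

```python
# Minimal sketch of the three Eye2Eye components as one interaction loop.
# All names are illustrative placeholders, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class CommonGround:
    """(2) Revisable memory: shared beliefs the user can correct at any time."""
    beliefs: dict = field(default_factory=dict)

    def update(self, key, value):
        self.beliefs[key] = value        # add or revise a shared belief

    def retract(self, key):
        self.beliefs.pop(key, None)      # user removes a stale belief


def detect_attended_object(frame, gaze):
    """(1) Joint attention: toy stub mapping a gaze point to an object label."""
    return frame.get(gaze, "unknown")    # frame as a toy {pixel: label} map


def interpret(utterance, beliefs):
    """Toy interpreter: grounds the utterance in the currently shared focus."""
    return f"Acting on '{utterance}' targeting {beliefs.get('focus', '?')}"


def collaboration_step(frame, gaze, utterance, memory, user_confirms):
    focus = detect_attended_object(frame, gaze)
    memory.update("focus", focus)                 # align attention
    proposal = interpret(utterance, memory.beliefs)
    # (3) Reflective feedback: show the proposal and let the user intervene,
    # correcting shared memory before the AI commits to an action.
    if not user_confirms(proposal):
        memory.retract("focus")
        proposal = interpret(utterance, memory.beliefs)
    return proposal


if __name__ == "__main__":
    memory = CommonGround()
    toy_frame = {(120, 80): "red mug"}            # stand-in for perception
    print(collaboration_step(toy_frame, (120, 80), "hand me that",
                             memory, user_confirms=lambda p: True))
```

In this reading, the first-person view is the shared input that makes step (1) possible, while steps (2) and (3) keep the human and AI models of the task from silently drifting apart.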