🤖 AI Summary
This work proposes the first multimodal large language model (LLM) assistant that integrates eye-tracking data with first-person video to perceive users' cognitive difficulties during interaction. By analyzing real-time gaze behavior, the system models the user's cognitive state to accurately identify comprehension bottlenecks and provide retrospective, personalized assistance. This approach pioneers the incorporation of eye movement signals into LLM-based interactive frameworks, significantly enhancing both the precision of support and users' information recall. Experimental results demonstrate that, compared to text-only LLM assistants, the proposed system achieves higher accuracy and personalization scores, while also reducing the amount of user input required and improving overall interaction efficiency.
📝 Abstract
Current LLM assistants are powerful at answering questions, but they have limited access to the behavioral context that reveals when and where a user is struggling. We present a gaze-grounded multimodal LLM assistant that uses egocentric video with gaze overlays to identify likely points of difficulty and target retrospective follow-up assistance. We instantiate this vision in a controlled study (n=36) comparing the gaze-aware AI assistant to a text-only LLM assistant. Compared to a conventional LLM assistant, the gaze-aware assistant was rated as significantly more accurate and personalized in its assessments of users' reading behavior and significantly improved people's ability to recall information. Users spoke significantly fewer words with the gaze-aware assistant, indicating more efficient interactions. Qualitative results underscored both perceived benefits for comprehension and challenges that arose when interpretations of gaze behavior were inaccurate. Our findings suggest that gaze-aware LLM assistants can reason about cognitive needs to improve users' cognitive outcomes.
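To make the gaze-overlay idea concrete, the sketch below shows one minimal way such a pipeline could be wired up; it is not the authors' implementation. It aligns eye-tracking samples with egocentric video frames, draws a gaze marker on each frame, flags frames where gaze dwells in a small region (a crude proxy for a comprehension bottleneck), and assembles a prompt for a multimodal LLM. The file names, thresholds, log format, and `query_multimodal_llm` call are illustrative assumptions.

```python
# Hedged sketch of a gaze-overlay pipeline: NOT the paper's system,
# just one plausible instantiation of the components it describes.
import csv
import cv2
import numpy as np

DWELL_RADIUS_PX = 60    # assumed: gaze staying within this radius counts as dwelling
DWELL_MIN_FRAMES = 45   # assumed: ~1.5 s at 30 fps

def load_gaze(path):
    """Read (timestamp_s, x_norm, y_norm) rows from a CSV gaze log (assumed format)."""
    with open(path, newline="") as f:
        return [(float(t), float(x), float(y)) for t, x, y in csv.reader(f)]

def overlay_and_flag(video_path, gaze):
    """Overlay gaze on each frame; return frames where gaze dwelled in place."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    flagged, recent, idx = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t = idx / fps
        # nearest gaze sample in time (gaze log assumed sorted by timestamp)
        _, gx, gy = min(gaze, key=lambda s: abs(s[0] - t))
        h, w = frame.shape[:2]
        px, py = int(gx * w), int(gy * h)
        cv2.circle(frame, (px, py), 18, (0, 0, 255), 3)  # gaze marker overlay
        recent.append((px, py))
        recent = recent[-DWELL_MIN_FRAMES:]
        if len(recent) == DWELL_MIN_FRAMES:
            spread = np.ptp(np.array(recent), axis=0).max()
            if spread < DWELL_RADIUS_PX:
                flagged.append((t, frame.copy()))
                recent = []  # avoid flagging the same dwell repeatedly
        idx += 1
    cap.release()
    return flagged

def build_prompt(flagged):
    """Package dwell frames with an instruction for a multimodal LLM."""
    times = ", ".join(f"{t:.1f}s" for t, _ in flagged)
    text = (
        "These egocentric frames show where the reader's gaze lingered "
        f"(around {times}). Infer what they may have found difficult and "
        "offer brief retrospective help."
    )
    return text, [frame for _, frame in flagged]

gaze = load_gaze("session_gaze.csv")                 # hypothetical gaze log
flagged = overlay_and_flag("session_ego.mp4", gaze)  # hypothetical video file
prompt, frames = build_prompt(flagged)
# response = query_multimodal_llm(prompt, frames)    # hypothetical model call
```

A dwell-based heuristic is only one possible trigger; the study's retrospective setting suggests the assistant could equally revisit the full gaze-annotated recording after the task rather than flagging moments online.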