AI Summary
This work addresses a critical limitation in existing egocentric video understanding benchmarks, which often overlook users' underlying behaviors when constructing queries and thus struggle to support reasoning about long-duration, unstructured activities in augmented reality (AR) settings. To bridge this gap, the authors introduce the first long-context egocentric video understanding benchmark specifically designed for AR scenarios. Their approach integrates gaze-derived human attention signals directly into the question generation process, enabling more authentic modeling of human cognition and interaction patterns. The benchmark comprises over 100 hours of video and more than 5,000 behavior-driven multiple-choice question-answer pairs, substantially raising both the realism and difficulty of evaluation and offering a more practical platform for assessing video understanding in AR environments.
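To make the gaze-to-attention abstraction concrete, the sketch below shows one plausible way raw gaze samples could be grouped into fixation-like attention segments before seeding question generation. This is a minimal illustration under assumed data structures (`GazeSample`, `AttentionSegment`) and a simple dispersion heuristic; it is not the authors' actual pipeline, which the paper describes at a higher level.

```python
# Hypothetical sketch: abstracting raw gaze samples into attention segments.
# All names and the fixation heuristic are illustrative assumptions,
# not the benchmark's published pipeline.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # normalized gaze x in [0, 1]
    y: float   # normalized gaze y in [0, 1]

@dataclass
class AttentionSegment:
    start: float  # segment start time (s)
    end: float    # segment end time (s)
    cx: float     # mean gaze x over the segment
    cy: float     # mean gaze y over the segment

def segment_fixations(samples, max_disp=0.05, min_dur=0.3):
    """Group consecutive gaze samples into fixation-like attention segments.

    A new segment starts whenever gaze moves more than `max_disp` from the
    running centroid; segments shorter than `min_dur` seconds are discarded.
    """
    def flush(current, segments):
        if current and current[-1].t - current[0].t >= min_dur:
            cx = sum(p.x for p in current) / len(current)
            cy = sum(p.y for p in current) / len(current)
            segments.append(AttentionSegment(current[0].t, current[-1].t, cx, cy))

    segments, current = [], []
    for s in samples:
        if current:
            cx = sum(p.x for p in current) / len(current)
            cy = sum(p.y for p in current) / len(current)
            if abs(s.x - cx) > max_disp or abs(s.y - cy) > max_disp:
                flush(current, segments)  # gaze jumped: close the segment
                current = []
        current.append(s)
    flush(current, segments)  # close any trailing segment
    return segments
```

Segments like these could then be paired with the corresponding video spans, so that generated questions target what the user actually attended to rather than arbitrary visual content.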
Abstract
Long-context egocentric video understanding has recently attracted significant research attention, with augmented reality (AR) highlighted as one of its most important application domains. Nevertheless, the task remains highly challenging due to the need for reasoning over extended temporal contexts and diverse, unstructured activities. Although several benchmarks exist, most egocentric datasets rely on human-worn cameras and focus mainly on visual content, with limited consideration of underlying user behavior when forming video-related queries. EgoEverything is a benchmark that explicitly accounts for human behavior by leveraging human attention signals, abstracted from gaze data, when generating questions. It comprises over 5,000 multiple-choice question-answer pairs spanning more than 100 hours of video. By integrating human attention signals during question generation, it more faithfully captures natural human behavior and offers a realistic evaluation setting for long-context egocentric video understanding in AR.
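As a rough illustration of the multiple-choice evaluation format, the sketch below scores a model by accuracy over QA pairs. The JSON schema (`video_id`, `question`, `options`, `answer_idx`), the file name, and the `model_answer` callable are assumptions for illustration, not the benchmark's published interface.

```python
# Hypothetical sketch of scoring a long-context video model on a
# multiple-choice benchmark of this form. Field names are assumed.
import json

def evaluate(qa_path: str, model_answer) -> float:
    """Return accuracy of `model_answer(video_id, question, options) -> option index`."""
    with open(qa_path) as f:
        # assumed: a list of {"video_id", "question", "options", "answer_idx"} records
        items = json.load(f)
    correct = 0
    for item in items:
        pred = model_answer(item["video_id"], item["question"], item["options"])
        correct += int(pred == item["answer_idx"])
    return correct / max(len(items), 1)  # guard against an empty file

# Example: a trivial baseline that always picks the first option.
# acc = evaluate("egoeverything_qa.json", lambda vid, q, opts: 0)
```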