EgoEverything: A Benchmark for Human Behavior Inspired Long Context Egocentric Video Understanding in AR Environment

📅 2026-04-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a critical limitation of existing egocentric video understanding benchmarks: they often overlook users' underlying behaviors when constructing queries and thus struggle to support reasoning about long-duration, unstructured activities in augmented reality (AR) settings. To bridge this gap, the authors introduce the first long-context egocentric video understanding benchmark designed specifically for AR scenarios. Their approach integrates gaze-derived human attention signals directly into the question-generation process, enabling more authentic modeling of human cognition and interaction patterns. The benchmark comprises over 100 hours of video and more than 5,000 behavior-driven multiple-choice question-answer pairs, substantially enhancing both the realism and the difficulty of evaluation, and offering a more practical platform for assessing video understanding in AR environments.
πŸ“ Abstract
Long-context egocentric video understanding has recently attracted significant research attention, with augmented reality (AR) highlighted as one of its most important application domains. Nevertheless, the task remains highly challenging due to the need for reasoning over extended temporal contexts and diverse, unstructured activities. Although several benchmarks exist, most egocentric datasets rely on human-worn cameras and focus mainly on visual content, with limited consideration of underlying user behavior when forming video-related queries. EgoEverything is a benchmark that explicitly considers human behavior by leveraging human attention signals, abstracted from gaze data, when generating questions. It comprises over 5,000 multiple-choice question-answer pairs spanning more than 100 hours of video. By integrating human attention signals during question generation, it more faithfully captures natural human behavior and offers a realistic evaluation setting for long-context egocentric video understanding in AR.
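The evaluation setting the abstract describes (a model answering behavior-driven multiple-choice questions about long egocentric videos) can be sketched as a simple top-1 accuracy loop. This is an illustrative sketch only: the item schema (`video_id`, `gaze_span`, etc.) and the `evaluate` helper are hypothetical names, not the benchmark's actual API.

```python
from dataclasses import dataclass

@dataclass
class MCQItem:
    """One behavior-driven multiple-choice item (hypothetical schema)."""
    video_id: str
    question: str
    choices: list   # candidate answers
    answer_idx: int # index of the correct choice
    gaze_span: tuple  # (start_s, end_s) gaze-attention window, illustrative

def evaluate(model_fn, items):
    """Top-1 accuracy of model_fn(question, choices) -> chosen index."""
    correct = sum(
        model_fn(it.question, it.choices) == it.answer_idx for it in items
    )
    return correct / len(items)

# Tiny toy set; the real benchmark has 5,000+ items over 100+ hours of video.
items = [
    MCQItem("vid_001", "Which object did the wearer focus on?",
            ["mug", "laptop", "phone", "book"], 1, (12.0, 18.5)),
    MCQItem("vid_001", "What activity followed?",
            ["typing", "drinking", "reading", "calling"], 0, (18.5, 40.0)),
]

# A trivial baseline that always picks the first choice.
baseline = lambda question, choices: 0
print(evaluate(baseline, items))  # 0.5 on this toy set
```

Keeping the gaze window on each item reflects the benchmark's key design choice: questions are grounded in where the user actually attended, not just in what the camera saw.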
Problem

Research questions and friction points this paper is trying to address.

egocentric video understanding
long context
human behavior
augmented reality
attention signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

egocentric video understanding
human attention signals
augmented reality
long-context reasoning
behavior-inspired benchmark