AI Summary
This work addresses the challenge of inaccurate and non-robust visual attention prediction in first-person videos, which arises from complex scene dynamics and semantic ambiguity. To this end, we propose a language-guided, context-aware learning framework that leverages natural language descriptions to guide video representation learning. The framework integrates a context-aware module with mechanisms for foreground region focusing and background distraction suppression, and is trained via a dual-objective strategy to accurately predict the wearer's gaze points. Extensive experiments on the Ego4D and Aria Everyday Activities datasets demonstrate that our model significantly outperforms existing methods, validating its effectiveness and robustness across diverse and dynamic egocentric scenarios.
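To make the context-aware module concrete, below is a minimal sketch of one plausible realization: a perceiver block in which video tokens cross-attend to an embedded language description of the scene, yielding context-aware video representations. The module name, dimensions, and the cross-attention layout are illustrative assumptions, not the exact architecture described in the paper.

```python
import torch
import torch.nn as nn


class ContextPerceiver(nn.Module):
    """Hypothetical language-guided context perceiver (illustrative only)."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, video_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, N_v, dim) spatio-temporal features of the egocentric clip
        # text_tokens:  (B, N_t, dim) embedded language-based scene description
        # Video tokens query the description, so each token is re-expressed
        # in terms of the scene context it belongs to.
        ctx, _ = self.cross_attn(query=video_tokens, key=text_tokens, value=text_tokens)
        ctx = self.norm(video_tokens + ctx)
        return ctx + self.ffn(ctx)  # (B, N_v, dim) context-aware video representation
```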
Abstract
As the demand for analyzing egocentric videos grows, egocentric visual attention prediction, i.e., anticipating where a camera wearer will attend, has garnered increasing attention. However, it remains challenging due to the inherent complexity and ambiguity of dynamic egocentric scenes. Motivated by evidence that scene contextual information plays a crucial role in modulating human attention, in this paper we present a language-guided, scene context-aware learning framework for robust egocentric visual attention prediction. We first design a context perceiver that is guided to summarize the egocentric video based on a language-based scene description, generating context-aware video representations. We then introduce two training objectives that 1) encourage the framework to focus on the target point-of-interest regions and 2) suppress distractions from irrelevant regions that are less likely to attract first-person attention. Extensive experiments on the Ego4D and Aria Everyday Activities (AEA) datasets demonstrate the effectiveness of our approach, achieving state-of-the-art performance and enhanced robustness across diverse, dynamic egocentric scenarios.
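The two training objectives can likewise be sketched as a dual-objective loss. The form below assumes the focusing term is a KL divergence between the predicted and ground-truth gaze maps and the suppression term penalizes predicted attention mass falling on background regions; both forms, the `background_mask` input, and the weighting `lam` are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def dual_objective_loss(pred_map: torch.Tensor,
                        gt_gaze_map: torch.Tensor,
                        background_mask: torch.Tensor,
                        lam: float = 0.5,
                        eps: float = 1e-8) -> torch.Tensor:
    # pred_map, gt_gaze_map: (B, H, W) predicted / ground-truth gaze density maps
    # background_mask:       (B, H, W) binary mask of regions unlikely to attract attention
    pred = pred_map.flatten(1)
    pred = pred / (pred.sum(dim=1, keepdim=True) + eps)
    gt = gt_gaze_map.flatten(1)
    gt = gt / (gt.sum(dim=1, keepdim=True) + eps)

    # 1) Focusing objective: match the wearer's gaze distribution over point-of-interest regions.
    focus = F.kl_div((pred + eps).log(), gt, reduction="batchmean")

    # 2) Suppression objective: push predicted attention away from background distractions.
    suppress = (pred_map * background_mask).mean()

    return focus + lam * suppress
```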