Robust Egocentric Visual Attention Prediction Through Language-guided Scene Context-aware Learning

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of inaccurate and non-robust visual attention prediction in first-person videos, which arises from complex scene dynamics and semantic ambiguity. To this end, we propose a language-guided, context-aware learning framework that leverages natural language descriptions to guide video representation learning. The framework integrates a context-aware module with mechanisms for foreground region focusing and background distraction suppression, and is trained via a dual-objective strategy to accurately predict the wearer's gaze points. Extensive experiments on the Ego4D and Aria Everyday Activities datasets demonstrate that our model significantly outperforms existing methods, validating its effectiveness and robustness across diverse and dynamic egocentric scenarios.

πŸ“ Abstract
As the demand for analyzing egocentric videos grows, egocentric visual attention prediction, anticipating where a camera wearer will attend, has garnered increasing attention. However, it remains challenging due to the inherent complexity and ambiguity of dynamic egocentric scenes. Motivated by evidence that scene contextual information plays a crucial role in modulating human attention, in this paper, we present a language-guided scene context-aware learning framework for robust egocentric visual attention prediction. We first design a context perceiver which is guided to summarize the egocentric video based on a language-based scene description, generating context-aware video representations. We then introduce two training objectives that: 1) encourage the framework to focus on the target point-of-interest regions and 2) suppress distractions from irrelevant regions which are less likely to attract first-person attention. Extensive experiments on Ego4D and Aria Everyday Activities (AEA) datasets demonstrate the effectiveness of our approach, achieving state-of-the-art performance and enhanced robustness across diverse, dynamic egocentric scenarios.
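The abstract describes training with two complementary objectives: one pulling predicted attention toward target point-of-interest regions and one suppressing attention on irrelevant regions. The paper's exact formulation is not given here, so the following is only a minimal hypothetical sketch of such a dual-objective loss, assuming the predicted attention and ground-truth gaze are normalized heatmaps and a binary foreground (point-of-interest) mask is available; the function name, the KL-based focusing term, and the mask-weighted suppression term are all illustrative choices, not the authors' definitions.

```python
import numpy as np

def dual_objective_loss(pred, gaze_map, fg_mask, lam=0.5):
    """Hypothetical dual-objective loss (illustrative, not the paper's formula).

    pred:     predicted attention heatmap (H, W), non-negative, sums to 1
    gaze_map: ground-truth gaze heatmap (H, W), non-negative, sums to 1
    fg_mask:  binary (H, W) mask of point-of-interest (foreground) regions
    lam:      weight balancing the two objectives
    """
    eps = 1e-8
    # Objective 1 (foreground focusing): KL divergence driving the
    # predicted heatmap toward the ground-truth gaze distribution.
    focus = np.sum(gaze_map * (np.log(gaze_map + eps) - np.log(pred + eps)))
    # Objective 2 (background suppression): penalize predicted attention
    # mass falling outside the point-of-interest regions.
    suppress = np.sum(pred * (1.0 - fg_mask))
    return focus + lam * suppress
```

A prediction concentrated on the ground-truth gaze point inside the foreground mask incurs near-zero loss, while diffuse attention spilling into background regions is penalized by both terms.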
Problem

Research questions and friction points this paper is trying to address.

egocentric visual attention prediction
scene context
dynamic egocentric scenes
visual attention modeling
first-person vision

Innovation

Methods, ideas, or system contributions that make the work stand out.

language-guided learning
scene context-awareness
egocentric visual attention
context perceiver
robust attention prediction