🤖 AI Summary
Existing egocentric video moment localization methods neglect the dynamic coupling between object semantics and the wearer's visual attention, limiting their effectiveness on fine-grained queries. To address this, we propose a cross-modal alignment network that integrates object-aware modeling with shot-level motion trajectory modeling. Specifically, an object enhancement mechanism explicitly captures the association between query-specified target objects and the wearer's visual attention, while a shot-level motion trajectory modeling module jointly leverages multi-scale video feature extraction, detection-guided fine-grained text–video alignment, and contrastive learning for optimization. Evaluated on three standard benchmarks, our method achieves state-of-the-art performance, with average Recall@1 improvements of 3.2–5.8% over prior methods. The gains are most pronounced on object-centric question-answering queries, demonstrating superior localization accuracy in semantically grounded scenarios.
📝 Abstract
Egocentric video grounding is a crucial task for embodied intelligence applications, distinct from exocentric video moment localization. Existing methods primarily focus on the distributional differences between egocentric and exocentric videos but often neglect key characteristics of egocentric videos and the fine-grained information emphasized by question-type queries. To address these limitations, we propose OSGNet, an Object-Shot enhanced Grounding Network for egocentric video. Specifically, we extract object information from videos to enrich the video representation, particularly for objects highlighted in the textual query but not directly captured in the video features. Additionally, we analyze the frequent shot movements inherent to egocentric videos, leveraging these movements to extract the wearer's attention information, which strengthens the model's cross-modal alignment. Experiments conducted on three datasets demonstrate that OSGNet achieves state-of-the-art performance, validating the effectiveness of our approach. Our code can be found at https://github.com/Yisen-Feng/OSGNet.