🤖 AI Summary
This work addresses the lack of benchmarks and methods for fine-grained spatiotemporal localization in egocentric videos. It introduces EgoMask, the first pixel-level spatiotemporal localization benchmark for egocentric vision, accompanied by a large-scale training dataset, EgoMask-Train. To tackle challenges inherent to egocentric data—including short object durations, sparse trajectories, small object sizes, and large positional variability—the authors design an automated annotation pipeline that generates referring expressions and precise object masks for video clips of multiple durations. Experiments reveal that state-of-the-art models suffer substantial performance degradation on EgoMask yet achieve significant gains after fine-tuning on EgoMask-Train. Moreover, models trained on EgoMask-Train demonstrate strong cross-dataset generalization, transferring effectively from egocentric to exocentric settings, which validates EgoMask's utility for advancing egocentric understanding and cross-view transfer learning.
📝 Abstract
Spatiotemporal video grounding aims to localize target entities in videos based on textual queries. While existing research has made significant progress on exocentric videos, the egocentric setting remains relatively underexplored, despite its growing importance in applications such as augmented reality and robotics. In this work, we conduct a systematic analysis of the discrepancies between egocentric and exocentric videos, revealing key challenges such as shorter object durations, sparser trajectories, smaller object sizes, and larger positional shifts. To address these challenges, we introduce EgoMask, the first pixel-level benchmark for fine-grained spatiotemporal grounding in egocentric videos. It is constructed by our proposed automatic annotation pipeline, which annotates referring expressions and object masks across short-, medium-, and long-term videos. Additionally, we create EgoMask-Train, a large-scale training dataset to facilitate model development. Experiments demonstrate that state-of-the-art spatiotemporal grounding models perform poorly on our benchmark EgoMask, but fine-tuning on EgoMask-Train yields significant improvements while preserving performance on exocentric datasets. Our work thus provides essential resources and insights for advancing egocentric video understanding. Our code is available at https://github.com/LaVi-Lab/EgoMask.