Fine-grained Spatiotemporal Grounding on Egocentric Videos

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of benchmarks and methods for fine-grained spatiotemporal localization in egocentric videos. We introduce EgoMask, the first pixel-level spatiotemporal localization benchmark for egocentric vision, accompanied by a large-scale training dataset, EgoMask-Train. To tackle challenges inherent to egocentric data—including short object durations, sparse trajectories, small object sizes, and large positional variability—we design an automated annotation pipeline that generates referring expressions and precise object masks for multi-duration video clips. Experiments reveal that state-of-the-art models suffer substantial performance degradation on EgoMask, yet achieve significant gains after fine-tuning on EgoMask-Train. Moreover, models trained on EgoMask-Train demonstrate strong cross-dataset generalization—transferring effectively from egocentric to exocentric settings—validating EgoMask’s utility in advancing egocentric understanding and cross-view transfer learning.

📝 Abstract
Spatiotemporal video grounding aims to localize target entities in videos based on textual queries. While existing research has made significant progress on exocentric videos, the egocentric setting remains relatively underexplored, despite its growing importance in applications such as augmented reality and robotics. In this work, we conduct a systematic analysis of the discrepancies between egocentric and exocentric videos, revealing key challenges such as shorter object durations, sparser trajectories, smaller object sizes, and larger positional shifts. To address these challenges, we introduce EgoMask, the first pixel-level benchmark for fine-grained spatiotemporal grounding in egocentric videos. It is constructed by our proposed automatic annotation pipeline, which annotates referring expressions and object masks across short-, medium-, and long-term videos. Additionally, we create EgoMask-Train, a large-scale training dataset to facilitate model development. Experiments demonstrate that state-of-the-art spatiotemporal grounding models perform poorly on our benchmark EgoMask, but fine-tuning on EgoMask-Train yields significant improvements while preserving performance on exocentric datasets. Our work thus provides essential resources and insights for advancing egocentric video understanding. Our code is available at https://github.com/LaVi-Lab/EgoMask.
Problem

Research questions and friction points this paper is trying to address.

Localize target entities in egocentric videos using text queries
Address challenges like short object durations and sparse trajectories
Develop benchmark and dataset for egocentric video grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces EgoMask for pixel-level egocentric video grounding
Proposes automatic annotation pipeline for diverse video durations
Creates large-scale EgoMask-Train dataset for model fine-tuning
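Pixel-level spatiotemporal grounding of the kind EgoMask benchmarks is commonly scored by mask IoU aggregated over a clip's frames. A minimal sketch of such a metric (an illustrative assumption, not the paper's official evaluation code; function and variable names are hypothetical):

```python
import numpy as np

def spatiotemporal_iou(pred_masks, gt_masks):
    """Mean per-frame mask IoU over a clip.

    pred_masks, gt_masks: boolean arrays of shape (T, H, W),
    one binary segmentation mask per frame.
    Frames where both masks are empty count as IoU 1.0 — a common
    convention; the benchmark's exact rule may differ.
    """
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))
```

Averaging per-frame IoU penalizes both spatial errors (poor masks) and temporal errors (predicting the object in frames where it is absent, or missing frames where it is present), which matches the short-duration, sparse-trajectory challenges the paper highlights.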
Shuo Liang
The Chinese University of Hong Kong

Yiwu Zhong
CUHK / University of Wisconsin-Madison
Vision-Language Learning · Multi-Modal Models · Embodied AI

Zi-Yuan Hu
The Chinese University of Hong Kong
Multimodal Learning · Natural Language Processing · Parameter-Efficient Tuning

Yeyao Tao
The Chinese University of Hong Kong

Liwei Wang
The Chinese University of Hong Kong