TrackMAE: Video Representation Learning via Track Mask and Predict

📅 2026-03-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing masked video modeling approaches implicitly encode motion information, making it difficult to capture fine-grained temporal dynamics and thereby limiting their performance on motion-sensitive tasks. To address this limitation, this work proposes TrackMAE, the first framework to explicitly incorporate sparse motion trajectories as a reconstruction signal in masked video modeling. Specifically, off-the-shelf point trackers are leveraged to extract motion trajectories, a motion-aware tubular masking strategy is introduced, and a joint reconstruction objective spanning pixel, feature, and motion spaces is formulated. This design significantly enhances the model’s capacity to learn temporal dynamics, consistently outperforming state-of-the-art self-supervised methods across six downstream datasets and yielding more discriminative and generalizable representations.
πŸ“ Abstract
Masked video modeling (MVM) has emerged as a simple and scalable self-supervised pretraining paradigm, but it encodes motion information only implicitly, limiting the temporal dynamics captured in the learned representations. As a result, such models struggle on motion-centric tasks that require fine-grained motion awareness. To address this, we propose TrackMAE, a simple masked video modeling paradigm that explicitly uses motion information as a reconstruction signal. In TrackMAE, we use an off-the-shelf point tracker to sparsely track points in the input videos, generating motion trajectories. We further exploit the extracted trajectories to improve random tube masking with a motion-aware masking strategy, and we enhance the video representations learned in both the pixel and semantic-feature reconstruction spaces by providing a complementary supervision signal in the form of motion targets. We evaluate on six datasets across diverse downstream settings and find that TrackMAE consistently outperforms state-of-the-art video self-supervised learning baselines, learning more discriminative and generalizable representations. Code available at https://github.com/rvandeghen/TrackMAE
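The two core ideas in the abstract, a tube mask biased toward trajectory-covered patches and a joint reconstruction objective over pixel, feature, and motion targets, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the point tracker is treated as a black box that yields `(N, T, 2)` pixel coordinates, the masking bias is a simple hit-count-weighted sample, and the loss weights `w_feat` and `w_motion` are hypothetical.

```python
import numpy as np

def motion_aware_tube_mask(traj_xy, grid_hw, patch, mask_ratio, rng):
    """Tube mask (shared across frames) biased toward patches crossed by
    point trajectories.

    traj_xy: (N, T, 2) pixel coordinates of N tracked points over T frames,
             as produced by an off-the-shelf point tracker (assumption).
    grid_hw: (H, W) patch grid; patch: patch size in pixels.
    Returns a boolean (H*W,) array, True = masked.
    """
    H, W = grid_hw
    hits = np.zeros(H * W)
    cols = np.clip(traj_xy[..., 0] // patch, 0, W - 1).astype(int)
    rows = np.clip(traj_xy[..., 1] // patch, 0, H - 1).astype(int)
    np.add.at(hits, (rows * W + cols).ravel(), 1.0)  # count trajectory hits
    # Sample patches to mask without replacement, weighted by motion hits.
    probs = (hits + 1e-6) / (hits + 1e-6).sum()
    n_mask = int(mask_ratio * H * W)
    masked = rng.choice(H * W, size=n_mask, replace=False, p=probs)
    mask = np.zeros(H * W, dtype=bool)
    mask[masked] = True
    return mask

def joint_loss(pred_pix, tgt_pix, pred_feat, tgt_feat, pred_traj, tgt_traj,
               w_feat=1.0, w_motion=1.0):
    """Joint reconstruction objective over pixel, feature, and motion spaces.
    Plain MSE per space; the paper's exact losses and weights are unknown."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    return (mse(pred_pix, tgt_pix)
            + w_feat * mse(pred_feat, tgt_feat)
            + w_motion * mse(pred_traj, tgt_traj))
```

The motion term supervises the decoder with trajectory targets, which is what distinguishes this setup from pixel-only MVM; everything else follows the standard MAE encode-then-reconstruct recipe.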
Problem

Research questions and friction points this paper is trying to address.

masked video modeling
motion awareness
temporal dynamics
self-supervised learning
video representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

masked video modeling
motion-aware masking
trajectory tracking
self-supervised learning
video representation learning