🤖 AI Summary
To address two key challenges in self-supervised video representation learning—(1) reconstruction ambiguity arising from stochastic temporal sampling, and (2) insufficient semantic compression due to pixel-level masked modeling—this paper proposes a novel framework. First, it introduces a “sandwich”-style deterministic temporal sampling strategy that explicitly models temporal dependencies between a central frame and its bilateral auxiliary frames, thereby mitigating reconstruction uncertainty. Second, it incorporates a latent-space self-distillation auxiliary branch that enables high-fidelity temporal reconstruction in a compact semantic space, enhancing representation discriminability. The method jointly optimizes masked video modeling and latent representation recovery, requiring no external annotations. Evaluated on multiple downstream tasks—including action recognition and temporal action localization—the framework consistently outperforms state-of-the-art methods, achieving significant gains in both accuracy and generalization.
📝 Abstract
The past decade has witnessed notable achievements in self-supervised learning for video tasks. Recent efforts typically adopt the Masked Video Modeling (MVM) paradigm, leading to significant progress on multiple video tasks. However, two critical challenges remain: 1) Without human annotations, the random temporal sampling introduces uncertainty, increasing the difficulty of model training. 2) Previous MVM methods primarily recover the masked patches in the pixel space, leading to insufficient information compression for downstream tasks. To address these challenges jointly, we propose a self-supervised framework that leverages Temporal Correspondence for video Representation learning (T-CoRe). For challenge 1), we propose a sandwich sampling strategy that selects two auxiliary frames to reduce reconstruction uncertainty in a two-side-squeezing manner. For challenge 2), we introduce an auxiliary branch into a self-distillation architecture to restore representations in the latent space, generating high-level semantic representations enriched with temporal information. Experiments show that T-CoRe consistently achieves superior performance across several downstream tasks, demonstrating its effectiveness for video representation learning. The code is available at https://github.com/yafeng19/T-CORE.
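To make the sandwich sampling idea concrete, here is a minimal illustrative sketch of selecting a central frame and two bilateral auxiliary frames from a clip. This is not the official T-CoRe implementation (see the linked repository for that); the fixed offset `delta` and the random choice of the central index are hypothetical simplifications for illustration.

```python
import random

def sandwich_sample(num_frames: int, delta: int = 4):
    """Illustrative 'sandwich' sampling.

    Picks a central frame index and two auxiliary frames at a fixed
    offset `delta` on either side, so the auxiliary frames bound the
    central frame from both directions. `delta` is a hypothetical
    hyperparameter, not necessarily the paper's setting.
    """
    # Keep the central index far enough from both clip boundaries
    # that both auxiliary frames remain valid indices.
    center = random.randint(delta, num_frames - 1 - delta)
    return center - delta, center, center + delta

left, center, right = sandwich_sample(16, delta=4)
# The two auxiliary frames "squeeze" the central frame from both sides,
# which is the intuition behind reducing reconstruction uncertainty.
```

In this toy version, only the central index is random; the bilateral offsets are fixed, so once the center is chosen the auxiliary frames are fully determined.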