🤖 AI Summary
Current unsupervised video pretraining methods face dual bottlenecks: caption-based synthetic text supervision offers limited semantic coverage, failing to capture implicit world knowledge such as motion dynamics, 3D geometry, and physical laws, while masked video modeling (MVM) leverages spatiotemporal structure but suffers from semantic conflicts in pixel-level reconstruction or shortcut learning in latent-space prediction. To address these issues, we propose an Encoder-Predictor-Decoder framework that separates prediction from decoding, with the predictor serving as a latent-space world model. Our two-stage pretraining first integrates a conditional diffusion decoder with image-level semantic priors, then employs frozen-target distillation, jointly optimizing pixel fidelity and semantic abstraction. Trained exclusively on unlabeled videos, our method significantly outperforms existing unsupervised approaches across multiple video understanding benchmarks and establishes a scalable, general-purpose paradigm for video foundation models.
📝 Abstract
Large-scale video-text pretraining achieves strong performance but depends on noisy, synthetic captions with limited semantic coverage, often overlooking implicit world knowledge such as object motion, 3D geometry, and physical cues. In contrast, masked video modeling (MVM) directly exploits spatiotemporal structure but trails text-supervised methods on general tasks. We find this gap arises from overlooked architectural issues: pixel-level reconstruction converges slowly and its low-level objective often conflicts with semantics, while latent prediction often encourages shortcut learning. To address these issues, we disentangle the traditional encoder-decoder design into an Encoder-Predictor-Decoder (EPD) framework, where the predictor acts as a latent world model, and propose InternVideo-Next, a two-stage pretraining scheme that builds a semantically consistent yet detail-preserving latent space for this world model. The conventional linear decoder in pixel MVM forces the predictor's output latents to remain linearly separable in pixel space, which conflicts with semantic abstraction; Stage 1 therefore replaces it with a conditional diffusion decoder and injects reliable image-level semantic priors to improve both semantics and convergence, bridging pixel-level fidelity with high-level semantic abstraction. Stage 2 further learns world knowledge by predicting frozen Stage 1 targets within this latent space, mitigating shortcut learning. Trained on public, unlabeled videos, InternVideo-Next achieves state-of-the-art results across benchmarks and provides a scalable path toward general video representation learning.
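The Encoder-Predictor-Decoder decomposition described above can be sketched schematically. This is a minimal illustration only: the random linear maps stand in for learned networks, and all dimensions, the masking scheme, and the mean-pooled conditioning of the predictor are assumptions for demonstration, not the paper's actual design (which uses a conditional diffusion decoder rather than a linear one).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper).
D_PATCH, D_LATENT, N_TOKENS, MASK_RATIO = 64, 32, 16, 0.5

# Random linear maps as stand-ins for the learned encoder, predictor, decoder.
W_enc = rng.normal(size=(D_PATCH, D_LATENT)) * 0.1
W_pred = rng.normal(size=(D_LATENT, D_LATENT)) * 0.1
W_dec = rng.normal(size=(D_LATENT, D_PATCH)) * 0.1


def epd_forward(patches, mask):
    """One EPD pass: encode visible tokens, let the predictor (the latent
    world model) fill in latents for masked tokens, then decode every
    latent back to patch space."""
    # Encoder sees only visible tokens; masked slots are zeroed.
    z = np.where(mask[:, None], 0.0, patches @ W_enc)
    # Predictor infers masked latents from a summary of the visible ones
    # (mean pooling here is a placeholder for attention-based prediction).
    z_pred = np.where(mask[:, None], z.mean(axis=0) @ W_pred, z)
    # Decoder maps all latents back to patch (pixel) space.
    return z_pred @ W_dec


patches = rng.normal(size=(N_TOKENS, D_PATCH))
mask = rng.random(N_TOKENS) < MASK_RATIO  # True = masked token
recon = epd_forward(patches, mask)
```

Because encoding, prediction, and decoding are separate modules, the latent space the predictor operates in can be shaped independently of the pixel-space decoding target, which is the architectural freedom the two-stage scheme exploits.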