Rethinking JEPA: Compute-Efficient Video SSL with Frozen Teachers

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video self-supervised learning methods such as V-JEPA rely on an exponential moving average (EMA) to update the teacher model within a teacher-student architecture, leading to tight coupling between teacher and student, complex model selection, and opaque training dynamics. Method: We propose SALT, a decoupled video self-supervised learning framework that employs a frozen static teacher encoder to generate target representations, fully eliminating teacher-student architectural coupling. It further introduces a two-stage training paradigm that separates pixel-level reconstruction pretraining from masked latent-space prediction, thereby removing the EMA dependence entirely. Contribution/Results: SALT features an asymmetric architecture and high computational efficiency, achieving significantly higher accuracy than V-JEPA 2 at equivalent FLOPs and attaining a better accuracy-computation Pareto frontier. It is robust to teacher quality, enables flexible resource allocation, and supports scalable deployment.

📝 Abstract
Video Joint Embedding Predictive Architectures (V-JEPA) learn generalizable, off-the-shelf video representations by predicting masked regions in latent space with an exponential moving average (EMA)-updated teacher. While EMA prevents representation collapse, it complicates scalable model selection and couples teacher and student architectures. We revisit masked-latent prediction and show that a frozen teacher suffices. Concretely, we (i) train a target encoder with a simple pixel-reconstruction objective under V-JEPA masking, then (ii) freeze it and train a student to predict the teacher's latents on masked regions. This yields a two-stage, unregularized scheme that we refer to as SALT (Static-teacher Asymmetric Latent Training). SALT decouples optimization into pixel reconstruction (teacher) and masked latent prediction (student), increasing transparency, efficiency, and scalability while preserving the representations' ability to generalize under frozen evaluation. Empirically, our student models outperform the recently proposed V-JEPA 2 encoders under frozen-backbone evaluation across diverse benchmarks. They are also more compute-optimal: at matched pretraining FLOPs, our method achieves higher probing accuracy, and its scaling curves dominate V-JEPA's accuracy-FLOPs Pareto frontier. Finally, we find that student quality is remarkably robust to teacher quality: high-performing students emerge even with small, sub-optimal teachers. This suggests that the compute budget should overwhelmingly favor the student. These results position SALT as a simple, scalable, and compute-efficient alternative to EMA-based self-distillation for video representation learning.
Problem

Research questions and friction points this paper is trying to address.

Replacing EMA-updated teachers with a frozen teacher for video representation learning
Decoupling teacher and student optimization to improve training efficiency
Developing a compute-efficient SSL method that outperforms existing V-JEPA models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a frozen teacher for latent prediction
Decouples pixel reconstruction from latent prediction
Enables compute-efficient scaling with a static teacher
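The two-stage recipe described above can be sketched with a toy linear model. This is an illustrative simplification, not the paper's architecture: the "teacher" here is a fixed random projection standing in for the stage-1 pixel-reconstruction encoder, the student is a single linear predictor over a pooled visible-patch context plus a positional one-hot, and all sizes are made up. Only the structure of the objective (predict frozen-teacher latents on masked patches) follows SALT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only): 16 patches of dim 8, latent dim 4.
num_patches, patch_dim, latent_dim = 16, 8, 4

# Stage 1 stand-in: a frozen "static teacher" encoder. In SALT this encoder
# is pretrained with pixel reconstruction; here it is just a fixed random map.
W_teacher = rng.normal(size=(patch_dim, latent_dim))

# Student predictor: maps [visible-patch context ; patch position] -> latent.
W_student = rng.normal(size=(patch_dim + num_patches, latent_dim)) * 0.1

X = rng.normal(size=(num_patches, patch_dim))   # one toy "video clip"
mask = np.arange(num_patches) % 2 == 0          # deterministic mask for the demo
                                                # (the paper uses random masking)

targets = (X @ W_teacher)[mask]                 # frozen-teacher latents (no grad)
context = X[~mask].mean(axis=0)                 # student sees only visible patches
idx = np.flatnonzero(mask)
# One input row per masked patch: shared visible context + positional one-hot.
F = np.concatenate([np.tile(context, (len(idx), 1)),
                    np.eye(num_patches)[idx]], axis=1)

# Stage 2: train the student against the static teacher's latents.
lr, losses = 0.5, []
for _ in range(300):
    err = F @ W_student - targets               # masked-latent prediction error
    losses.append((err ** 2).mean())
    # Manual gradient of the masked-latent MSE w.r.t. the student weights.
    W_student -= lr * 2 * F.T @ err / err.size

print(f"masked-latent loss: {losses[0]:.3f} -> {losses[-1]:.6f}")
```

Because the teacher is never updated, the two stages are fully decoupled: the teacher's weights appear only in `targets`, so student training never touches them, which is the property that removes the EMA machinery.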