🤖 AI Summary
This work addresses the challenge of learning high-quality dense visual representations in video self-supervised learning that simultaneously capture spatial structure, semantic coherence, and temporal consistency. To this end, the authors propose a unified framework for joint image and video modeling built around three key innovations: a dense prediction mechanism in which both visible and masked tokens contribute to the training signal, deep self-supervision derived from intermediate encoder layers, and multimodal tokenizers combined with large-scale training strategies. The method achieves state-of-the-art performance on benchmarks such as Ego4D and EPIC-KITCHENS and substantially improves downstream task performance, most notably raising real-robot grasping success rates by 20 percentage points, with strong results in navigation, depth estimation, and action recognition.
📝 Abstract
We present V-JEPA 2.1, a family of self-supervised models that learn dense, high-quality visual representations for both images and videos while retaining strong global scene understanding. The approach combines four key components. First, a dense predictive loss uses a masking-based objective in which both visible and masked tokens contribute to the training signal, encouraging explicit spatial and temporal grounding. Second, deep self-supervision applies the same objective at multiple intermediate encoder layers to improve representation quality. Third, multimodal tokenizers enable unified training across images and videos. Finally, the model benefits from effective scaling in both model capacity and training data. Together, these design choices produce representations that are spatially structured, semantically coherent, and temporally consistent (a minimal illustrative sketch of the first two components appears after the abstract).
Empirically, V-JEPA 2.1 achieves state-of-the-art performance on several challenging benchmarks, including 7.71 mAP on Ego4D for short-term object-interaction anticipation and 40.8 Recall@5 on EPIC-KITCHENS for high-level action anticipation, as well as a 20-point improvement in real-robot grasping success rate over V-JEPA-2 AC. The model also demonstrates strong performance in robotic navigation (5.687 absolute trajectory error on TartanDrive), depth estimation (0.307 RMSE on NYUv2 with a linear probe), and global recognition (77.7 on Something-Something-V2). These results show that V-JEPA 2.1 significantly advances the state of the art in dense visual understanding and world modeling.
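The first two components of the abstract, the dense predictive loss over both visible and masked tokens and the deep self-supervision at intermediate layers, can be illustrated compactly. Below is a minimal PyTorch-style sketch under stated assumptions: the function names, the smooth-L1 regression objective, the 75% masking ratio, and the number of supervised depths are all illustrative choices, not the authors' implementation, which is not described at this level of detail in the abstract.

```python
# Illustrative sketch only: names, the smooth-L1 objective, and the masking
# ratio are assumptions for exposition, not the paper's training code.
import torch
import torch.nn.functional as F


def dense_predictive_loss(pred, target, mask):
    """Regression loss over ALL token positions of a clip.

    pred, target: (B, N, D) predicted and target token representations.
    mask:         (B, N) bool, True where a token was hidden from the context.
    Both masked and visible positions contribute to the training signal,
    which is what makes the objective 'dense'.
    """
    per_token = F.smooth_l1_loss(pred, target, reduction="none").mean(dim=-1)  # (B, N)
    return per_token[mask].mean() + per_token[~mask].mean()


def deep_self_supervision(preds_by_layer, targets_by_layer, mask):
    """Average the same dense objective over several intermediate encoder depths."""
    losses = [
        dense_predictive_loss(p, t, mask)
        for p, t in zip(preds_by_layer, targets_by_layer)
    ]
    return torch.stack(losses).mean()


if __name__ == "__main__":
    B, N, D = 2, 1568, 1024          # batch size, tokens per clip, embedding dim
    mask = torch.rand(B, N) < 0.75   # hypothetical 75% masking ratio
    preds = [torch.randn(B, N, D) for _ in range(3)]    # e.g. 3 supervised depths
    targets = [torch.randn(B, N, D) for _ in range(3)]
    print(deep_self_supervision(preds, targets, mask).item())
```

In this reading, keeping a per-token regression target, rather than pooling into a single clip-level vector, is what encourages the spatial and temporal grounding the abstract emphasizes, and repeating the objective at intermediate depths extends that supervision to earlier layers of the encoder.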