🤖 AI Summary
To address self-supervised learning's weaker geometric representations (relative to its semantic ones), this paper extends masked autoencoding (MAE) to an arbitrary number of unlabeled views of the same scene, explicitly modeling 3D geometry. The resulting model, MuM, uniformly masks all views and employs a lightweight decoder with inter-frame attention to promote geometric consistency across views while keeping the design simple and efficient. Compared to prior work such as CroCo, MuM is simpler and more scalable. Evaluated on three core 3D vision tasks (feedforward reconstruction, dense image matching, and relative pose estimation), it outperforms the state-of-the-art visual encoders DINOv3 and CroCo v2, empirically validating its ability to learn geometrically aware representations.
📝 Abstract
Self-supervised learning on images seeks to extract meaningful visual representations from unlabeled data. When scaled to large datasets, this paradigm has achieved state-of-the-art performance, and the resulting models, such as DINOv3, have seen widespread adoption. However, most prior efforts optimize for semantic understanding rather than geometric reasoning. One important exception is Cross-View Completion (CroCo), a form of masked autoencoding (MAE) tailored for 3D understanding. In this work, we continue on the path proposed by CroCo and focus on learning features tailored for 3D vision. In a nutshell, we extend MAE to arbitrarily many views of the same scene. By uniformly masking all views and employing a lightweight decoder with inter-frame attention, our approach is inherently simpler and more scalable than CroCo. We evaluate the resulting model, MuM, extensively on downstream tasks including feedforward reconstruction, dense image matching and relative pose estimation, finding that it outperforms the state-of-the-art visual encoders DINOv3 and CroCo v2.
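The core masking strategy described above — applying the same random masking uniformly to every view of a scene, rather than masking only target views as in CroCo — can be illustrated with a short sketch. The function name, shapes, and mask ratio below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def uniform_multiview_mask(num_views, num_patches, mask_ratio, rng):
    """Sample a random patch mask for each view with a shared mask ratio.

    Hypothetical sketch: every view keeps the same number of visible
    patches; which patches are kept is sampled independently per view.
    Returns the kept-patch indices and a boolean mask (True = masked).
    """
    num_keep = int(num_patches * (1 - mask_ratio))
    keep_idx = np.stack(
        [rng.permutation(num_patches)[:num_keep] for _ in range(num_views)]
    )  # shape: (num_views, num_keep)
    mask = np.ones((num_views, num_patches), dtype=bool)
    for v in range(num_views):
        mask[v, keep_idx[v]] = False  # mark kept patches as visible
    return keep_idx, mask

# Example: 4 views of a scene, 196 patches each (14x14 grid), 75% masked.
rng = np.random.default_rng(0)
keep_idx, mask = uniform_multiview_mask(
    num_views=4, num_patches=196, mask_ratio=0.75, rng=rng
)
```

Only the visible patches from each view would then be encoded, with the lightweight decoder attending across views (inter-frame attention) to reconstruct the masked regions.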