🤖 AI Summary
Existing 3D vision pretraining models predominantly process single frames and lack explicit multi-view consistency modeling, resulting in poor correspondence accuracy and suboptimal downstream performance. To address this, we propose Muskie, the first native multi-view visual backbone. Muskie introduces multi-view masked image modeling (MV-MIM) as a self-supervised pretraining objective: aggressive cross-view masking and geometry-constrained reconstruction force the model to recover heavily masked content in one view by finding and aggregating correspondences from other views, implicitly learning viewpoint-invariant representations and robust geometric understanding without any 3D supervision. Experiments demonstrate that Muskie significantly outperforms state-of-the-art frame-wise backbones (e.g., DINO) in multi-view point correspondence accuracy and substantially improves downstream tasks including camera pose estimation and point cloud reconstruction.
📝 Abstract
We present Muskie, a native multi-view vision backbone designed for 3D vision tasks. Unlike existing models, which are frame-wise and exhibit limited multi-view consistency, Muskie processes multiple views simultaneously and introduces multi-view consistency in the pre-training stage. Muskie is trained to reconstruct heavily masked content in one view by finding and utilizing geometric correspondences from other views. Through this pretext task and our proposed aggressive masking strategy, the model implicitly learns view-invariant features and develops strong geometric understanding without any 3D supervision. Compared with state-of-the-art frame-wise backbones such as DINO, Muskie achieves higher multi-view correspondence accuracy. Furthermore, we demonstrate that using Muskie as a backbone consistently enhances performance on downstream 3D tasks, including camera pose estimation and pointmap reconstruction. Code is publicly available at https://leo-frank.github.io/Muskie/
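To make the pretext task concrete, here is a minimal NumPy sketch of the two pieces the abstract describes: aggressively masking patches in a target view while other views stay visible, and computing a reconstruction loss only on the masked patches (MAE-style). The function names, the mask ratio, and the per-patch MSE formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def aggressive_cross_view_mask(num_patches, mask_ratio=0.9, rng=None):
    """Boolean mask over the target view's patches.

    Assumption: "aggressive" masking means hiding a large fraction
    (here 90%) of patches in one view, so the model must rely on the
    other, unmasked views to reconstruct the hidden content.
    """
    rng = rng or np.random.default_rng(0)
    num_masked = int(num_patches * mask_ratio)
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.choice(num_patches, size=num_masked, replace=False)] = True
    return mask

def masked_reconstruction_loss(pred_patches, target_patches, mask):
    """Mean squared error computed only over the masked patches.

    pred_patches / target_patches: (num_patches, patch_dim) arrays.
    Only masked positions contribute, so visible patches (and the
    reference views) are never directly supervised.
    """
    diff = (pred_patches - target_patches) ** 2
    return diff[mask].mean()
```

In a full pipeline, `pred_patches` would come from a decoder that attends across all input views, which is what pushes the encoder toward view-invariant, geometry-aware features.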