Muskie: Multi-view Masked Image Modeling for 3D Vision Pre-training

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D vision pre-training models predominantly process single frames and lack explicit multi-view consistency modeling, resulting in poor correspondence accuracy and suboptimal downstream performance. To address this, the authors propose Muskie, a native multi-view vision backbone. Muskie introduces multi-view masked image modeling (MV-MIM) as a self-supervised pre-training objective: content in one view is aggressively masked and reconstructed by finding and exploiting geometric correspondences in the other views, so the model implicitly learns viewpoint-invariant representations and robust geometric understanding without any 3D supervision. Experiments demonstrate that Muskie outperforms state-of-the-art frame-wise backbones (e.g., DINO) in multi-view point correspondence accuracy and substantially improves downstream tasks including camera pose estimation and pointmap reconstruction.

📝 Abstract
We present Muskie, a native multi-view vision backbone designed for 3D vision tasks. Unlike existing models, which are frame-wise and exhibit limited multi-view consistency, Muskie is designed to process multiple views simultaneously and introduces multi-view consistency in the pre-training stage. Muskie is trained to reconstruct heavily masked content in one view by finding and utilizing geometric correspondences from other views. Through this pretext task and our proposed aggressive masking strategy, the model implicitly learns view-invariant features and develops strong geometric understanding without any 3D supervision. Compared with state-of-the-art frame-wise backbones such as DINO, Muskie achieves higher multi-view correspondence accuracy. Furthermore, we demonstrate that using Muskie as a backbone consistently enhances performance on downstream 3D tasks, including camera pose estimation and pointmap reconstruction. Code is publicly available at https://leo-frank.github.io/Muskie/
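The pretext task described above can be sketched in a few lines: patchify a target view, aggressively mask most of its patches, and compute a reconstruction loss only on the masked patches (which a real model would predict from the visible patches of both views). This is a minimal numpy sketch under stated assumptions — the function names, patch size, and 90% mask ratio are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, patch=16):
    """Split an HxWxC image into (N, patch*patch*C) flat patches."""
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    patches = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, c)
    return patches.transpose(0, 2, 1, 3, 4).reshape(gh * gw, -1)

def mask_view(patches, ratio=0.9, rng=rng):
    """Aggressively mask a high ratio of patches in the target view."""
    n = patches.shape[0]
    n_mask = int(n * ratio)
    perm = rng.permutation(n)
    return perm[n_mask:], perm[:n_mask]  # visible indices, masked indices

def mvmim_loss(pred_patches, target_patches, masked_idx):
    """MSE reconstruction loss computed only on the masked patches."""
    diff = pred_patches[masked_idx] - target_patches[masked_idx]
    return float(np.mean(diff ** 2))

# Two views of the same scene (random stand-ins for real images).
view_a = rng.standard_normal((64, 64, 3))  # fully visible source view
view_b = rng.standard_normal((64, 64, 3))  # heavily masked target view

patches_b = patchify(view_b)
visible_idx, masked_idx = mask_view(patches_b, ratio=0.9)

# A real decoder would predict the masked patches of view B from the
# visible patches of both views; a zero prediction stands in here.
pred = np.zeros_like(patches_b)
loss = mvmim_loss(pred, patches_b, masked_idx)
```

Because 90% of the target view is hidden, the remaining visible patches alone are rarely sufficient, which is what pushes the model to recover the missing content from the other views.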
Problem

Research questions and friction points this paper is trying to address.

Develops multi-view vision backbone for 3D tasks without 3D supervision
Learns view-invariant features through masked content reconstruction
Improves multi-view correspondence accuracy and downstream 3D performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view masked image modeling for 3D vision
Reconstruct masked views using geometric correspondences
Learn view-invariant features without 3D supervision
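The correspondence-accuracy evaluation implied by the points above can be illustrated with a simple nearest-neighbor matcher over per-patch features: match each feature in view A to its most cosine-similar feature in view B and score agreement with ground-truth correspondences. This is an illustrative numpy sketch, not the paper's evaluation code; the toy data constructs view B as a noisy shuffle of view A so the ground truth is known.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_features(feats_a, feats_b):
    """For each feature in view A, return the index of the most
    cosine-similar feature in view B (nearest-neighbor matching)."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return np.argmax(a @ b.T, axis=1)

def correspondence_accuracy(pred_matches, gt_matches):
    """Fraction of predicted matches that agree with ground truth."""
    return float(np.mean(pred_matches == gt_matches))

# Toy setup: view B's features are a noisy, shuffled copy of view A's,
# so the ground-truth correspondence is known by construction.
n, d = 50, 32
feats_a = rng.standard_normal((n, d))
gt = rng.permutation(n)
feats_b = np.empty_like(feats_a)
feats_b[gt] = feats_a + 0.05 * rng.standard_normal((n, d))

pred = match_features(feats_a, feats_b)
acc = correspondence_accuracy(pred, gt)
```

A view-invariant backbone makes this matching easy: the more stable the features are across viewpoints, the higher the accuracy of such a nearest-neighbor matcher.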