🤖 AI Summary
This paper addresses the challenge of unifying diverse 3D vision tasks—including uncalibrated Structure-from-Motion (SfM), multi-view stereo (MVS), monocular depth estimation, camera localization, and depth completion—under a single end-to-end framework. The proposed method, MapAnything, is a unified Transformer architecture that jointly regresses metric-scale 3D scene geometry and camera parameters directly from single- or multi-view images and optional geometric priors (e.g., intrinsics, poses, depth maps, or partial reconstructions). Its core contributions are: (1) a factored multi-view geometry representation that explicitly unifies depth maps, local ray maps, camera poses, and a metric scale factor, enabling a principled upgrade of local reconstructions into a globally consistent metric space; and (2) standardized multi-task supervision with flexible input augmentation, facilitating efficient end-to-end joint training. Experiments demonstrate state-of-the-art or competitive performance across all targeted tasks, with improved generalizability and training efficiency compared to task-specific models.
📝 Abstract
We introduce MapAnything, a unified transformer-based feed-forward model that ingests one or more images along with optional geometric inputs such as camera intrinsics, poses, depth, or partial reconstructions, and then directly regresses the metric 3D scene geometry and cameras. MapAnything leverages a factored representation of multi-view scene geometry, i.e., a collection of depth maps, local ray maps, camera poses, and a metric scale factor that effectively upgrades local reconstructions into a globally consistent metric frame. Standardizing the supervision and training across diverse datasets, along with flexible input augmentation, enables MapAnything to address a broad range of 3D vision tasks in a single feed-forward pass, including uncalibrated structure-from-motion, calibrated multi-view stereo, monocular depth estimation, camera localization, depth completion, and more. We provide extensive experimental analyses and model ablations demonstrating that MapAnything outperforms or matches specialist feed-forward models while offering more efficient joint training behavior, thus paving the way toward a universal 3D reconstruction backbone.
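The factored representation described above can be illustrated with a minimal sketch. The function names, array layouts, and exact composition order here are assumptions for illustration, not the paper's actual API: per-pixel unit rays scaled by depth yield camera-frame points, a camera-to-world pose maps them into a shared world frame, and a single global scale factor upgrades the up-to-scale reconstruction to metric units.

```python
import numpy as np

def local_points(ray_map, depth):
    """Back-project pixels into the camera frame.

    ray_map: (H, W, 3) unit ray directions in the camera frame
    depth:   (H, W) per-pixel depth along each ray
    Returns (H, W, 3) camera-frame points.
    """
    return ray_map * depth[..., None]

def to_metric_world(local_pts, R_cw, t_cw, scale):
    """Upgrade camera-frame points to a globally consistent metric frame.

    local_pts: (H, W, 3) camera-frame points
    R_cw, t_cw: camera-to-world rotation (3, 3) and translation (3,)
    scale: global metric scale factor shared across all views
    """
    # Rigid transform into the shared world frame, then apply the
    # metric scale to the whole (up-to-scale) reconstruction.
    return scale * (local_pts @ R_cw.T + t_cw)

# Toy example: one pixel looking straight down the optical axis.
ray_map = np.array([[[0.0, 0.0, 1.0]]])   # (1, 1, 3)
depth = np.array([[3.0]])                 # (1, 1)
pts = local_points(ray_map, depth)        # camera-frame point (0, 0, 3)
world = to_metric_world(pts, np.eye(3), np.array([1.0, 0.0, 0.0]), 2.0)
```

In this sketch the identity rotation plus a unit translation places the camera-frame point (0, 0, 3) at (1, 0, 3) in the world frame, and the scale factor of 2 upgrades it to (2, 0, 6) in metric units.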