🤖 AI Summary
This paper tackles key challenges in monocular depth estimation (dependence on camera intrinsics, lack of absolute scale, blurry high-frequency detail, and slow inference) by proposing a zero-shot, calibration-free, high-resolution metric depth estimation method. It introduces: (1) an efficient multi-scale vision transformer architecture tailored for dense prediction; (2) a training paradigm that combines real and synthetic data with a boundary-aware loss; and (3) an end-to-end focal-length regression module that recovers absolute scale from a single image without requiring intrinsic parameters. The approach achieves state-of-the-art performance on NYUv2 and KITTI with markedly sharper depth-map boundaries, and a 2.25-megapixel input is processed in just 0.3 seconds on a standard GPU. Code and pretrained models are publicly released.
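The multi-scale architecture processes a high-resolution image as a set of overlapping fixed-size patches that share one ViT encoder, whose per-patch outputs are then merged back into a full-resolution map. The paper's actual fusion is learned; the sketch below only illustrates the patch-extraction and overlap-averaging bookkeeping, with the patch size, overlap ratio, and function names chosen for illustration rather than taken from the released code.

```python
import numpy as np

def extract_overlapping_patches(image, patch=384, overlap=0.25):
    """Split an (H, W, C) image into overlapping square patches.

    Returns the stacked patches and their top-left (y, x) coordinates.
    Assumes H and W are at least `patch` (sufficient for a sketch).
    """
    h, w, _ = image.shape
    stride = int(patch * (1 - overlap))
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    # Ensure the bottom/right borders are covered.
    if ys[-1] != h - patch:
        ys.append(h - patch)
    if xs[-1] != w - patch:
        xs.append(w - patch)
    coords = [(y, x) for y in ys for x in xs]
    patches = np.stack([image[y:y + patch, x:x + patch] for y, x in coords])
    return patches, coords

def merge_patch_predictions(preds, coords, shape, patch=384):
    """Average per-patch predictions (N, patch, patch) into one full map,
    weighting overlapping regions by their coverage count."""
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape, dtype=np.float64)
    for p, (y, x) in zip(preds, coords):
        out[y:y + patch, x:x + patch] += p
        weight[y:y + patch, x:x + patch] += 1.0
    return out / np.maximum(weight, 1e-8)
```

Because every patch passes through the same encoder at its native ViT resolution, the cost grows with patch count rather than requiring a single gigantic attention map, which is one reason such designs stay fast at multi-megapixel inputs.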
📝 Abstract
We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image. Extensive experiments analyze specific design choices and demonstrate that Depth Pro outperforms prior work along multiple dimensions. We release code and weights at https://github.com/apple/ml-depth-pro.
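The "dedicated evaluation metrics for boundary accuracy" mentioned above score how well predicted depth discontinuities line up with ground-truth ones. A minimal sketch of that idea, not the paper's exact formulation: mark a pixel as a boundary when the depth ratio to a neighbour exceeds a threshold, then compute an F1 between the predicted and ground-truth boundary masks (the `ratio` threshold and function names here are illustrative assumptions).

```python
import numpy as np

def depth_boundaries(depth, ratio=1.15):
    """Mark pixels where depth jumps by more than `ratio` between
    horizontal or vertical neighbours (a simple occluding-edge proxy)."""
    d = np.asarray(depth, dtype=np.float64)
    edges = np.zeros(d.shape, dtype=bool)
    rx = np.maximum(d[:, 1:], d[:, :-1]) / np.minimum(d[:, 1:], d[:, :-1])
    ry = np.maximum(d[1:, :], d[:-1, :]) / np.minimum(d[1:, :], d[:-1, :])
    edges[:, 1:] |= rx > ratio
    edges[1:, :] |= ry > ratio
    return edges

def boundary_f1(pred_depth, gt_depth, ratio=1.15):
    """F1 score between predicted and ground-truth boundary masks."""
    p = depth_boundaries(pred_depth, ratio)
    g = depth_boundaries(gt_depth, ratio)
    tp = np.logical_and(p, g).sum()
    precision = tp / max(p.sum(), 1)
    recall = tp / max(g.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A ratio test (rather than an absolute difference) makes the boundary definition scale-invariant, so the same threshold works for near and far scene regions.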