A Simple yet Effective Test-Time Adaptation for Zero-Shot Monocular Metric Depth Estimation

📅 2024-12-18
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the challenge of calibrating affine-invariant disparities to metric depth in zero-shot monocular depth estimation, this paper proposes a fine-tuning-free, test-time adaptive rescaling method. Leveraging sparse 3D points (e.g., from low-resolution LiDAR or structure-from-motion with IMU-provided poses) as geometric priors, the method resolves the affine ambiguity in predicted depth maps by jointly correcting scale and shift at test time. Its key contribution is a metric depth recovery approach that modifies no model parameters: it preserves the pretrained model's strong generalization capability while remaining robust to both sparse-input noise and model prediction errors. Evaluated on NYUv2 and KITTI, the method outperforms existing zero-shot approaches, matches the performance of fully supervised fine-tuning, and clearly surpasses depth-completion-based methods.
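The joint scale-and-shift correction described above can be sketched as a closed-form least-squares fit between the predicted affine-invariant disparity and the inverse of the sparse metric depths. This is a minimal illustration under that assumption, not the paper's exact algorithm; the function name is hypothetical, and a robust variant (e.g., RANSAC over the sparse points) would follow the same pattern:

```python
import numpy as np

def rescale_disparity(pred_disp, sparse_depth, mask):
    """Fit scale s and shift t so that s * pred_disp + t approximates
    1 / sparse_depth at the sparse points, then convert the rescaled
    disparity to a metric depth map."""
    d = pred_disp[mask]            # affine-invariant disparities at sparse points
    g = 1.0 / sparse_depth[mask]   # metric inverse depths (targets)
    # Least-squares solution of [d, 1] @ [s, t]^T ≈ g
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    metric_disp = s * pred_disp + t
    return 1.0 / np.clip(metric_disp, 1e-6, None)   # metric depth map

# Synthetic check: an affine-distorted inverse depth is recovered exactly.
rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 10.0, size=(8, 8))     # ground-truth metric depth
pred = 2.0 * (1.0 / depth) + 0.5                # affine-invariant prediction
mask = np.zeros((8, 8), dtype=bool)
mask[::3, ::3] = True                           # a few sparse 3D points
recovered = rescale_disparity(pred, depth, mask)
```

With noise-free sparse points, the two fitted parameters undo the affine ambiguity everywhere in the map, which is why only a handful of LiDAR or SfM points suffice.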

📝 Abstract
The recent development of foundation models for monocular depth estimation, such as Depth Anything, paved the way to zero-shot monocular depth estimation. Since such a model returns an affine-invariant disparity map, the favored technique to recover metric depth is to fine-tune the model. However, this stage is not straightforward: it can be costly and time-consuming because of the training and the creation of the dataset, which must contain images captured by the camera that will be used at test time together with the corresponding ground truth. Moreover, fine-tuning may also degrade the generalization capacity of the original model. Instead, we propose in this paper a new method to rescale Depth Anything predictions using 3D points provided by sensors or techniques such as low-resolution LiDAR or structure-from-motion with poses given by an IMU. This approach avoids fine-tuning and preserves the generalizing power of the original depth estimation model while being robust to noise in the sparse depth or in the depth model. Our experiments highlight improvements over zero-shot monocular metric depth estimation methods, competitive results compared to fine-tuned approaches, and better robustness than depth completion approaches. Code available at https://gitlab.ensta.fr/ssh/monocular-depth-rescaling.
Problem

Research questions and friction points this paper is trying to address.

Eliminates need for costly fine-tuning in depth estimation.
Uses 3D points to rescale Depth Anything predictions.
Improves robustness and accuracy in zero-shot depth estimation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rescales Depth Anything predictions using sparse 3D points.
Avoids fine-tuning and preserves the model's generalization.
Uses sensors or techniques such as LiDAR or structure-from-motion.