Depth Anything at Any Condition

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing monocular depth estimation (MDE) models exhibit limited generalization under complex open-world conditions—such as abrupt illumination changes, adverse weather, and sensor distortions. To address this, we propose an unsupervised consistency-regularized fine-tuning framework that enhances robustness using only a small set of unlabeled data. Our method introduces two key components: (i) cross-domain consistency constraints to improve invariance to degradation conditions, and (ii) spatial distance constraints that explicitly model pixel-wise relative geometric relationships, thereby significantly improving semantic boundary sharpness and fine-grained depth accuracy. Crucially, the approach requires no ground-truth depth annotations and enables zero-shot domain adaptation. It achieves state-of-the-art performance on both synthetic and real-world degraded scenarios—including rain, fog, low illumination, and lens distortion—and consistently outperforms prior methods across multiple standard benchmarks (KITTI, NYUv2) and their corresponding degradation subsets.
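The consistency-regularized fine-tuning described above can be illustrated with a minimal sketch: predict depth on a clean image, treat that prediction as a frozen pseudo-label, and penalize deviation of the prediction on a degraded copy of the same image. This is an illustrative toy in numpy, not the paper's implementation; the scale-and-shift-invariant normalization (median/MAD) is a common MDE convention assumed here, and all function names are hypothetical.

```python
import numpy as np

def ssi_normalize(d):
    """Scale-and-shift-invariant normalization (median / mean absolute
    deviation), so the loss ignores the global scale ambiguity of
    monocular depth. This particular normalizer is an assumption."""
    t = np.median(d)
    s = np.mean(np.abs(d - t)) + 1e-8
    return (d - t) / s

def consistency_loss(depth_clean, depth_degraded):
    """L1 consistency between the prediction on the clean image (used as
    a pseudo-label; no gradient would flow through it in training) and
    the prediction on a synthetically degraded copy of the same image."""
    target = ssi_normalize(depth_clean)
    pred = ssi_normalize(depth_degraded)
    return float(np.mean(np.abs(pred - target)))
```

Because both maps are normalized before comparison, a prediction that differs from the pseudo-label only by a global scale and shift incurs (near-)zero loss; only changes in relative structure are penalized.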

📝 Abstract
We present Depth Anything at Any Condition (DepthAnything-AC), a foundation monocular depth estimation (MDE) model capable of handling diverse environmental conditions. Previous foundation MDE models achieve impressive performance across general scenes but fail to perform well in complex open-world environments that involve challenging conditions, such as illumination variations, adverse weather, and sensor-induced distortions. To overcome the challenges of data scarcity and the inability to generate high-quality pseudo-labels from corrupted images, we propose an unsupervised consistency regularization finetuning paradigm that requires only a relatively small amount of unlabeled data. Furthermore, we propose the Spatial Distance Constraint, which explicitly encourages the model to learn patch-level relative relationships, resulting in clearer semantic boundaries and more accurate details. Experimental results demonstrate the zero-shot capabilities of DepthAnything-AC across diverse benchmarks, including real-world adverse weather benchmarks, synthetic corruption benchmarks, and general benchmarks. Project Page: https://ghost233lism.github.io/depthanything-AC-page Code: https://github.com/HVision-NKU/DepthAnythingAC
Problem

Research questions and friction points this paper is trying to address.

Limited generalization of existing MDE models under diverse environmental conditions
Poor performance in complex open-world environments (illumination changes, adverse weather, sensor distortions)
Data scarcity and low-quality pseudo-labels from corrupted images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised consistency regularization finetuning paradigm
Spatial Distance Constraint for patch-level relationships
Requires only a small amount of unlabeled data and no ground-truth depth annotations
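The Spatial Distance Constraint listed above enforces patch-level relative relationships rather than absolute depth values. A minimal sketch of that idea: pool the depth map into patches, then match the pairwise differences between patch depths against those of a target map. This is an assumed illustration in numpy; the paper's actual formulation, patch size, and distance measure may differ, and all names here are hypothetical.

```python
import numpy as np

def patch_means(depth, patch=4):
    """Average depth over non-overlapping patch x patch windows."""
    h, w = depth.shape
    d = depth[: h - h % patch, : w - w % patch]
    return d.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

def spatial_distance_loss(pred, target, patch=4):
    """Penalize mismatch in pairwise *relative* depth differences
    between patches, rather than in absolute per-pixel values."""
    p = patch_means(pred, patch).ravel()
    t = patch_means(target, patch).ravel()
    dp = p[:, None] - p[None, :]   # pairwise patch differences, prediction
    dt = t[:, None] - t[None, :]   # pairwise patch differences, target
    return float(np.mean(np.abs(dp - dt)))
```

Note that a constant offset added to the whole prediction cancels in the pairwise differences, so the loss is invariant to global depth shifts; it reacts only when the relative ordering or spacing between regions changes, which is what sharpens boundaries between objects at different depths.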