DepthDark: Robust Monocular Depth Estimation for Low-Light Environments

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Monocular depth estimation foundation models suffer significant performance degradation in low-light conditions, primarily due to the absence of robust low-light-specific architectures, large-scale high-quality paired depth datasets, and efficient fine-tuning paradigms. To address these challenges, we propose DepthDark: (1) we introduce the first high-fidelity nighttime paired depth dataset, uniquely integrating physics-driven flare- and noise-simulation modules to enhance realism and diversity; (2) we design an illumination-guided, multiscale feature fusion strategy combined with parameter-efficient fine-tuning (PEFT) to improve generalization under illumination variation. Evaluated on nuScenes-Night and RobotCar-Night, DepthDark achieves state-of-the-art performance and demonstrates strong robustness with limited training data and computational resources. Our work establishes a new paradigm for adapting foundation models to low-light monocular depth estimation.

📝 Abstract
In recent years, foundation models for monocular depth estimation have received increasing attention. Current methods mainly address typical daylight conditions, but their effectiveness decreases notably in low-light environments, and there is a lack of robust foundation models for monocular depth estimation designed specifically for low-light scenarios. This largely stems from the absence of large-scale, high-quality paired depth datasets for low-light conditions and of an effective parameter-efficient fine-tuning (PEFT) strategy. To address these challenges, we propose DepthDark, a robust foundation model for low-light monocular depth estimation. We first introduce a flare-simulation module and a noise-simulation module to accurately simulate the imaging process under nighttime conditions, producing high-quality paired depth datasets for low-light conditions. Additionally, we present an effective low-light PEFT strategy that uses illumination guidance and multiscale feature fusion to enhance the model's capability in low-light environments. Our method achieves state-of-the-art depth estimation performance on the challenging nuScenes-Night and RobotCar-Night datasets, validating its effectiveness with limited training data and computing resources.
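For intuition, the data-synthesis pipeline described above (darken a daytime frame, overlay flare, inject sensor noise, and reuse the original depth map as the paired label) might look roughly like the sketch below. The function name, parameter values, Gaussian-blob flare model, and shot-plus-read-noise model are all illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def simulate_nighttime(rgb, depth, exposure=0.15, flare_strength=0.6,
                       shot_noise=0.01, read_noise=0.002, rng=None):
    """Turn a daytime RGB frame into a pseudo-nighttime frame while
    reusing its depth map as the paired label. Hypothetical sketch:
    parameter values and the flare model are illustrative only."""
    rng = rng if rng is not None else np.random.default_rng(0)
    img = rgb.astype(np.float32) / 255.0

    # 1) Global exposure drop approximating a nighttime scene.
    dark = img * exposure

    # 2) Flare simulation: Gaussian glare blobs centered on the
    #    brightest pixels, mimicking street lights and headlights.
    h, w, _ = img.shape
    flat = np.argsort(img.mean(axis=-1), axis=None)[-3:]
    ys, xs = np.unravel_index(flat, (h, w))
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = 0.05 * w
    for y, x in zip(ys, xs):
        blob = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        dark = dark + flare_strength * blob[..., None]

    # 3) Noise simulation: signal-dependent shot noise plus read noise,
    #    a common approximation of low-light sensor statistics.
    std = np.sqrt(shot_noise * np.clip(dark, 0, None) + read_noise ** 2)
    night = np.clip(dark + rng.normal(0.0, std), 0.0, 1.0)

    return (night * 255).astype(np.uint8), depth  # depth label unchanged
```

Because the depth label is carried over untouched, any daytime dataset with ground-truth depth can in principle be converted into a nighttime paired dataset this way.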
Problem

Research questions and friction points this paper is trying to address.

Lack of robust monocular depth models for low-light conditions
Missing high-quality paired depth datasets for nighttime scenarios
Need for effective parameter-efficient fine-tuning for low-light adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flare-simulation module that reproduces nighttime glare in the imaging process
Noise-simulation module for realistic paired training data
PEFT strategy with illumination guidance and multiscale feature fusion (sketched below)
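To make the last item concrete, here is a hedged PyTorch sketch of how an illumination-guided PEFT adapter and a multiscale fusion head could look. The class names, the sigmoid gating form, and the fusion design are assumptions for illustration; the paper's actual modules are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationLoRA(nn.Module):
    """LoRA-style adapter gated by an illumination cue. The frozen
    backbone weights stay untouched; only `down`, `up`, and `gate`
    are trained. Hypothetical design, not the paper's exact PEFT."""
    def __init__(self, dim, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)        # adapter starts as identity
        self.gate = nn.Linear(1, dim)         # per-channel gate from brightness

    def forward(self, tokens, illum):
        # tokens: (B, N, dim) features from a frozen encoder block
        # illum:  (B, 1) mean scene brightness in [0, 1]
        g = torch.sigmoid(self.gate(illum)).unsqueeze(1)   # (B, 1, dim)
        return tokens + g * self.up(self.down(tokens))

class MultiScaleFusion(nn.Module):
    """Project each pyramid level to a shared width, upsample to the
    finest resolution, and sum. Illustrative fusion head only."""
    def __init__(self, in_dims, dim=256):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(d, dim, 1) for d in in_dims)

    def forward(self, feats):                 # list ordered coarse to fine
        target = feats[-1].shape[-2:]         # finest spatial size
        fused = 0
        for p, f in zip(self.proj, feats):
            fused = fused + F.interpolate(p(f), size=target,
                                          mode="bilinear", align_corners=False)
        return fused

# Example: gate a ViT token map by frame brightness.
adapter = IlluminationLoRA(dim=768)
tokens = torch.randn(2, 196, 768)
illum = torch.tensor([[0.08], [0.12]])        # two dark frames
out = adapter(tokens, illum)                  # same shape as `tokens`
```

Zero-initializing the up-projection makes the adapted model start out identical to the frozen backbone, which is a common way to keep PEFT training stable.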
Longjian Zeng
Hangzhou Dianzi University
Zunjie Zhu
Hangzhou Dianzi University
Rongfeng Lu
Hangzhou Dianzi University
Ming Lu
Intel Labs China
Bolun Zheng
Hangzhou Dianzi University
multimedia, computer vision
Chenggang Yan
Hangzhou Dianzi University
Anke Xue
Hangzhou Dianzi University