DEPTHOR++: Robust Depth Enhancement from a Real-World Lightweight dToF and RGB Guidance

πŸ“… 2025-09-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address severe degradation of depth map quality in real-world scenarios caused by noise, calibration errors, and outliers in lightweight direct time-of-flight (dToF) sensors, this paper proposes a robust depth enhancement framework. First, we establish a noise modeling and simulation-based training strategy tailored to realistic dToF characteristics. Second, we design a learning-free, differentiable outlier detection mechanism. Third, we integrate a pre-trained monocular depth prior into an RGB-guided, lightweight depth completion network. Our method eliminates reliance on ideal dToF inputs and precise dToF–RGB alignment. Evaluated on multiple real-world datasets, it achieves state-of-the-art performance: average RMSE and relative error (Rel) improve by 22% and 11%, respectively, while specular-region error decreases by 37%. Remarkably, depth accuracy from low-cost dToF sensors surpasses that of high-end devices in empirical measurements.

πŸ“ Abstract
Depth enhancement, which converts raw dToF signals into dense depth maps using RGB guidance, is crucial for improving depth perception in high-precision tasks such as 3D reconstruction and SLAM. However, existing methods often assume ideal dToF inputs and perfect dToF-RGB alignment, overlooking calibration errors and anomalies, thus limiting real-world applicability. This work systematically analyzes the noise characteristics of real-world lightweight dToF sensors and proposes a practical and novel depth completion framework, DEPTHOR++, which enhances robustness to noisy dToF inputs from three key aspects. First, we introduce a simulation method based on synthetic datasets to generate realistic training samples for robust model training. Second, we propose a learnable-parameter-free anomaly detection mechanism to identify and remove erroneous dToF measurements, preventing misleading propagation during completion. Third, we design a depth completion network tailored to noisy dToF inputs, which integrates RGB images and pre-trained monocular depth estimation priors to improve depth recovery in challenging regions. On the ZJU-L5 dataset and real-world samples, our training strategy significantly boosts existing depth completion models, with our model achieving state-of-the-art performance, improving RMSE and Rel by 22% and 11% on average. On the Mirror3D-NYU dataset, by incorporating the anomaly detection method, our model improves upon the previous SOTA by 37% in mirror regions. On the Hammer dataset, using simulated low-cost dToF data from RealSense L515, our method surpasses the L515 measurements with an average gain of 22%, demonstrating its potential to enable low-cost sensors to outperform higher-end devices. Qualitative results across diverse real-world datasets further validate the effectiveness and generalizability of our approach.
Problem

Research questions and friction points this paper is trying to address.

Enhancing depth perception from noisy lightweight dToF sensors using RGB guidance
Addressing calibration errors and anomalies in dToF-RGB alignment for real-world use
Improving depth completion robustness in challenging regions like mirrors and low-cost devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulation method generates realistic training samples
Anomaly detection removes erroneous dToF measurements
Depth completion network integrates RGB and depth priors
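The paper describes its anomaly detection as a learnable-parameter-free mechanism that removes erroneous dToF measurements before completion. As a rough, hypothetical sketch of that general idea (not the authors' actual algorithm), one could flag sparse dToF points that disagree with a scale-aligned monocular depth prior beyond a relative-error threshold; the function name, the median-ratio alignment, and the `rel_thresh` value below are all illustrative assumptions:

```python
import numpy as np

def filter_dtof_outliers(dtof_depth, prior_depth, rel_thresh=0.3):
    """Illustrative outlier filter for sparse dToF depth (NOT the paper's method).

    dtof_depth  : metric sparse depth map; zeros mark pixels with no measurement.
    prior_depth : dense relative depth from a monocular estimator.
    Returns the sparse map with points inconsistent with the prior zeroed out.
    """
    valid = dtof_depth > 0
    # Align the relative monocular prior to metric scale via the median ratio
    # over valid dToF points (a common, robust scale-alignment heuristic).
    scale = np.median(dtof_depth[valid] / prior_depth[valid])
    aligned = prior_depth * scale
    # Drop measurements whose relative disagreement with the prior is too large.
    rel_err = np.abs(dtof_depth - aligned) / np.maximum(aligned, 1e-6)
    keep = valid & (rel_err <= rel_thresh)
    return np.where(keep, dtof_depth, 0.0)
```

For example, a dToF point reading 5.0 m where the aligned prior predicts about 2.0 m would be rejected as an outlier, while measurements consistent with the prior pass through unchanged.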
Jijun Xiang
Huazhong University of Science and Technology
Computer Vision
Longliang Liu
Huazhong University of Science & Technology
optical flow, stereo matching, depth estimation
Xuan Zhu
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
Xianqi Wang
Huazhong University of Science and Technology
Stereo Matching
Min Lin
Principal Research Scientist, Sea AI Lab
Artificial Intelligence
Xin Yang
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China