🤖 AI Summary
Under adverse weather and lighting conditions (e.g., heavy rain, low illumination), RGB-based depth completion suffers severe performance degradation due to modality failure and the absence of reliable ground-truth depth. This work presents a systematic investigation of thermal imaging combined with sparse LiDAR for all-weather depth completion. We propose the COntrastive learning and Pseudo-Supervision (COPS) framework: it leverages a monocular depth foundation model to generate high-fidelity pseudo-labels, mitigating ground-truth scarcity, and combines thermal-LiDAR cross-modal feature alignment with a depth-aware contrastive loss to sharpen depth boundaries and improve completion robustness. Evaluated on the cross-scene MS² and ViViD benchmarks, COPS reduces average error by 23.6% under rainy and low-light conditions. The results demonstrate the critical role of thermal sensing in enabling reliable all-weather depth perception and validate the efficacy of our approach.
📝 Abstract
Depth completion, which estimates dense depth from sparse LiDAR and RGB images, has demonstrated outstanding performance in well-lit conditions. However, due to the limitations of RGB sensors, existing methods often struggle to achieve reliable performance in harsh environments, such as heavy rain and low-light conditions. Furthermore, we observe that ground-truth depth maps often suffer from large missing measurements in adverse weather such as heavy rain, leading to insufficient supervision. In contrast, thermal cameras are known to provide clear and reliable visibility in such conditions, yet research on thermal-LiDAR depth completion remains underexplored. Moreover, the characteristics of thermal images, such as blurriness, low contrast, and noise, lead to unclear depth boundaries. To address these challenges, we first evaluate the feasibility and robustness of thermal-LiDAR depth completion across diverse lighting (e.g., well-lit, low-light), weather (e.g., clear-sky, rainy), and environment (e.g., indoor, outdoor) conditions by conducting extensive benchmarks on the MS$^2$ and ViViD datasets. In addition, we propose a framework that utilizes COntrastive learning and Pseudo-Supervision (COPS) to enhance depth boundary clarity and improve completion accuracy by leveraging a depth foundation model in two key ways. First, COPS enforces a depth-aware contrastive loss between points at different depths, mining positive and negative samples with a monocular depth foundation model to sharpen depth boundaries. Second, it mitigates the incomplete supervision from ground-truth depth maps by leveraging the foundation model's predictions as dense depth priors. We also provide in-depth analyses of the key challenges in thermal-LiDAR depth completion to aid understanding of the task and encourage future research.
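The pseudo-supervision idea described above can be sketched as a simple training objective: supervise the predicted depth with sparse ground truth where it exists, and with a foundation model's pseudo-depth, fitted to metric scale via least squares on the valid pixels, everywhere else. This is a minimal illustrative sketch, not the paper's exact formulation; the scale-and-shift alignment, the L1 loss form, and the `w_pseudo` weight are assumptions.

```python
import numpy as np

def align_scale_shift(pseudo, gt, mask):
    """Least-squares fit of a scale s and shift t so that s*pseudo + t
    matches the sparse ground truth on the valid pixels (assumed alignment
    scheme for relative monocular pseudo-depth)."""
    p = pseudo[mask]
    g = gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pseudo + t

def pseudo_supervised_loss(pred, gt, pseudo, w_pseudo=0.5):
    """L1 loss on sparse GT pixels plus a weighted dense L1 loss against
    the aligned pseudo-depth prior (illustrative combination)."""
    mask = gt > 0  # sparse LiDAR GT: zero marks missing measurements
    l_gt = np.abs(pred[mask] - gt[mask]).mean()
    aligned = align_scale_shift(pseudo, gt, mask)
    l_pseudo = np.abs(pred - aligned).mean()  # dense supervision everywhere
    return l_gt + w_pseudo * l_pseudo
```

When the ground truth is heavily incomplete (as under rain), the dense pseudo-depth term still provides a gradient at every pixel, which is the motivation for the second component of COPS.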