🤖 AI Summary
Depth completion (DC) methods suffer from poor generalization across datasets and under unknown sparse depth patterns, hindering real-world deployment. To address this, we propose a robust DC framework tailored for realistic multi-density sparse inputs. Our approach features: (i) a multi-resolution depth ensemble layer for scale-adaptive feature fusion; (ii) a probabilistic weighting loss that explicitly models depth uncertainty; and (iii) synthetic data mixing with scale normalization to enhance out-of-distribution robustness. Furthermore, we introduce Robust-DC, the first zero-shot cross-domain evaluation protocol for DC, designed to rigorously assess generalization under domain shifts. Extensive experiments demonstrate that our method achieves significant improvements over state-of-the-art approaches on Robust-DC as well as standard benchmarks including NYUv2 and KITTI, with markedly enhanced generalization and robustness. All code, pretrained models, and evaluation tools are publicly released.
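To make the "multi-density sparse inputs" setting concrete: a sparse depth map keeps only a handful of valid pixels from the dense ground truth, and a robust DC model must cope with anywhere from tens to thousands of such points. The sketch below simulates this with uniform random sampling; `sample_sparse_depth` is a hypothetical helper for illustration, not the paper's actual pattern generator (which may include LiDAR-like or feature-based patterns).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sparse_depth(dense_depth, num_points):
    """Simulate a sparse depth input by keeping `num_points` randomly
    chosen valid pixels of a dense ground-truth map and zeroing the rest.
    Illustrative only: real sparse patterns (LiDAR lines, SfM keypoints)
    are far less uniform than this random sampling."""
    valid = np.flatnonzero(dense_depth > 0)
    keep = rng.choice(valid, size=min(num_points, valid.size), replace=False)
    sparse = np.zeros_like(dense_depth)
    sparse.flat[keep] = dense_depth.flat[keep]
    return sparse

# Densities ranging from very sparse to moderately dense on a 64x64 map.
dense = rng.uniform(1.0, 10.0, size=(64, 64))
inputs = {n: sample_sparse_depth(dense, n) for n in (50, 500, 2000)}
```

A zero-shot protocol like Robust-DC would then evaluate one fixed checkpoint across all such densities (and unseen datasets) without retraining.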
📄 Abstract
Depth completion (DC) aims to predict a dense depth map from an RGB image and sparse depth observations. Existing DC methods generalize poorly to new datasets or unseen sparse depth patterns, limiting their practical applications. We propose OMNI-DC, a highly robust DC model that generalizes well across various scenarios. Our method incorporates a novel multi-resolution depth integration layer and a probability-based loss, enabling it to handle sparse depth maps of varying densities. Moreover, we train OMNI-DC on a mixture of synthetic datasets with a scale normalization technique. To evaluate our model, we establish a new evaluation protocol named Robust-DC for zero-shot testing under various sparse depth patterns. Experimental results on Robust-DC and conventional benchmarks show that OMNI-DC significantly outperforms the previous state of the art. The checkpoints, training code, and evaluations are available at https://github.com/princeton-vl/OMNI-DC.
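The abstract mentions training on mixed synthetic datasets with a scale normalization technique but does not spell it out. One plausible reading, sketched below under that assumption (this is not necessarily the paper's exact formulation), is to divide each depth map by a robust per-sample statistic such as the median of its valid observations, so that indoor metric scales (meters) and outdoor scales (tens of meters) become comparable during training.

```python
import numpy as np

def normalize_depth_scale(sparse_depth):
    """Normalize a depth map by the median of its valid (nonzero) entries,
    returning the normalized map and the scale factor. A minimal sketch of
    per-sample scale normalization; the paper's actual technique may differ."""
    valid = sparse_depth > 0
    scale = float(np.median(sparse_depth[valid]))
    normalized = np.where(valid, sparse_depth / scale, 0.0)
    return normalized, scale

# Two maps with very different metric scales normalize to the same values.
indoor = np.array([[0.0, 2.0], [4.0, 0.0]])     # e.g. NYUv2-like scale
outdoor = np.array([[0.0, 20.0], [40.0, 0.0]])  # e.g. KITTI-like scale
n_in, s_in = normalize_depth_scale(indoor)
n_out, s_out = normalize_depth_scale(outdoor)
```

At inference, the predicted dense depth would be multiplied back by the stored scale to recover metric depth.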