🤖 AI Summary
To address the limitations in RGB-D fusion caused by sparse, irregular, and ambiguous depth maps, this paper proposes a degradation-aware depth enhancement paradigm that reformulates depth completion as a selective high-frequency compensation task. Methodologically, we first generate an initial coarse depth map via non-CNN-based sparse-to-dense interpolation. Second, we introduce a self-supervised degradation modeling module that implicitly learns RGB-guided, edge-adaptive degradation patterns. Third, we design a multimodal conditional Mamba architecture that dynamically generates state parameters to capture global high-frequency interactions. To the best of our knowledge, this is the first work to integrate explicit degradation modeling with the Mamba architecture for depth enhancement. Our approach achieves state-of-the-art performance on four benchmark datasets—NYUv2, DIML, SUN RGB-D, and TOFDC—demonstrating significant improvements in depth completion accuracy and structural fidelity.
📝 Abstract
In this paper, we introduce the Selective Image Guided Network (SigNet), a novel degradation-aware framework that transforms depth completion into depth enhancement for the first time. Moving beyond direct completion using convolutional neural networks (CNNs), SigNet initially densifies sparse depth data through non-CNN densification tools to obtain a coarse yet dense depth map. This approach eliminates the mismatch and ambiguity caused by direct convolution over irregularly sampled sparse data. Subsequently, SigNet redefines completion as enhancement, establishing a self-supervised degradation bridge between the coarse depth and the targeted dense depth for effective RGB-D fusion. To achieve this, SigNet leverages this implicit degradation to adaptively select high-frequency components (e.g., edges) of RGB data to compensate for the coarse depth. The degradation is further integrated into a multimodal conditional Mamba, which dynamically generates the state parameters to enable efficient global high-frequency information interaction. We conduct extensive experiments on the NYUv2, DIML, SUN RGB-D, and TOFDC datasets, demonstrating the state-of-the-art (SOTA) performance of SigNet.
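The two-stage idea (non-CNN densification, then selective high-frequency compensation from RGB) can be sketched with a toy NumPy example. This is purely illustrative and not the paper's implementation: the nearest-neighbor fill, Laplacian filter, and fixed `weight` below stand in for SigNet's actual densification tool, RGB high-frequency extraction, and learned edge-adaptive degradation gate.

```python
import numpy as np

def densify_nearest(sparse):
    """Non-CNN densification stand-in: fill each missing pixel (NaN)
    with the nearest valid depth sample (brute force; fine for a toy grid)."""
    h, w = sparse.shape
    valid = np.argwhere(~np.isnan(sparse))          # (N, 2) coords of samples
    out = sparse.copy()
    for r in range(h):
        for c in range(w):
            if np.isnan(out[r, c]):
                d2 = ((valid - (r, c)) ** 2).sum(axis=1)
                vr, vc = valid[d2.argmin()]
                out[r, c] = sparse[vr, vc]
    return out

def high_freq(img):
    """High-frequency (edge) proxy via a 4-neighbor Laplacian."""
    hf = np.zeros_like(img)
    hf[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                      - img[:-2, 1:-1] - img[2:, 1:-1]
                      - img[1:-1, :-2] - img[1:-1, 2:])
    return hf

def enhance(coarse, guide, weight=0.5):
    """Compensate the coarse depth with guide-image high frequencies,
    gated here by a fixed scalar (SigNet instead learns an adaptive gate)."""
    return coarse + weight * high_freq(guide)

# Toy example: a smooth depth plane sampled at ~30% of pixels.
rng = np.random.default_rng(0)
gt = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
sparse = np.where(rng.random((8, 8)) < 0.3, gt, np.nan)
sparse[0, 0] = gt[0, 0]                              # guarantee one valid sample
coarse = densify_nearest(sparse)                     # dense but blocky
enhanced = enhance(coarse, gt)                       # edges re-injected from guide
```

In the full method, the scalar `weight` is replaced by the self-supervised degradation estimate, and the global interaction between depth and RGB high frequencies is carried out by the conditional Mamba rather than a local filter.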