🤖 AI Summary
To address the robustness degradation of RGB-D semantic segmentation caused by noisy depth maps, we propose the first generative RGB-D segmentation framework based on diffusion models. Methodologically: (1) we design a depth-specific deformable attention mechanism that explicitly models invalid depth regions and geometric uncertainty; (2) we construct a multimodal feature fusion architecture that jointly optimizes RGB features and diffusion-driven depth representations. Our key contributions are threefold: (i) the first integration of diffusion probabilistic models into RGB-D segmentation, effectively unifying generative priors with discriminative capability; (ii) state-of-the-art performance on the NYUv2 and SUN-RGBD benchmarks, with particularly significant gains on challenging subsets containing abundant missing or noisy depth data; and (iii) substantially improved training efficiency compared to mainstream discriminative approaches.
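The depth-specific deformable attention described above can be illustrated with a minimal single-query sketch: each query predicts sampling offsets around its reference point, bilinearly samples the depth feature map, and down-weights samples that land in invalid (missing-depth) regions before the softmax. This is an illustrative NumPy sketch under our own assumptions, not the paper's actual implementation; all function and parameter names are hypothetical.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample feat (H, W, C) at continuous coordinates (y, x)."""
    H, W, _ = feat.shape
    y, x = np.clip(y, 0, H - 1), np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def depth_deformable_attention(feat, valid_mask, ref, offsets, logits):
    """One query of depth-aware deformable attention (hypothetical sketch).

    feat:       (H, W, C) depth feature map
    valid_mask: (H, W) 1.0 where depth is valid, 0.0 where missing
    ref:        (2,) reference point (y, x) of the query
    offsets:    (K, 2) predicted sampling offsets
    logits:     (K,) predicted attention logits
    """
    K = offsets.shape[0]
    # Sample features and the validity mask at the offset locations.
    samples = np.stack([bilinear_sample(feat, ref[0] + offsets[k, 0],
                                        ref[1] + offsets[k, 1])
                        for k in range(K)])
    validity = np.array([bilinear_sample(valid_mask[..., None],
                                         ref[0] + offsets[k, 0],
                                         ref[1] + offsets[k, 1])[0]
                         for k in range(K)])
    # Suppress samples falling in invalid-depth regions before the softmax.
    masked = logits + np.log(validity + 1e-6)
    w = np.exp(masked - masked.max())
    w = w / w.sum()
    return (w[:, None] * samples).sum(axis=0)
```

In a full model this would run per head and per scale with learned offset/logit projections; the sketch only shows how an explicit validity mask lets attention route around holes in the depth measurement.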
📝 Abstract
Vision-based perception and reasoning are essential for scene understanding in any autonomous system. RGB and depth images are commonly used to capture both the semantic and geometric features of the environment. Developing methods to reliably interpret this data is critical for real-world applications, where noisy measurements are often unavoidable. In this work, we introduce a diffusion-based framework for RGB-D semantic segmentation. Additionally, we demonstrate that using a Deformable Attention Transformer as the encoder for depth images effectively captures the characteristics of invalid regions in depth measurements. Our generative framework shows a greater capacity to model the underlying distribution of RGB-D images, achieving robust performance in challenging scenarios with significantly less training time than discriminative methods require. Experimental results indicate that our approach achieves state-of-the-art performance on both the NYUv2 and SUN-RGBD datasets overall, and especially on their most challenging images. Our project page will be available at https://diffusionmms.github.io/