🤖 AI Summary
To address feature misalignment, inefficient depth utilization, and irregular segmentation patches in RGB-D semantic segmentation, this paper proposes a texture-guided late-fusion framework. First, depth maps are encoded into surface normal maps to enhance 3D geometric representation and projection matrices are optimized to mitigate positional ambiguity. Second, a texture-feature-guided geometric feature injection mechanism is introduced. Third, a semantic-spatial joint-weighted graph is constructed, integrating KL-divergence-based hard pixel mining and graph convolutional networks (GCNs) for patch regularization. Key contributions include: (i) the first incorporation of normal map encoding into RGB-D segmentation preprocessing; and (ii) a novel semantic-geometric co-modeling paradigm with GNN-driven late fusion. Extensive experiments on NYU-DepthV2 and SUN RGB-D demonstrate significant mIoU improvements, effective suppression of irregular patches, and substantial mitigation of feature misalignment and positional ambiguity.
📝 Abstract
Most existing RGB-D semantic segmentation methods focus on feature-level fusion, relying on complex cross-modality and cross-scale fusion modules. However, these methods may cause misalignment problems during feature fusion and counter-intuitive patches in the segmentation results. Inspired by the popular pixel-node-pixel pipeline, we propose to 1) fuse features from the two modalities in a late-fusion style, during which geometric feature injection is guided by a texture feature prior; and 2) employ Graph Neural Networks (GNNs) on the fused features to alleviate the emergence of irregular patches by inferring patch relationships. At the 3D feature extraction stage, we argue that traditional CNNs are not efficient enough for depth maps, so we encode each depth map into a normal map, from which CNNs can easily extract object surface tendencies. At the projection matrix generation stage, we identify Biased-Assignment and Ambiguous-Locality issues in the original pipeline. Therefore, we propose to 1) adopt the Kullback-Leibler loss to ensure that no important pixel features are missed, which can be viewed as a hard pixel mining process; and 2) connect regions that are close to each other in both Euclidean and semantic space with larger edge weights, so that location information is taken into account. Extensive experiments on two public datasets, NYU-DepthV2 and SUN RGB-D, show that our approach consistently boosts the performance of RGB-D semantic segmentation.
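The depth-to-normal encoding mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes normals are estimated from depth gradients via finite differences, with `fx`/`fy` standing in for (assumed) camera focal lengths.

```python
import numpy as np

def depth_to_normals(depth, fx=1.0, fy=1.0):
    """Encode a depth map (H, W) as a surface normal map (H, W, 3).

    Sketch only: normals come from central-difference depth gradients;
    the paper's exact encoding and calibration handling may differ.
    """
    dz_dx = np.gradient(depth, axis=1) * fx  # depth change along x
    dz_dy = np.gradient(depth, axis=0) * fy  # depth change along y
    # The surface z = f(x, y) has normal direction (-dz/dx, -dz/dy, 1).
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)  # unit-length normals

# A flat (constant-depth) plane yields normals facing the camera: (0, 0, 1).
flat_plane = np.full((4, 4), 2.0)
normals = depth_to_normals(flat_plane)
```

Feeding such a 3-channel normal map to a CNN exposes surface orientation directly, rather than asking the network to infer it from raw depth values.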