🤖 AI Summary
To address degraded semantic segmentation performance in autonomous driving under challenging conditions such as occlusion, this paper proposes the first multi-modal semantic segmentation framework integrating light-field images and LiDAR point clouds. To overcome key challenges, including large inter-modal density disparities, limited viewpoint diversity, and difficulty in occlusion reasoning, the authors introduce the first paired multi-modal dataset and design two novel modules: a feature completion module that differentiably reconstructs point-cloud feature maps, and a depth-aware attention module that jointly enhances cross-modal representations of images and point clouds. Experiments demonstrate substantially improved segmentation robustness under occlusion: the method achieves +1.71 mIoU over image-only baselines and +2.38 mIoU over point-cloud-only baselines, validating the effective integration of complementary cues from the light-field and LiDAR modalities.
📝 Abstract
Semantic segmentation serves as a cornerstone of scene understanding in autonomous driving but continues to face significant challenges under complex conditions such as occlusion. Light field and LiDAR modalities provide complementary visual and spatial cues that are beneficial for robust perception; however, their effective integration is hindered by limited viewpoint diversity and inherent modality discrepancies. To address these challenges, the first multimodal semantic segmentation dataset integrating light field data and point cloud data is proposed. Based on this dataset, we propose a multi-modal light field point-cloud fusion segmentation network (Mlpfseg), incorporating feature completion and depth perception to segment both camera images and LiDAR point clouds simultaneously. The feature completion module addresses the density mismatch between point clouds and image pixels by performing differential reconstruction of point-cloud feature maps, enhancing the fusion of these modalities. The depth perception module improves the segmentation of occluded objects by reinforcing attention scores for better occlusion awareness. Our method outperforms image-only segmentation by 1.71 Mean Intersection over Union (mIoU) and point-cloud-only segmentation by 2.38 mIoU, demonstrating its effectiveness.
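One way to read the "reinforcing attention scores" idea is as an additive bias on cross-modal attention that favors query/key pairs lying at similar depths. A minimal numpy sketch follows; the Gaussian form of the bias, the `sigma` parameter, and all function names are our assumptions for illustration, not the paper's actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depth_aware_attention(q, k, v, depth_q, depth_k, sigma=1.0):
    """Toy cross-modal attention whose scores are boosted for
    query/key pairs at similar depths (hypothetical formulation)."""
    # standard scaled dot-product scores, shape (Nq, Nk)
    scores = q @ k.T / np.sqrt(q.shape[1])
    # Gaussian depth-similarity bias: 0 when depths match, negative otherwise
    depth_bias = -((depth_q[:, None] - depth_k[None, :]) ** 2) / (2.0 * sigma ** 2)
    weights = softmax(scores + depth_bias, axis=-1)
    # fused output features, shape (Nq, Dv)
    return weights @ v

# tiny demo: 4 image-pixel queries attending over 6 projected LiDAR keys
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(6, 8))
v = rng.normal(size=(6, 8))
out = depth_aware_attention(q, k, v,
                            depth_q=np.array([1.0, 1.0, 5.0, 5.0]),
                            depth_k=np.linspace(1.0, 6.0, 6))
```

With equal depths everywhere the bias vanishes and the sketch reduces to plain scaled dot-product attention; with differing depths, occluder and occludee tokens at different ranges attend to each other less.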