🤖 AI Summary
To address the limited robustness of camera-radar fusion semantic segmentation under adverse weather, this paper proposes a diffusion-based cross-modal enhancement method built around three contributions: (1) leveraging radar point clouds as point prompts for the Segment Anything Model (SAM) to generate high-fidelity pseudo-masks, reportedly the first such application; (2) a noise reduction unit designed to mitigate sparse and erroneous radar measurements; and (3) a zero-shot image inpainting mechanism for weather-robust visual completion. By integrating camera-captured texture detail with radar's all-weather perception, the method improves segmentation accuracy and generalization under rain, fog, and other degradations. On the WaterScenes dataset, it raises mean Intersection-over-Union (mIoU) by 2.63% over a camera-only baseline and by 1.48% over the authors' own camera-radar fusion architecture, demonstrating gains in both performance and robustness.
📝 Abstract
Segmenting objects in an environment is a crucial task for autonomous driving and robotics, as it enables each agent to better understand its surroundings. Although camera sensors provide rich visual detail, they are vulnerable to adverse weather conditions. Radar sensors, in contrast, remain robust under such conditions but often produce sparse and noisy data. A promising approach is therefore to fuse information from both sensors. In this work, we propose a novel framework that enhances camera-only baselines by integrating a diffusion model into a camera-radar fusion architecture. We leverage radar point features to create pseudo-masks with the Segment Anything Model (SAM), treating the projected radar points as point prompts. We further propose a noise reduction unit to denoise these pseudo-masks, which are then used to generate inpainted images that complete the missing information in the original images. Our method improves the camera-only segmentation baseline by 2.63% mIoU and our camera-radar fusion architecture by 1.48% mIoU on the WaterScenes dataset, demonstrating the effectiveness of our approach to camera-radar fusion semantic segmentation under adverse weather conditions.
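A minimal sketch of the projection step implied by the abstract — turning radar points into pixel coordinates that could serve as SAM point prompts. The intrinsic matrix `K`, the radar-to-camera transform `T`, and the filtering of behind-camera and out-of-image points are illustrative assumptions; the paper's actual calibration and its SAM predictor call are not specified here.

```python
import numpy as np

def project_radar_to_image(points_3d, K, T_radar_to_cam, image_hw):
    """Project radar points (N, 3) in the radar frame into pixel coordinates.

    Returns (M, 2) integer pixel coords for points that land inside the image,
    suitable as candidate point prompts (foreground label) for SAM.
    """
    n = points_3d.shape[0]
    homog = np.hstack([points_3d, np.ones((n, 1))])        # (N, 4) homogeneous
    cam = (T_radar_to_cam @ homog.T).T[:, :3]              # radar -> camera frame
    cam = cam[cam[:, 2] > 0]                               # drop points behind the camera
    uv = (K @ cam.T).T                                     # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]                            # perspective divide
    h, w = image_hw
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside].astype(int)

# Hypothetical calibration: simple intrinsics, identity extrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
radar_pts = np.array([[1.0, 0.5, 10.0],    # in front of the camera
                      [0.0, 0.0, -5.0]])   # behind the camera -> filtered out
prompts = project_radar_to_image(radar_pts, K, T, (480, 640))
# Each row of `prompts` would be passed as a point prompt (label = 1) to SAM.
```

In the full pipeline, the resulting pseudo-masks would still pass through the paper's noise reduction unit before driving the inpainting stage, since raw radar returns include spurious points.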