RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited robustness of radar-camera fusion for 3D object detection under adverse weather and varying illumination, this paper proposes RobuRCDet, a noise-resilient multimodal detection framework operating in bird's eye view (BEV). The method introduces three key components: (1) a 3D Gaussian Expansion (3DGE) module that explicitly models the geometric structure and positional uncertainty of radar point clouds; (2) an RCS- and velocity-guided deformable kernel map that improves radar feature discriminability; and (3) a weather-adaptive fusion module that dynamically weights BEV-level radar and image features according to camera signal confidence. On the nuScenes benchmark, the approach achieves competitive, robust performance across challenging conditions, including rain, fog, and low-illumination scenes, while maintaining strong accuracy under normal conditions, improving detection stability and generalization in complex real-world environments.
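The core idea of the 3DGE module, as described above, is to expand each sparse, position-ambiguous radar point into a Gaussian whose spread is guided by RCS and velocity priors. A minimal NumPy sketch of that idea on a BEV grid is below; the variance heuristic, grid parameters, and function name are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

def gaussian_expand_bev(points, rcs, velocity, grid_size=128, cell=0.5,
                        base_sigma=1.0):
    """Splat each radar point onto a BEV occupancy grid as a 2D Gaussian.

    Heuristic (assumed, not from the paper): spread shrinks with RCS
    (stronger return -> more confident position) and grows with speed
    (moving targets -> larger positional ambiguity).
    """
    bev = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell / 2.0  # grid covers [-half, half) meters
    for (x, y), r, v in zip(points, rcs, velocity):
        # Illustrative variance model only.
        sigma = base_sigma * (1.0 + 0.1 * abs(v)) / (1.0 + 0.05 * max(r, 0.0))
        cx = int((x + half) / cell)
        cy = int((y + half) / cell)
        k = max(1, int(3 * sigma / cell))  # kernel radius in cells (3-sigma)
        for i in range(max(0, cx - k), min(grid_size, cx + k + 1)):
            for j in range(max(0, cy - k), min(grid_size, cy + k + 1)):
                dx = (i - cx) * cell
                dy = (j - cy) * cell
                bev[i, j] += np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
    return bev
```

In the paper this expansion feeds a learned deformable kernel map rather than a fixed isotropic Gaussian; the sketch only shows how point-wise priors can soften hard radar hits into uncertainty-aware BEV features.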

📝 Abstract
While recent low-cost radar-camera approaches have shown promising results in multi-modal 3D object detection, both sensors face challenges from environmental and intrinsic disturbances. Poor lighting or adverse weather conditions degrade camera performance, while radar suffers from noise and positional ambiguity. Achieving robust radar-camera 3D object detection requires consistent performance across varying conditions, a topic that has not yet been fully explored. In this work, we first conduct a systematic analysis of robustness in radar-camera detection on five kinds of noises and propose RobuRCDet, a robust object detection model in BEV. Specifically, we design a 3D Gaussian Expansion (3DGE) module to mitigate inaccuracies in radar points, including position, Radar Cross-Section (RCS), and velocity. The 3DGE uses RCS and velocity priors to generate a deformable kernel map and variance for kernel size adjustment and value distribution. Additionally, we introduce a weather-adaptive fusion module, which adaptively fuses radar and camera features based on camera signal confidence. Extensive experiments on the popular benchmark, nuScenes, show that our model achieves competitive results in regular and noisy conditions.
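The weather-adaptive fusion described in the abstract amounts to weighting radar and camera BEV features by how trustworthy the camera signal is. A minimal sketch of that weighting follows; in the paper the confidence is presumably predicted from image features, whereas here it is simply passed in as a scalar, and the function name is an assumption.

```python
import numpy as np

def adaptive_fuse(cam_feat, radar_feat, cam_confidence):
    """Convex combination of same-shaped BEV feature maps.

    Low camera confidence (e.g. rain, fog, darkness) shifts weight
    toward radar; high confidence leans on the camera branch.
    """
    w = float(np.clip(cam_confidence, 0.0, 1.0))
    return w * cam_feat + (1.0 - w) * radar_feat
```

For example, with `cam_confidence=0.25` (a heavily degraded image), the fused map draws 75% of its signal from radar features.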
Problem

Research questions and friction points this paper is trying to address.

Enhance radar-camera fusion robustness
Mitigate radar inaccuracies in 3D detection
Adapt fusion to adverse weather conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian Expansion module
Weather-adaptive fusion module
Robust radar-camera 3D detection