🤖 AI Summary
This work proposes RadarXFormer, a 3D object detection framework that fuses raw 4D millimeter-wave radar spectra with RGB images to counter the performance degradation that camera- and LiDAR-based perception suffers under adverse weather and lighting conditions. Rather than converting radar returns into sparse point clouds, the method operates directly on the raw radar spectra, constructing a compact representation that preserves complete 3D geometric information. A cross-modal Transformer mechanism integrates multi-scale 3D spherical radar features with 2D image features, enhancing spatial consistency while reducing data redundancy. Evaluated on the K-Radar dataset, the approach significantly improves detection accuracy and robustness in complex environments without compromising real-time inference capability.
📝 Abstract
Reliable perception is essential for autonomous driving systems to operate safely under diverse real-world traffic conditions. However, camera- and LiDAR-based perception systems suffer from performance degradation under adverse weather and lighting conditions, limiting their robustness and large-scale deployment in intelligent transportation systems. Radar-vision fusion provides a promising alternative by combining the environmental robustness and cost efficiency of millimeter-wave (mmWave) radar with the rich semantic information captured by cameras. Nevertheless, conventional 3D radar measurements lack height resolution and remain highly sparse, while emerging 4D mmWave radar introduces elevation information but also brings challenges such as signal noise and large data volume. To address these issues, this paper proposes RadarXFormer, a 3D object detection framework that enables efficient cross-modal fusion between 4D radar spectra and RGB images. Instead of relying on sparse radar point clouds, RadarXFormer directly leverages raw radar spectra and constructs an efficient 3D representation that reduces data volume while preserving complete 3D spatial information. The "X" highlights the proposed cross-dimension (3D-2D) fusion mechanism, in which multi-scale 3D spherical radar feature cubes are fused with complementary 2D image feature maps. Experiments on the K-Radar dataset demonstrate improved detection accuracy and robustness under challenging conditions while maintaining real-time inference capability.
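The abstract's "X" fusion step attends from 3D radar feature tokens to 2D image feature tokens. As a rough illustration only, here is a minimal single-head cross-attention sketch in NumPy: the radar feature cube is flattened into query tokens, the image feature map into key/value tokens, and the attended image features are added back residually. All shapes, the single-scale/single-head setup, and the helper names (`cross_attention`, `softmax`) are hypothetical simplifications; the paper's actual multi-scale, multi-head architecture is not specified here.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(radar_tokens, image_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: 3D radar tokens query 2D image tokens.

    radar_tokens: (Nr, d) -- flattened 3D radar feature cube (queries)
    image_tokens: (Ni, d) -- flattened 2D image feature map (keys/values)
    """
    Q = radar_tokens @ Wq                            # (Nr, d)
    K = image_tokens @ Wk                            # (Ni, d)
    V = image_tokens @ Wv                            # (Ni, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (Nr, Ni) attention weights
    return radar_tokens + attn @ V                   # residual fusion, shape (Nr, d)

rng = np.random.default_rng(0)
d = 32
# Hypothetical token counts: a 4x8x16-voxel radar cube and a 12x20 image grid.
radar = rng.standard_normal((4 * 8 * 16, d))
image = rng.standard_normal((12 * 20, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d ** -0.5 for _ in range(3))
fused = cross_attention(radar, image, Wq, Wk, Wv)
print(fused.shape)  # (512, 32): one fused feature per radar voxel
```

Each radar voxel token ends up enriched with image semantics while the output keeps the radar grid's 3D layout, which is the structural point of a cross-dimension (3D-2D) fusion: geometry stays on the radar side, appearance flows in from the camera side.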