RadarXFormer: Robust Object Detection via Cross-Dimension Fusion of 4D Radar Spectra and Images for Autonomous Driving

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a 3D object detection framework that fuses raw 4D millimeter-wave radar spectra with RGB images to address the performance degradation of camera and LiDAR perception under adverse weather and lighting conditions. Rather than relying on sparse radar point clouds, the method leverages raw radar spectra directly to construct a compact representation that preserves complete 3D geometric information. A cross-modal Transformer mechanism integrates multi-scale 3D spherical radar features with 2D image features, enhancing spatial consistency while reducing data redundancy. Evaluated on the K-Radar dataset, the approach significantly improves detection accuracy and robustness in complex environments without compromising real-time inference capability.

📝 Abstract
Reliable perception is essential for autonomous driving systems to operate safely under diverse real-world traffic conditions. However, camera- and LiDAR-based perception systems suffer from performance degradation under adverse weather and lighting conditions, limiting their robustness and large-scale deployment in intelligent transportation systems. Radar-vision fusion provides a promising alternative by combining the environmental robustness and cost efficiency of millimeter-wave (mmWave) radar with the rich semantic information captured by cameras. Nevertheless, conventional 3D radar measurements lack height resolution and remain highly sparse, while emerging 4D mmWave radar introduces elevation information but also brings challenges such as signal noise and large data volume. To address these issues, this paper proposes RadarXFormer, a 3D object detection framework that enables efficient cross-modal fusion between 4D radar spectra and RGB images. Instead of relying on sparse radar point clouds, RadarXFormer directly leverages raw radar spectra and constructs an efficient 3D representation that reduces data volume while preserving complete 3D spatial information. The "X" highlights the proposed cross-dimension (3D-2D) fusion mechanism, in which multi-scale 3D spherical radar feature cubes are fused with complementary 2D image feature maps. Experiments on the K-Radar dataset demonstrate improved detection accuracy and robustness under challenging conditions while maintaining real-time inference capability.
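The cross-dimension fusion the abstract describes can be sketched as cross-attention in which flattened tokens from the 3D spherical radar feature cube query tokens from the 2D image feature map. The sketch below is a minimal illustration of that idea, not the paper's actual architecture: the projection matrices are randomly initialized stand-ins for learned weights, and all shapes and names (`cross_attention`, the 8x4x4 toy cube) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(radar_tokens, image_tokens, d_k=32, seed=0):
    """Radar tokens (queries) attend to image tokens (keys/values).

    radar_tokens: (N_r, C) flattened 3D spherical radar feature cube
    image_tokens: (N_i, C) flattened 2D image feature map
    Returns fused radar tokens of shape (N_r, C).
    """
    rng = np.random.default_rng(seed)
    C = radar_tokens.shape[1]
    # Stand-ins for learned projection weights (randomly initialized here).
    W_q = rng.standard_normal((C, d_k)) / np.sqrt(C)
    W_k = rng.standard_normal((C, d_k)) / np.sqrt(C)
    W_v = rng.standard_normal((C, C)) / np.sqrt(C)

    Q = radar_tokens @ W_q                    # (N_r, d_k)
    K = image_tokens @ W_k                    # (N_i, d_k)
    V = image_tokens @ W_v                    # (N_i, C)

    attn = softmax(Q @ K.T / np.sqrt(d_k))    # (N_r, N_i)
    # Residual connection: radar tokens keep their 3D geometry and are
    # enriched with 2D semantic context drawn from the image tokens.
    return radar_tokens + attn @ V

# Toy shapes: an 8x4x4 spherical cube (range x azimuth x elevation)
# flattened to 128 radar tokens; a 16x16 image map flattened to 256 tokens.
radar = np.random.default_rng(1).standard_normal((8 * 4 * 4, 64))
image = np.random.default_rng(2).standard_normal((16 * 16, 64))
fused = cross_attention(radar, image)
print(fused.shape)  # (128, 64)
```

In this reading, the output stays in the radar's 3D token layout, so downstream detection heads can operate on geometry-preserving features augmented with image semantics; the real system would additionally apply this at multiple feature scales.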
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
4D radar
object detection
radar-vision fusion
robust perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

4D mmWave radar
cross-dimension fusion
radar-vision fusion
3D object detection
raw radar spectra
Yue Sun
Global Institute of Future Technology, Shanghai Jiao Tong University, Shanghai, 200240, China
Yeqiang Qian
Shanghai Jiao Tong University
intelligent vehicle, computer vision
Zhe Wang
SAIC GM Wuling Automobile Company Co., Ltd., Liuzhou, 545007, China; Guangxi Laboratory of New Energy Automobile, Liuzhou, 545007, China
Tianhui Li
SAIC GM Wuling Automobile Company Co., Ltd., Liuzhou, 545007, China; Guangxi Laboratory of New Energy Automobile, Liuzhou, 545007, China
Chunxiang Wang
School of Automation and Intelligent Sensing, Shanghai Jiao Tong University, Shanghai, 200240, China; Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai, 200240, China
Ming Yang
State Key Laboratory of Inorganic Synthesis and Preparative Chemistry, Jilin University
self-assembly, nanocomposites, nanostructures, nanointerfaces