CORENet: Cross-Modal 4D Radar Denoising Network with LiDAR Supervision for Autonomous Driving

📅 2025-08-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
4D radar point clouds suffer from extreme sparsity and high noise levels, severely degrading perception robustness for autonomous driving under adverse weather conditions. To address this, we propose a LiDAR-guided cross-modal denoising framework: during training, high-fidelity LiDAR point clouds serve as supervision signals, while at inference, the model operates end-to-end on radar-only input, enabling plug-and-play deployment. We design a cross-modal neural network that jointly models radar noise distributions and LiDAR geometric priors, thereby enhancing noise pattern discrimination and feature reconstruction. Evaluated on the high-noise Dual-Radar dataset, our method significantly outperforms state-of-the-art denoising and detection baselines, achieving an 8.2% mAP improvement in heavy rain and dense fog scenarios. Moreover, it seamlessly integrates with existing voxel-based 3D detection pipelines without architectural modification, establishing a new paradigm for robust radar perception.

๐Ÿ“ Abstract
4D radar-based object detection has garnered great attention for its robustness in adverse weather conditions and its capacity to deliver rich spatial information across diverse driving scenarios. Nevertheless, the sparse and noisy nature of 4D radar point clouds poses substantial challenges for effective perception. To address these limitations, we present CORENet, a novel cross-modal denoising framework that leverages LiDAR supervision to identify noise patterns and extract discriminative features from raw 4D radar data. Designed as a plug-and-play architecture, our solution integrates seamlessly into voxel-based detection frameworks without modifying existing pipelines. Notably, the proposed method uses LiDAR data only for cross-modal supervision during training while maintaining full radar-only operation during inference. Extensive evaluation on the challenging Dual-Radar dataset, which is characterized by elevated noise levels, demonstrates the effectiveness of our framework in enhancing detection robustness. Comprehensive experiments validate that CORENet achieves superior performance compared to existing mainstream approaches.
Problem

Research questions and friction points this paper is trying to address.

Denoise sparse 4D radar point clouds
Improve radar perception robustness
Enable radar-only inference with LiDAR supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal denoising with LiDAR supervision
Plug-and-play voxel integration architecture
Training supervision without inference dependency
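The "training supervision without inference dependency" idea above can be sketched in a few lines: a denoiser scores radar points, LiDAR is consulted only to build training labels, and the radar-only path feeds an unmodified voxel-based detector. This is a minimal, hypothetical Python sketch; the nearest-LiDAR labeling heuristic, the SNR-based placeholder scorer, and all names are assumptions for illustration, not the authors' implementation.

```python
class PlugAndPlayDenoiser:
    """Sits in front of an unmodified voxel-based detector.

    Training uses LiDAR-derived labels; inference is radar-only.
    (Hypothetical sketch, not the CORENet architecture itself.)
    """

    def __init__(self, keep_threshold=0.5):
        self.keep_threshold = keep_threshold

    def score(self, radar_points):
        # Placeholder per-point "signal" score; in the real framework this
        # would be a learned cross-modal network, not an SNR cutoff.
        return [1.0 if p.get("snr", 0.0) > 10.0 else 0.0 for p in radar_points]

    def denoise(self, radar_points):
        # Inference path: radar-only input, no LiDAR dependency.
        scores = self.score(radar_points)
        return [p for p, s in zip(radar_points, scores) if s >= self.keep_threshold]

    def supervision_loss(self, radar_points, lidar_points, radius=0.5):
        # Train-time path: label a radar point "signal" if any LiDAR point
        # lies within `radius` metres (an assumed cross-modal heuristic).
        def near_lidar(p):
            return any(
                (p["x"] - q["x"]) ** 2
                + (p["y"] - q["y"]) ** 2
                + (p["z"] - q["z"]) ** 2
                <= radius ** 2
                for q in lidar_points
            )

        labels = [1.0 if near_lidar(p) else 0.0 for p in radar_points]
        scores = self.score(radar_points)
        # Mean squared error between predicted scores and LiDAR-derived labels.
        return sum((s - y) ** 2 for s, y in zip(scores, labels)) / max(len(labels), 1)
```

Because the LiDAR points appear only inside `supervision_loss`, dropping that call at deployment time leaves a purely radar-driven pipeline, which is what makes the design plug-and-play for existing detectors.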
Fuyang Liu (University of Chinese Academy of Sciences)
Jilin Mei (Research Center for Intelligent Computing Systems, Institute of Computing Technology, University of Chinese Academy of Sciences)
Fangyuan Mao (Institute of Computing Technology)
Chen Min (Institute of Computing Technology, Chinese Academy of Sciences, China)
Yan Xing (Beijing Institute of Control Engineering, China)
Yu Hu (Institute of Computing Technology, Chinese Academy of Sciences, China)