L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection

📅 2024-08-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the degradation of 3D object detection under adverse weather, caused by the deterioration of LiDAR point cloud quality, this paper proposes L4DR, a framework that tightly fuses LiDAR with the weather-robust 4D radar sensor. It pioneers early-stage fusion of these complementary modalities via Multi-Modal Encoding and designs the IM² (Inter-Modal and Intra-Modal) parallel backbone to jointly extract cross-modal and intra-modal features. A Multi-Scale Gated Fusion (MSGF) module and a Foreground-Aware Denoising (FAD) module further adaptively mitigate the asymmetric degradation of the heterogeneous sensors in rain and fog. The method improves 3D mean Average Precision (mAP) by up to 20.0% over a LiDAR-only baseline on the fog-simulated VoD dataset and delivers consistent gains on the real-world K-Radar adverse-weather benchmark, demonstrating robustness and generalization across diverse weather conditions.

📝 Abstract
LiDAR-based vision systems are integral for 3D object detection, which is crucial for autonomous navigation. However, they suffer from performance degradation in adverse weather conditions due to the quality deterioration of LiDAR point clouds. Fusing LiDAR with the weather-robust 4D radar sensor is expected to solve this problem. However, the fusion of LiDAR and 4D radar is challenging because they differ significantly in terms of data quality and the degree of degradation in adverse weather. To address these issues, we introduce L4DR, a weather-robust 3D object detection method that effectively achieves LiDAR and 4D radar fusion. Our L4DR includes Multi-Modal Encoding (MME) and Foreground-Aware Denoising (FAD) techniques to reconcile sensor gaps, which is the first exploration of the complementarity of early fusion between LiDAR and 4D radar. Additionally, we design an Inter-Modal and Intra-Modal (IM²) parallel feature extraction backbone coupled with a Multi-Scale Gated Fusion (MSGF) module to counteract the varying degrees of sensor degradation under adverse weather conditions. Experimental evaluation on the VoD dataset with simulated fog shows that L4DR is more adaptable to changing weather conditions. It delivers a significant performance increase across fog levels, improving the 3D mAP by up to 20.0% over the traditional LiDAR-only approach. Moreover, the results on the K-Radar dataset validate the consistent performance improvement of L4DR in real-world adverse weather conditions.
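The paper releases no implementation details in this listing, but the gated-fusion idea behind MSGF can be illustrated with a minimal single-scale sketch: a learned sigmoid gate blends LiDAR and radar features per channel, so the network can lean on radar channels when LiDAR is degraded. All names, dimensions, and the hand-picked weights below are illustrative assumptions, not the paper's actual MSGF module, which operates on multi-scale feature maps.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(lidar_feat, radar_feat, gate_weights, gate_bias):
    """Per-channel sigmoid-gated blend of two feature vectors.

    For each channel c, a gate g_c is computed from that channel's
    (lidar, radar) pair, and the fused feature is
        g_c * lidar_c + (1 - g_c) * radar_c.
    Illustrative single-scale toy; not the paper's MSGF.
    """
    fused = []
    for c, (l, r) in enumerate(zip(lidar_feat, radar_feat)):
        z = gate_weights[c][0] * l + gate_weights[c][1] * r + gate_bias[c]
        g = sigmoid(z)
        fused.append(g * l + (1.0 - g) * r)
    return fused

# Toy example: 3-channel features with hand-chosen gate parameters
# that favor whichever modality has the stronger response.
lidar = [0.9, 0.1, 0.5]
radar = [0.2, 0.8, 0.5]
w = [[1.0, -1.0]] * 3  # gate opens toward LiDAR when it dominates
b = [0.0] * 3
print(gated_fuse(lidar, radar, w, b))
```

Because the gate is a convex weight, each fused channel stays between the two modality responses; with equal inputs (channel 2) the gate is exactly 0.5 and the output matches both.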
Problem

Research questions and friction points this paper is trying to address.

3D object detection degrades under adverse weather as LiDAR point cloud quality deteriorates
LiDAR and 4D radar must be fused effectively to gain weather robustness
The two sensors differ sharply in data quality and in their degree of degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

First exploration of early-stage LiDAR and 4D radar fusion
Multi-Modal Encoding (MME) with Foreground-Aware Denoising (FAD)
IM² parallel backbone with a Multi-Scale Gated Fusion (MSGF) module
👥 Authors
Xun Huang (unknown affiliation)
Ziyu Xu (Xiamen University)
Hai Wu (The University of Hong Kong)
Jinlong Wang (Xiamen University)
Qiming Xia (Xiamen University)
Yan Xia (Technische Universität München)
Jonathan Li (University of Waterloo)
Kyle Gao (University of Waterloo)
Chenglu Wen (Professor, Xiamen University; 3D vision, point clouds, mobile mapping, robotics)
Cheng Wang (Xiamen University)