LRC-WeatherNet: LiDAR, RADAR, and Camera Fusion Network for Real-time Weather-type Classification in Autonomous Driving

📅 2026-03-23
🤖 AI Summary
This work addresses the significant degradation of autonomous driving perception systems under adverse weather conditions—such as rain, fog, and snow—where single-sensor modalities struggle to reliably recognize complex environmental states. To this end, the paper proposes LRC-WeatherNet, the first real-time weather classification framework that fuses LiDAR, RADAR, and camera data in a tri-modal setting. The approach integrates early fusion via a unified bird’s-eye-view representation with mid-level feature fusion governed by a gating mechanism, dynamically adapting to the varying reliability of each sensor across different weather conditions. Experiments on the MSU-4S dataset demonstrate that LRC-WeatherNet consistently outperforms single-modality baselines across all nine weather classes, achieving both high classification accuracy and computational efficiency.

📝 Abstract
Autonomous vehicles face major perception and navigation challenges in adverse weather such as rain, fog, and snow, which degrade the performance of LiDAR, RADAR, and RGB camera sensors. While each sensor type offers unique strengths, such as RADAR robustness in poor visibility and LiDAR precision in clear conditions, they also suffer distinct limitations when exposed to environmental obstructions. This study proposes LRC-WeatherNet, a novel multi-sensor fusion framework that integrates LiDAR, RADAR, and camera data for real-time classification of weather conditions. By employing both early fusion using a unified Bird's Eye View representation and mid-level gated fusion of modality-specific feature maps, our approach adapts to the varying reliability of each sensor under changing weather. Evaluated on the extensive MSU-4S dataset covering nine weather types, LRC-WeatherNet achieves superior classification performance and computational efficiency, significantly outperforming unimodal baselines in adverse conditions. This work is the first to combine all three modalities for robust, real-time weather classification in autonomous driving. We release our trained models and source code at https://github.com/nouralhudaalbashir/LRC-WeatherNet.
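The abstract mentions mid-level gated fusion of modality-specific feature maps but does not spell out the mechanism. As a rough illustration only (function names, shapes, and the softmax gate are assumptions, not the authors' implementation), a gate can weight each modality's feature map by a predicted reliability score before combining them:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def gated_fusion(features, gate_scores):
    """Fuse modality-specific feature maps with a softmax gate.

    features:    list of (C, H, W) arrays, one per modality
                 (e.g. LiDAR, RADAR, camera).
    gate_scores: raw per-modality reliability scores; in a trained
                 network these would come from a learned gating branch,
                 here they are just given scalars.
    Returns the weighted sum of feature maps and the gate weights.
    """
    weights = softmax(np.asarray(gate_scores, dtype=float))
    fused = sum(w * f for w, f in zip(weights, features))
    return fused, weights

# Toy example: three modalities, each with a 2x2x2 feature map.
feats = [np.full((2, 2, 2), v) for v in (1.0, 2.0, 3.0)]
fused, w = gated_fusion(feats, gate_scores=[0.1, 0.1, 2.0])
```

With a high score for the third modality, its features dominate the fused map, which is the behavior one would want when, say, RADAR is the most reliable sensor in dense fog.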
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
adverse weather
sensor fusion
weather classification
perception robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-sensor fusion
real-time weather classification
LiDAR-RADAR-camera integration
gated feature fusion
Bird's Eye View representation
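The unified Bird's Eye View representation used for early fusion is only named above, not detailed. A minimal sketch of the general idea, rasterizing 3D sensor points into a BEV occupancy grid (ranges, resolution, and function name are hypothetical, not taken from the paper):

```python
import numpy as np

def points_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), res=0.5):
    """Rasterize 3D points of shape (N, 3) into a BEV occupancy grid.

    Illustrative only: the paper's unified BEV encoding for LiDAR,
    RADAR, and camera is not specified here; the 100 m x 100 m extent
    and 0.5 m cell size are made-up defaults.
    """
    width = int((x_range[1] - x_range[0]) / res)
    height = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((height, width), dtype=np.float32)
    # Map metric x/y coordinates to integer cell indices.
    xs = ((points[:, 0] - x_range[0]) / res).astype(int)
    ys = ((points[:, 1] - y_range[0]) / res).astype(int)
    # Drop points outside the grid, then mark occupied cells.
    valid = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
    grid[ys[valid], xs[valid]] = 1.0
    return grid

# Two in-range points and one far outside the grid extent.
pts = np.array([[0.0, 0.0, 1.2], [10.0, -5.0, 0.3], [999.0, 0.0, 0.0]])
bev = points_to_bev(pts)
```

Stacking such per-sensor grids as channels of one tensor is a common way to realize early fusion in a shared top-down frame.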
Nour Alhuda Albashir
Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research, Halmstad, Sweden
Lars Pernickel
Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research, Halmstad, Sweden
Danial Hamoud
Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research, Halmstad, Sweden
Idriss Gouigah
Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research, Halmstad, Sweden
Eren Erdal Aksoy
Associate Professor, Lund University
Semantic Event Chains, Manipulation Actions, Object-Action Relations, Imitation Learning, Semantic