AW-MoE: All-Weather Mixture of Experts for Robust Multi-Modal 3D Object Detection

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of existing 3D object detection methods under adverse weather, which the authors attribute to ignoring distributional shifts across different weather scenarios. To tackle this, they propose AW-MoE, a framework that introduces the Mixture-of-Experts (MoE) mechanism into all-weather multi-modal 3D detection for the first time. AW-MoE features an Image-guided Weather-aware Routing (IWR) module that dynamically activates Weather-Specific Experts (WSEs), coupled with a Unified Dual-Modal Augmentation (UDMA) strategy that disentangles weather-induced interference while preserving scene fidelity. By fusing LiDAR and 4D radar data, the method achieves roughly a 15% improvement over state-of-the-art approaches on a real-world dataset under challenging weather conditions, with negligible inference overhead and seamless compatibility with mainstream detectors, substantially enhancing robustness.

📝 Abstract
Robust 3D object detection under adverse weather conditions is crucial for autonomous driving. However, most existing methods simply combine all weather samples for training while overlooking data distribution discrepancies across different weather scenarios, leading to performance conflicts. To address this issue, we introduce AW-MoE, the first framework to integrate Mixture of Experts (MoE) into weather-robust multi-modal 3D object detection. AW-MoE incorporates Image-guided Weather-aware Routing (IWR), which leverages the superior discriminability of image features across weather conditions, and their invariance to scene variations, for precise weather classification. Based on this classification, IWR selects the top-K most relevant Weather-Specific Experts (WSEs) to handle the data discrepancies, ensuring strong detection under all weather conditions. Additionally, we propose Unified Dual-Modal Augmentation (UDMA), which augments LiDAR and 4D radar data synchronously while preserving the realism of scenes. Extensive experiments on a real-world dataset demonstrate that AW-MoE achieves a ~15% improvement in adverse-weather performance over state-of-the-art methods while incurring negligible inference overhead. Moreover, integrating AW-MoE into established baseline detectors yields performance improvements surpassing current state-of-the-art methods. These results demonstrate the effectiveness and strong scalability of AW-MoE. We will release the code publicly at https://github.com/windlinsherlock/AW-MoE.
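The top-K weather-aware routing the abstract describes can be sketched in a few lines: a router scores each weather-specific expert from an image feature, selects the K highest-scoring experts, and mixes their outputs with renormalized gate weights. Everything below (dimensions, linear experts, the `aw_moe_layer` name) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not specify these.
D_IMG, D_FEAT, N_EXPERTS, TOP_K = 16, 32, 4, 2

# Router: maps an image feature to per-expert (weather) logits.
W_route = rng.standard_normal((D_IMG, N_EXPERTS))
# Each "weather-specific expert" is just a linear map here, for illustration.
W_experts = rng.standard_normal((N_EXPERTS, D_FEAT, D_FEAT))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aw_moe_layer(img_feat, fused_feat):
    """Mix the outputs of the top-K experts, gated by an image-derived score."""
    gate = softmax(img_feat @ W_route)            # per-expert weather probabilities
    top_k = np.argsort(gate)[-TOP_K:]             # indices of the K best experts
    weights = gate[top_k] / gate[top_k].sum()     # renormalize over the selected K
    out = np.zeros_like(fused_feat)
    for w, idx in zip(weights, top_k):
        out += w * (fused_feat @ W_experts[idx])  # weighted sum of expert outputs
    return out, top_k

img_feat = rng.standard_normal(D_IMG)    # stand-in for an image feature
fused_feat = rng.standard_normal(D_FEAT) # stand-in for a fused LiDAR/radar feature
out, selected = aw_moe_layer(img_feat, fused_feat)
print(out.shape, len(selected))  # (32,) 2
```

Because only K of the N experts run per input, the extra compute over a single shared head stays small, which is consistent with the "negligible inference overhead" claim.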
Problem

Research questions and friction points this paper is trying to address.

3D object detection
adverse weather
multi-modal
data distribution discrepancy
weather robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts
Weather-Robust 3D Detection
Image-guided Routing
Dual-Modal Augmentation
Multi-Modal Fusion