Multi-Modal Decouple and Recouple Network for Robust 3D Object Detection

📅 2026-03-08
🏛️ IEEE Transactions on Circuits and Systems for Video Technology (Print)
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of existing tightly coupled multi-modal BEV fusion methods when LiDAR or camera data is corrupted. To enhance robustness, the authors propose a decouple-and-recouple framework that explicitly decomposes multi-modal BEV features into modality-invariant and modality-specific components, enabling cross-modal compensation through invariant representations. A three-expert network is designed to handle LiDAR-only, camera-only, and dual-modality corruption scenarios, complemented by an adaptive fusion mechanism that dynamically integrates information based on input conditions. Evaluated on an extended nuScenes corruption benchmark, the method achieves state-of-the-art detection accuracy across both clean and various corrupted settings, marking the first approach to explicitly decouple and robustly fuse multi-modal BEV features.

πŸ“ Abstract
Multi-modal 3D object detection with bird's eye view (BEV) has achieved impressive advances on benchmarks. Nonetheless, accuracy may drop significantly in the real world due to data corruption, such as sensor misconfiguration for LiDAR and adverse scene conditions for camera. One design bottleneck of previous models lies in the tight coupling of multi-modal BEV features during fusion, which may degrade overall system performance if one or both modalities are corrupted. To mitigate this, we propose a Multi-Modal Decouple and Recouple Network for robust 3D object detection under data corruption. Different modalities commonly share some high-level invariant features. We observe that these invariant features across modalities do not always fail simultaneously, because different types of data corruption affect each modality in distinct ways. These invariant features can therefore be recovered across modalities for robust fusion under data corruption. To this end, we explicitly decouple camera/LiDAR BEV features into modality-invariant and modality-specific parts. This allows invariant features to compensate for each other while mitigating the negative impact of a corrupted modality on the other. We then recouple these features into three experts to handle different types of data corruption, respectively, i.e., LiDAR, camera, and both. For each expert, we use modality-invariant features as robust information, while modality-specific features serve as a complement. Finally, we adaptively fuse the three experts to extract robust features for 3D object detection. For validation, we collect a benchmark with a large quantity of data corruption for LiDAR, camera, and both, based on nuScenes. Our model is trained on clean nuScenes and tested on all types of data corruption. It consistently achieves the best accuracy on both corrupted and clean data compared to recent models.
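The decouple-and-recouple pipeline described in the abstract can be sketched with a toy NumPy example. This is a minimal illustration under stated assumptions, not the paper's implementation: the projections `W_inv`, `W_cam`, and `W_lid` stand in for learned decoupling layers, and the gating score is a hypothetical stand-in for the paper's adaptive fusion network.

```python
import numpy as np

rng = np.random.default_rng(0)
C, D, H, W = 8, 6, 4, 4  # input channels, projected dim, toy BEV grid

# Hypothetical "learned" projections, random here purely for illustration.
W_inv = rng.normal(size=(C, D))  # shared projection -> modality-invariant part
W_cam = rng.normal(size=(C, D))  # camera-specific projection
W_lid = rng.normal(size=(C, D))  # LiDAR-specific projection

def project(feat, w):
    """Channel-wise linear projection of a (C, H, W) BEV feature map."""
    return np.einsum('chw,cd->dhw', feat, w)

def decouple_recouple(bev_cam, bev_lid):
    # 1. Decouple each modality into invariant and specific components.
    inv_cam, spec_cam = project(bev_cam, W_inv), project(bev_cam, W_cam)
    inv_lid, spec_lid = project(bev_lid, W_inv), project(bev_lid, W_lid)

    # Invariant features from the two modalities can compensate each other.
    inv = 0.5 * (inv_cam + inv_lid)

    # 2. Recouple into three experts for LiDAR-, camera-, and dual-corruption:
    # each relies on the invariant part, with specific parts as a complement.
    experts = np.stack([
        inv + spec_cam,  # LiDAR corrupted: lean on camera-specific cues
        inv + spec_lid,  # camera corrupted: lean on LiDAR-specific cues
        inv,             # both corrupted: invariant features only
    ])

    # 3. Adaptive fusion: a toy softmax gate over globally pooled expert
    # responses (the paper's gate would be learned from the inputs).
    scores = experts.mean(axis=(1, 2, 3))
    gate = np.exp(scores - scores.max())
    gate /= gate.sum()
    return np.einsum('e,edhw->dhw', gate, experts)

fused = decouple_recouple(rng.normal(size=(C, H, W)),
                          rng.normal(size=(C, H, W)))
print(fused.shape)  # (6, 4, 4)
```

The key design point mirrored here is that a corrupted modality only pollutes its own specific branch; the shared invariant branch still receives a clean signal from the other sensor, which is what the three experts and the gate exploit.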
Problem

Research questions and friction points this paper is trying to address.

multi-modal
3D object detection
data corruption
sensor fusion
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-modal fusion
feature decoupling
modality-invariant representation
robust 3D object detection
BEV perception
Rui Ding
Principal Researcher, Microsoft
Causal Discovery, Causal Inference, Advanced Data Analysis
Zhaonian Kuang
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, P.R. China
Yuzhe Ji
Intelligent Transportation Thrust of the Systems Hub, The Hong Kong University of Science and Technology (Guangzhou), P.R. China
Meng Yang
Associate Professor, Southwest Jiaotong University
Artificial Intelligence, Reinforcement Learning, Computer Vision, Sequence Design
Xinhu Zheng
Assistant Professor, The Hong Kong University of Science and Technology (Guangzhou)
Gang Hua
Director of Applied Science, AI, Amazon.com, Inc., IEEE & IAPR Fellow
Computer Vision, Machine Learning, Artificial Intelligence, Robotics, Multimedia