E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection

📅 2024-03-14
🤖 AI Summary
To address the suboptimal solutions arising from sequential, stage-wise training in multimodal image fusion and object detection, this paper proposes the first end-to-end, synchronized multimodal fusion detection framework tailored for autonomous driving—jointly optimizing infrared/visible-light fusion and detection. Methodologically, we design a differentiable, synchronized joint-optimization architecture featuring a gradient-matrix-constrained shared-parameter update mechanism and a dual-modal feature alignment fusion module, enabling single-stage co-training. Our core innovations lie in the synchronized joint optimization mechanism and gradient-level parameter sharing strategy, which eliminate performance bottlenecks caused by task decoupling. Evaluated on M3FD and DroneVehicle benchmarks, our method achieves state-of-the-art (SOTA) performance, improving mAP₅₀ by 3.9% and 2.0%, respectively, while simultaneously enhancing both fused image quality and detection accuracy.

📝 Abstract
Multimodal image fusion and object detection are crucial for autonomous driving. While current methods have advanced the fusion of texture details and semantic information, their complex training processes hinder broader applications. Addressing this challenge, we introduce E2E-MFD, a novel end-to-end algorithm for multimodal fusion detection. E2E-MFD streamlines the process, achieving high performance with a single training phase. It employs synchronous joint optimization across components to avoid suboptimal solutions tied to individual tasks. Furthermore, it implements a comprehensive optimization strategy in the gradient matrix for shared parameters, ensuring convergence to an optimal fusion detection configuration. Our extensive testing on multiple public datasets reveals E2E-MFD's superior capabilities, showcasing not only visually appealing image fusion but also impressive detection outcomes, such as a 3.9% and 2.0% mAP50 increase on horizontal object detection dataset M3FD and oriented object detection dataset DroneVehicle, respectively, compared to state-of-the-art approaches. The code is released at https://github.com/icey-zhang/E2E-MFD.
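The abstract's central idea — synchronous joint optimization, where shared parameters receive gradients from the fusion and detection objectives in a single training step rather than in sequential stages — can be illustrated with a minimal numerical sketch. This is an assumption-laden toy, not E2E-MFD's actual architecture or gradient-matrix strategy: a single shared parameter `w` feeds two task losses, and one update step is taken on the summed gradient.

```python
import numpy as np

# Toy sketch (illustrative assumptions, not the paper's code): one shared
# parameter w serves two tasks. Sequential, stage-wise training would
# optimize each loss in turn; synchronous joint optimization sums both
# task gradients and takes a single step, as in single-stage co-training.

rng = np.random.default_rng(0)
x = rng.normal(size=100)            # stand-in for shared features
y_fuse = 2.0 * x                    # "fusion" target (optimum at w = 2)
y_det = 4.0 * x                     # "detection" target (optimum at w = 4)

def grads(w):
    # d/dw of the mean squared error for each task
    g_fuse = np.mean(2 * (w * x - y_fuse) * x)
    g_det = np.mean(2 * (w * x - y_det) * x)
    return g_fuse, g_det

w = 0.0
for _ in range(200):
    g_fuse, g_det = grads(w)
    w -= 0.1 * (g_fuse + g_det)     # one joint step on the summed gradient

# The shared parameter converges to a compromise between the two task
# optima (w = 2 and w = 4), here their average, w ≈ 3 — rather than
# overfitting whichever task was trained last in a sequential scheme.
```

In the full method the trade-off between the two gradients is managed by the paper's gradient-matrix optimization strategy for shared parameters; the fixed 1:1 summation above is purely a simplifying assumption for this sketch.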
Problem

Research questions and friction points this paper is trying to address.

Multimodal Image Fusion
Object Detection
End-to-End Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

E2E-MFD
Multimodal Image Fusion
Object Detection
Jiaqing Zhang
University of Science and Technology of China
Recommender System, Data-Centric AI
Mingxiang Cao
The State Key Laboratory of Integrated Services Networks, Xidian University
Weiying Xie
Weiying Xie
remote image processing, deep learning, target detection, anomaly detection
Jie Lei
Universitat Politècnica de València
Computer Engineering, Electronic Engineering
Daixun Li
The State Key Laboratory of Integrated Services Networks, Xidian University
Wenbo Huang
Southeast University | Institute of Science Tokyo
Video Analysis, Multimedia, Ubiquitous Computing
Yunsong Li
The State Key Laboratory of Integrated Services Networks, Xidian University
Xue Yang
Shanghai AI Laboratory
Geng Yang