DAE-Fuse: An Adaptive Discriminative Autoencoder for Multi-Modality Image Fusion

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address detail blurring, modality bias, and inter-frame inconsistency in infrared–visible image fusion under nighttime and low-visibility conditions, this paper proposes a two-phase discriminative autoencoding framework. Methodologically: (1) an adaptive discriminative autoencoder (DAE) is designed, integrating two-phase reconstruction with adversarial learning; (2) a cross-modal feature disentanglement mechanism is introduced to mitigate modality preference; and (3) temporal consistency constraints are imposed, marking the first extension of multimodal image fusion to the video domain. The method achieves state-of-the-art performance across multiple benchmark datasets, demonstrates strong generalization capability, and transfers successfully to medical image fusion. It significantly enhances robust perception for autonomous driving and robotic systems.
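The two-phase pipeline described above (per-modality reconstruction, then fusion of latent features with an adversarial signal) can be sketched at a toy scale. Everything below is illustrative only: the linear layers, the mean-fusion rule, the `discriminator_score` stub, and all names are assumptions, not the paper's actual architecture, which the summary does not specify in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_LAT = 64, 16  # toy flattened-patch size and latent size (assumed)


class TinyAE:
    """Hypothetical linear autoencoder standing in for DAE-Fuse's deep networks."""

    def __init__(self):
        self.We = rng.normal(0.0, 0.1, (D_LAT, D_IN))  # encoder weights
        self.Wd = rng.normal(0.0, 0.1, (D_IN, D_LAT))  # decoder weights

    def encode(self, x):
        return np.tanh(self.We @ x)

    def decode(self, z):
        return self.Wd @ z


def fuse(z_ir, z_vis):
    # Placeholder fusion rule: element-wise mean of the latent codes.
    # The paper's adaptive fusion strategy is not reproduced here.
    return 0.5 * (z_ir + z_vis)


def discriminator_score(x):
    # Stand-in "discriminator": a fixed linear projection + sigmoid, returning
    # a pseudo-probability that the input looks like a natural image.
    w = rng.normal(0.0, 0.1, D_IN)
    return 1.0 / (1.0 + np.exp(-(w @ x)))


ae = TinyAE()
ir = rng.normal(size=D_IN)   # toy flattened infrared patch
vis = rng.normal(size=D_IN)  # toy flattened visible patch

# Phase 1: per-modality reconstruction (trained with a reconstruction loss).
rec_ir = ae.decode(ae.encode(ir))
rec_vis = ae.decode(ae.encode(vis))

# Phase 2: fuse the latent codes and decode; an adversarial loss derived from
# the discriminator would push the fused output toward sharp, natural images.
fused = ae.decode(fuse(ae.encode(ir), ae.encode(vis)))
score = discriminator_score(fused)
```

The point of the sketch is only the data flow: both modalities share one encoder/decoder, fusion happens in latent space, and the discriminator scores the decoded fused image rather than either input.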

📝 Abstract
In extreme scenarios such as nighttime or low-visibility environments, achieving reliable perception is critical for applications like autonomous driving, robotics, and surveillance. Multi-modality image fusion, particularly integrating infrared imaging, offers a robust solution by combining complementary information from different modalities to enhance scene understanding and decision-making. However, current methods face significant limitations: GAN-based approaches often produce blurry images that lack fine-grained details, while AE-based methods may introduce bias toward specific modalities, leading to unnatural fusion results. To address these challenges, we propose DAE-Fuse, a novel two-phase discriminative autoencoder framework that generates sharp and natural fused images. Furthermore, we pioneer the extension of image fusion techniques from static images to the video domain while preserving temporal consistency across frames, thus advancing the perceptual capabilities required for autonomous navigation. Extensive experiments on public datasets demonstrate that DAE-Fuse achieves state-of-the-art performance on multiple benchmarks, with superior generalizability to tasks like medical image fusion.
Problem

Research questions and friction points this paper is trying to address.

Image Fusion
Multi-modal Imaging
Autonomous Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

DAE-Fuse
Multi-modal Image Fusion
Autonomous Driving Enhancement