Task-Generalized Adaptive Cross-Domain Learning for Multimodal Image Fusion

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key bottlenecks in multimodal image fusion, including modality misalignment, high-frequency detail loss, and strong task specificity, this paper proposes AdaSFFuse, a task-generalized adaptive cross-domain synergistic fusion framework. Methodologically, it introduces: (1) an adaptive approximate wavelet transform (AdaWAT) for learnable frequency decomposition; (2) a spatial-frequency Mamba module enabling dynamic cross-domain feature alignment and collaborative modeling; and (3) a learnable mapping mechanism that unifies adaptation across diverse tasks, including infrared-visible, multi-focus, multi-exposure, and medical image fusion. Extensive experiments demonstrate that AdaSFFuse achieves state-of-the-art performance on all four major fusion benchmarks, delivering superior fidelity, low computational overhead (only 1.2M parameters), and strong generalizability, significantly outperforming existing approaches.

📝 Abstract
Multimodal Image Fusion (MMIF) aims to integrate complementary information from different imaging modalities to overcome the limitations of individual sensors. It enhances image quality and facilitates downstream applications such as remote sensing, medical diagnostics, and robotics. Despite significant advancements, current MMIF methods still face challenges such as modality misalignment, high-frequency detail destruction, and task-specific limitations. To address these challenges, we propose AdaSFFuse, a novel framework for task-generalized MMIF through adaptive cross-domain co-fusion learning. AdaSFFuse introduces two key innovations: the Adaptive Approximate Wavelet Transform (AdaWAT) for frequency decoupling, and the Spatial-Frequency Mamba Blocks for efficient multimodal fusion. AdaWAT adaptively separates the high- and low-frequency components of multimodal images from different scenes, enabling fine-grained extraction and alignment of distinct frequency characteristics for each modality. The Spatial-Frequency Mamba Blocks then fuse the decoupled features across both the spatial and frequency domains, dynamically adjusting through learnable mappings to ensure robust fusion across diverse modalities. By combining these components, AdaSFFuse improves the alignment and integration of multimodal features, reduces frequency loss, and preserves critical details. Extensive experiments on four MMIF tasks -- Infrared-Visible Image Fusion (IVF), Multi-Focus Image Fusion (MFF), Multi-Exposure Image Fusion (MEF), and Medical Image Fusion (MIF) -- demonstrate AdaSFFuse's superior fusion performance at low computational cost with a compact network, striking a strong balance between performance and efficiency. The code will be publicly available at https://github.com/Zhen-yu-Liu/AdaSFFuse.
Problem

Research questions and friction points this paper is trying to address.

Addressing modality misalignment in multimodal image fusion tasks
Solving high-frequency detail destruction across different imaging modalities
Overcoming task-specific limitations in generalized image fusion applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Approximate Wavelet Transform for frequency decoupling
Spatial-Frequency Mamba Blocks for cross-domain fusion
Task-generalized framework with adaptive multimodal learning
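To make the frequency-decoupling idea concrete, here is a minimal sketch of a wavelet-style analysis step: a low-/high-pass filter pair splits an image into four subbands (LL, LH, HL, HH) via separable filtering with stride-2 downsampling. The filters are initialized to fixed Haar wavelets here; in an AdaWAT-like setting they would be learnable parameters trained end-to-end. All names and the NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def conv_down(x, k, axis):
    """Filter x with kernel k along `axis`, then downsample by 2."""
    x = np.moveaxis(x, axis, -1)
    n = x.shape[-1] - len(k) + 1
    out = sum(k[i] * x[..., i:i + n] for i in range(len(k)))
    out = out[..., ::2]  # stride-2 downsampling
    return np.moveaxis(out, -1, axis)

def wavelet_decompose(img, lo, hi):
    """One level of separable wavelet analysis -> (LL, LH, HL, HH)."""
    L = conv_down(img, lo, axis=0)   # low-pass along rows
    H = conv_down(img, hi, axis=0)   # high-pass along rows
    return (conv_down(L, lo, axis=1), conv_down(L, hi, axis=1),
            conv_down(H, lo, axis=1), conv_down(H, hi, axis=1))

# Haar initialization; a learnable variant would treat these arrays
# as trainable parameters updated by gradient descent.
lo = np.array([1.0, 1.0]) / np.sqrt(2)
hi = np.array([1.0, -1.0]) / np.sqrt(2)

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = wavelet_decompose(img, lo, hi)
print(LL.shape)  # each subband is half-resolution: (4, 4)
```

Once both modalities are decoupled into subbands, classic per-band fusion rules (e.g., averaging low-frequency bands, keeping max-magnitude high-frequency coefficients) could be applied before inverse reconstruction; AdaSFFuse instead fuses the bands with its Spatial-Frequency Mamba Blocks.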
Mengyu Wang
Key Laboratory of Opto-Electronic Information Science and Technology of Jiangxi Province, Nanchang Hangkong University, Nanchang, Jiangxi, 330063, China; Key Laboratory of Nondestructive Test (Ministry of Education), Nanchang Hangkong University, Nanchang, Jiangxi, 330063, China
Zhenyu Liu
Key Laboratory of Opto-Electronic Information Science and Technology of Jiangxi Province, Nanchang Hangkong University, Nanchang, Jiangxi, 330063, China
Kun Li
ReLER, CCAI, Zhejiang University, Hangzhou, 310027, China
Yu Wang
College of Engineering, Anhui Agricultural University, Hefei, 230036, China
Yuwei Wang
College of Engineering, Anhui Agricultural University, Hefei, 230036, China
Yanyan Wei
Hefei University of Technology (HFUT)
Fei Wang
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, China