Interactive Spatial-Frequency Fusion Mamba for Multi-Modal Image Fusion

πŸ“… 2026-02-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limited interaction between spatial and frequency information in existing multimodal image fusion methods, which often results in insufficient detail preservation and constrained cross-modal complementarity. To overcome this, we propose an interactive spatial-frequency fusion Mamba framework that introduces, for the first time, a dynamic mechanism wherein frequency-domain information adaptively guides spatial feature fusion. Leveraging the Mamba state space model, our approach efficiently captures long-range dependencies with linear computational complexity. The architecture integrates modality-specific extractors, multi-scale frequency decomposition, and an adaptive fusion module to effectively harness complementary cross-modal information. Extensive experiments demonstrate that our method significantly outperforms current state-of-the-art approaches across six benchmark datasets, achieving notable improvements in both detail retention and overall information fidelity.
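The core idea summarized above — frequency-domain information adaptively guiding spatial feature fusion — can be illustrated with a minimal toy sketch. This is not the paper's ISF module; the function name, the sigmoid gate, and the scalar weight `w` are assumptions for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def frequency_guided_fusion(spat_a, spat_b, freq_feat, w=1.0):
    """Toy sketch (hypothetical, not the paper's ISF module):
    a frequency feature map produces per-pixel gates in (0, 1)
    that weight the two modalities' spatial features before summing."""
    gate = sigmoid(w * freq_feat)
    return gate * spat_a + (1.0 - gate) * spat_b
```

With a zero frequency feature the gate is 0.5 everywhere, so the fusion degenerates to a plain average of the two modalities; non-zero frequency responses (e.g. strong high-frequency edges in one modality) shift the gate toward that modality.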

πŸ“ Abstract
Multi-Modal Image Fusion (MMIF) aims to combine images from different modalities to produce fused images, retaining texture details and preserving significant information. Recently, some MMIF methods incorporate frequency domain information to enhance spatial features. However, these methods typically rely on simple serial or parallel spatial-frequency fusion without interaction. In this paper, we propose a novel Interactive Spatial-Frequency Fusion Mamba (ISFM) framework for MMIF. Specifically, we begin with a Modality-Specific Extractor (MSE) to extract features from different modalities. It models long-range dependencies across the image with linear computational complexity. To effectively leverage frequency information, we then propose a Multi-scale Frequency Fusion (MFF). It adaptively integrates low-frequency and high-frequency components across multiple scales, enabling robust representations of frequency features. More importantly, we further propose an Interactive Spatial-Frequency Fusion (ISF). It incorporates frequency features to guide spatial features across modalities, enhancing complementary representations. Extensive experiments are conducted on six MMIF datasets. The experimental results demonstrate that our ISFM achieves better performance than other state-of-the-art methods. The source code is available at https://github.com/Namn23/ISFM.
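The multi-scale low/high-frequency decomposition described in the abstract (the MFF module's input) can be sketched with a simple Fourier-domain low-pass split. This is a generic illustration under assumed cutoff ratios, not the paper's implementation:

```python
import numpy as np

def frequency_split(img, cutoff_ratio):
    """Split a 2-D image into low- and high-frequency components
    using a circular low-pass mask in the shifted Fourier domain.
    (Illustrative sketch; the paper's actual decomposition may differ.)"""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2.0, w / 2.0
    radius = cutoff_ratio * min(h, w) / 2.0
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low  # residual carries edges and fine texture
    return low, high

def multiscale_decompose(img, ratios=(0.1, 0.25, 0.5)):
    """Return (low, high) pairs at several cutoff scales
    (the ratios here are assumed values for illustration)."""
    return [frequency_split(img, r) for r in ratios]
```

Because the high-frequency part is defined as the residual, each (low, high) pair reconstructs the input exactly, which is the property a fusion module exploits when it recombines components across scales.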
Problem

Research questions and friction points this paper is trying to address.

Multi-Modal Image Fusion
Spatial-Frequency Fusion
Image Fusion
Frequency Domain
Feature Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive Spatial-Frequency Fusion
Mamba
Multi-Modal Image Fusion
Multi-scale Frequency Fusion
Modality-Specific Extractor
Yixin Zhu
Assistant Professor, Peking University
Computer Vision · Visual Reasoning · Human-Robot Teaming
Long Lv
Affiliated Zhongshan Hospital of Dalian University
Pingping Zhang
School of Future Technology, Dalian University of Technology and the Key Laboratory of Data Science and Smart Education (Hainan Normal University), Ministry of Education
Xuehu Liu
Wuhan University of Technology; Dalian University of Technology
Tongdan Tang
Central Hospital of Dalian University of Technology
Feng Tian
Affiliated Zhongshan Hospital of Dalian University
Weibing Sun
Affiliated Zhongshan Hospital of Dalian University
Huchuan Lu
School of Information and Communication Engineering, Dalian University of Technology