🤖 AI Summary
This work addresses a core trade-off in multimodal image fusion: overemphasizing infrared intensity often discards visible-light detail, while preserving visible structure can diminish thermal-target saliency. To resolve this, the authors propose a difference-driven channel-spatial joint state space fusion mechanism. The approach leverages an inter-modal feature difference map to guide the fusion process: in the channel dimension, a cross-attention dual-state space model enables adaptive reweighting, while in the spatial dimension, a cross-modal state space scanning strategy facilitates global complementary integration. Notably, this is the first method to employ feature difference maps to guide multimodal fusion, achieving global dependency modeling with linear computational complexity. Extensive experiments on driving-scene and low-altitude UAV datasets demonstrate superior visual quality and quantitative performance compared to existing methods.
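To make the difference-guided channel reweighting concrete, here is a minimal PyTorch sketch of the idea, not the authors' implementation: the module name `DifferenceChannelReweight` and the pooled-difference gating below are illustrative assumptions standing in for the paper's cross-attention dual-state space model.

```python
# Minimal sketch (not the paper's code) of difference-driven channel
# reweighting, assuming (B, C, H, W) feature maps from each modality.
# All names here are hypothetical illustrations.
import torch
import torch.nn as nn

class DifferenceChannelReweight(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Bottleneck MLP mapping pooled difference statistics to
        # per-channel gates for each modality.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels),
        )

    def forward(self, f_ir: torch.Tensor, f_vis: torch.Tensor):
        # The inter-modal difference map highlights where the modalities
        # disagree (e.g., hot targets vs. textured background).
        diff = f_ir - f_vis                          # (B, C, H, W)
        stats = diff.abs().mean(dim=(2, 3))          # (B, C) pooled magnitude
        gates = torch.sigmoid(self.mlp(stats))       # (B, 2C)
        g_ir, g_vis = gates.chunk(2, dim=1)          # one gate set per modality
        # Reweight each modality's channels before fusing.
        f_ir = f_ir * g_ir.unsqueeze(-1).unsqueeze(-1)
        f_vis = f_vis * g_vis.unsqueeze(-1).unsqueeze(-1)
        return f_ir + f_vis
```

The point of the sketch is the data flow: the difference map, not either source feature alone, decides how much each channel of each modality contributes to the fused result.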
📝 Abstract
Multi-modal image fusion aims to integrate complementary information from multiple source images to produce high-quality fused images with enriched content. Although existing approaches based on state space models achieve satisfactory performance with high computational efficiency, they tend either to over-prioritize infrared intensity at the cost of visible details or, conversely, to preserve visible structure while diminishing thermal-target saliency. To overcome these challenges, we propose DIFF-MF, a novel difference-driven channel-spatial state space model for multi-modal image fusion. Our approach leverages feature-discrepancy maps between modalities to guide feature extraction, followed by a fusion process across both the channel and spatial dimensions. In the channel dimension, a channel-exchange module enhances channel-wise interaction through cross-attention dual state space modeling, enabling adaptive feature reweighting. In the spatial dimension, a spatial-exchange module employs cross-modal state space scanning to achieve comprehensive spatial fusion. By efficiently capturing global dependencies while maintaining linear computational complexity, DIFF-MF effectively integrates complementary multi-modal features. Experimental results on driving-scenario and low-altitude UAV datasets demonstrate that our method outperforms existing approaches in both visual quality and quantitative evaluation.
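The spatial-exchange idea can likewise be sketched. In the snippet below, which is an assumption rather than the paper's code, tokens from the two modalities are interleaved into a single sequence so one linear-time scan alternates between them at every spatial position; a GRU stands in for the selective state space scan, and the class name `CrossModalSpatialScan` is hypothetical.

```python
# Minimal sketch of cross-modal spatial scanning: interleave IR/VIS tokens
# and process them with one linear-time recurrent scan. A GRU substitutes
# for the selective state space scan; names are illustrative.
import torch
import torch.nn as nn

class CrossModalSpatialScan(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.scan = nn.GRU(channels, channels, batch_first=True)

    def forward(self, f_ir: torch.Tensor, f_vis: torch.Tensor):
        b, c, h, w = f_ir.shape
        # Flatten each modality to a token sequence: (B, H*W, C).
        t_ir = f_ir.flatten(2).transpose(1, 2)
        t_vis = f_vis.flatten(2).transpose(1, 2)
        # Interleave IR/VIS tokens position by position -> (B, 2*H*W, C),
        # so the scan sees both modalities as it sweeps the image.
        seq = torch.stack((t_ir, t_vis), dim=2).reshape(b, 2 * h * w, c)
        out, _ = self.scan(seq)  # one pass, linear in sequence length
        # Merge each IR/VIS output pair back into one fused token.
        fused = out.reshape(b, h * w, 2, c).mean(dim=2)
        return fused.transpose(1, 2).reshape(b, c, h, w)
```

The interleaving is what gives the fusion its global reach: every fused position depends on the scan state accumulated over both modalities across the whole image, while the recurrence keeps the cost linear in the number of pixels.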