CAWM-Mamba: A unified model for infrared-visible image fusion and compound adverse weather restoration

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to effectively fuse infrared and visible images degraded by compound adverse weather conditions, such as haze combined with rain or rain mixed with snow. This work proposes CAWM-Mamba, an end-to-end unified framework that, for the first time, jointly models image fusion and restoration from compound weather degradations. The framework introduces a Wavelet State Space Block (WSSB) that leverages wavelet-domain decomposition to decouple multi-frequency degradation components; within it, a frequency-domain state space model (Freq-SSM) captures anisotropic high-frequency degradation. The framework further integrates a Weather-Aware Preprocessing Module (WAPM) and a Cross-modal Feature Interaction Module (CFIM), and builds a unified degradation representation to enhance generalization. Evaluated on the AWMM-100K dataset and three standard benchmarks, the proposed method significantly outperforms state-of-the-art approaches, yielding fused images that substantially improve downstream semantic segmentation and object detection performance.

📝 Abstract
Multimodal Image Fusion (MMIF) integrates complementary information from multiple modalities to produce clearer and more informative fused images. MMIF under adverse weather is particularly crucial in autonomous driving and UAV monitoring applications. However, existing adverse weather fusion methods generally tackle only single types of degradation such as haze, rain, or snow, and fail when multiple degradations coexist (e.g., haze+rain, rain+snow). To address this challenge, we propose Compound Adverse Weather Mamba (CAWM-Mamba), the first end-to-end framework that jointly performs image fusion and compound weather restoration with unified shared weights. Our network contains three key components: (1) a Weather-Aware Preprocess Module (WAPM) to enhance degraded visible features and extract global weather embeddings; (2) a Cross-modal Feature Interaction Module (CFIM) to facilitate the alignment of heterogeneous modalities and the exchange of complementary features across modalities; and (3) a Wavelet State Space Block (WSSB) that leverages wavelet-domain decomposition to decouple multi-frequency degradations. WSSB includes Freq-SSM, a module that models anisotropic high-frequency degradation without redundancy, and a unified degradation representation mechanism to further improve generalization across complex compound weather conditions. Extensive experiments on the AWMM-100K benchmark and three standard fusion datasets demonstrate that CAWM-Mamba consistently outperforms state-of-the-art methods in both compound and single-weather scenarios. In addition, our fusion results excel in downstream tasks covering semantic segmentation and object detection, confirming the method's practical value in real-world adverse weather perception. The source code will be available at https://github.com/Feecuin/CAWM-Mamba.
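The abstract's core idea of decoupling multi-frequency degradations via wavelet-domain decomposition can be illustrated with a single-level 2D Haar transform: the LL subband carries the low-frequency scene structure (where haze mostly lives), while LH/HL/HH isolate directional high-frequency detail (where rain streaks and snow grain concentrate). The sketch below is a minimal NumPy illustration of this decomposition principle, not the authors' WSSB implementation; all function names here are our own.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform.

    Splits an image of shape (H, W), with H and W even, into four
    subbands of shape (H/2, W/2): LL (low-frequency approximation)
    and LH / HL / HH (directional high-frequency detail).
    """
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse single-level 2D Haar transform (perfect reconstruction)."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    out[0::2, 0::2] = a
    out[0::2, 1::2] = b
    out[1::2, 0::2] = c
    out[1::2, 1::2] = d
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth ramp (scene structure) + sparse noise (rain/snow-like detail).
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    clean = x + y
    noisy = clean + 0.5 * (rng.random((64, 64)) < 0.02)
    ll, lh, hl, hh = haar_dwt2(noisy)
    # High-frequency subbands of the smooth component are near zero,
    # so the sparse degradation is concentrated in LH/HL/HH.
    rec = haar_idwt2(ll, lh, hl, hh)
    print(np.allclose(rec, noisy))  # perfect reconstruction
```

Because the transform is invertible, a restoration model can process each subband separately (e.g., suppress streak energy in the detail bands, correct contrast in LL) and then reconstruct without losing information, which is the motivation the abstract gives for operating in the wavelet domain.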
Problem

Research questions and friction points this paper is trying to address.

infrared-visible image fusion
compound adverse weather
multimodal image fusion
image restoration
degradation coexistence
Innovation

Methods, ideas, or system contributions that make the work stand out.

infrared-visible image fusion
compound adverse weather restoration
Mamba architecture
wavelet-domain decomposition
cross-modal feature interaction
Huichun Liu
School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
Xiaosong Li
Foshan University
Image fusion; computer vision; pattern recognition
Zhuangfan Huang
School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
Tao Ye
NWPU (China)
Microsystems; Nanofabrication; Photovoltaic
Yang Liu
School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China
Haishu Tan
School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China