Revisiting Generative Infrared and Visible Image Fusion Based on Human Cognitive Laws

📅 2025-10-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address key challenges in infrared and visible image fusion—including imbalanced modality representation, weak distribution modeling in generative methods, and lack of interpretability in modality selection—this paper proposes HCLFuse, a cognition-inspired generative fusion framework. HCLFuse is the first to incorporate human cognitive principles into image fusion, featuring a multi-scale masked variational bottleneck encoder and a time-varying physics-guided diffusion model to achieve high-fidelity structural detail reconstruction and enhanced cross-modal semantic consistency. We introduce an information-mapping quantification theory and a physics-driven, interpretable modality selection mechanism. The framework operates in an unsupervised manner without requiring paired training data. Extensive experiments on multiple benchmark datasets demonstrate state-of-the-art performance, with significant improvements in semantic segmentation metrics, validating its robustness and practicality in complex scenarios.

Technology Category

Application Category

📝 Abstract
Existing infrared and visible image fusion methods often face the dilemma of balancing modal information. Generative fusion methods reconstruct fused images by learning from data distributions, but their generative capabilities remain limited. Moreover, the lack of interpretability in modal information selection further affects the reliability and consistency of fusion results in complex scenarios. This manuscript revisits the essence of generative image fusion under the inspiration of human cognitive laws and proposes a novel infrared and visible image fusion method, termed HCLFuse. First, HCLFuse investigates the quantification theory of information mapping in unsupervised fusion networks, which leads to the design of a multi-scale mask-regulated variational bottleneck encoder. This encoder applies posterior probability modeling and information decomposition to extract accurate and concise low-level modal information, thereby supporting the generation of high-fidelity structural details. Furthermore, the probabilistic generative capability of the diffusion model is integrated with physical laws, forming a time-varying physical guidance mechanism that adaptively regulates the generation process at different stages, thereby enhancing the ability of the model to perceive the intrinsic structure of data and reducing dependence on data quality. Experimental results show that the proposed method achieves state-of-the-art fusion performance in qualitative and quantitative evaluations across multiple datasets and significantly improves semantic segmentation metrics. This fully demonstrates the advantages of this generative image fusion method, drawing inspiration from human cognition, in enhancing structural consistency and detail quality.
Problem

Research questions and friction points this paper is trying to address.

Balancing modal information in infrared and visible image fusion
Improving interpretability and reliability of generative fusion methods
Enhancing structural consistency and detail quality in complex scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale mask-regulated variational bottleneck encoder extracts modal information
Time-varying physical guidance mechanism adaptively regulates diffusion generation
Human cognitive laws inspired fusion enhances structural consistency and details
🔎 Similar Papers
No similar papers found.
L
Lin Guo
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
X
Xiaoqing Luo
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
W
Wei Xie
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
Z
Zhancheng Zhang
School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
H
Hui Li
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
R
Rui Wang
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
Z
Zhenhua Feng
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
Xiaoning Song
Xiaoning Song
Professor of Computer Vision and Pattern Recognition, Jiangnan University
Pattern RecognitionComputer VisionArtificial Intelligence