SGDFuse: SAM-Guided Diffusion for High-Fidelity Infrared and Visible Image Fusion

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing infrared–visible image fusion methods suffer from insufficient deep semantic understanding, leading to critical target loss, artifacts, and structural distortions. To address this, we propose a semantic-aware fusion framework integrating Segment Anything Model (SAM) semantic priors with a conditional diffusion model. Our method first leverages SAM to generate high-fidelity semantic masks, which serve as explicit priors to guide initial multimodal feature fusion. Subsequently, these masks—alongside the coarse fusion result—condition a two-stage diffusion process that refines the output with semantic consistency and faithful detail preservation. Extensive experiments demonstrate state-of-the-art performance across both quantitative metrics (e.g., EN, SD, QAB/F) and qualitative assessments, while downstream tasks—including object detection and segmentation—show significant improvements in robustness and accuracy. The framework substantially enhances the semantic plausibility and practical utility of fused imagery. Code is publicly available.

📝 Abstract
Infrared and visible image fusion (IVIF) aims to combine the thermal radiation information from infrared images with the rich texture details from visible images to enhance perceptual capabilities for downstream visual tasks. However, existing methods often fail to preserve key targets due to a lack of deep semantic understanding of the scene, while the fusion process itself can also introduce artifacts and detail loss, severely compromising both image quality and task performance. To address these issues, this paper proposes SGDFuse, a conditional diffusion model guided by the Segment Anything Model (SAM), to achieve high-fidelity and semantically-aware image fusion. The core of our method is to utilize high-quality semantic masks generated by SAM as explicit priors to guide the optimization of the fusion process via a conditional diffusion model. Specifically, the framework operates in a two-stage process: it first performs a preliminary fusion of multi-modal features, and then utilizes the semantic masks from SAM jointly with the preliminary fused image as a condition to drive the diffusion model's coarse-to-fine denoising generation. This ensures the fusion process not only has explicit semantic directionality but also guarantees the high fidelity of the final result. Extensive experiments demonstrate that SGDFuse achieves state-of-the-art performance in both subjective and objective evaluations, as well as in its adaptability to downstream tasks, providing a powerful solution to the core challenges in image fusion. The code of SGDFuse is available at https://github.com/boshizhang123/SGDFuse.
Problem

Research questions and friction points this paper is trying to address.

Existing fusion methods lack deep semantic understanding of the scene, so key targets are lost
The fusion process itself introduces artifacts and detail loss
Degraded fused images compromise both image quality and downstream task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

SAM-guided diffusion for semantic-aware fusion
Two-stage coarse-to-fine denoising generation
Semantic masks as explicit priors for optimization
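The two-stage pipeline summarized above — a preliminary multi-modal fusion, then a diffusion refinement conditioned on the coarse result and the SAM mask — can be sketched in miniature. This is an illustrative numpy toy, not the authors' implementation: the linear fusion weights, the thresholded stand-in for a SAM mask, and the noise-shrinking "denoiser" are all placeholder assumptions (a real conditional diffusion model predicts noise with a learned network).

```python
import numpy as np

def preliminary_fusion(ir, vis, mask):
    """Stage 1 (toy): weight thermal radiation more inside SAM-highlighted
    regions and visible texture elsewhere; SGDFuse learns this fusion."""
    return mask * (0.7 * ir + 0.3 * vis) + (1.0 - mask) * (0.3 * ir + 0.7 * vis)

def sam_guided_refinement(coarse, mask, n_steps=50, seed=0):
    """Stage 2 (toy): coarse-to-fine denoising conditioned jointly on the
    preliminary fusion and the semantic mask. Here each step simply pulls
    the noisy sample toward the condition while shrinking injected noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(coarse.shape)  # start from pure Gaussian noise
    # Condition: coarse fusion with semantically masked regions emphasized.
    cond = coarse + 0.1 * mask * (coarse - coarse.mean())
    for t in range(n_steps, 0, -1):
        x = (1.0 - 1.0 / t) * x + (1.0 / t) * cond   # drift toward condition
        x += 0.05 * (t / n_steps) * rng.standard_normal(coarse.shape)
    return x

# Tiny synthetic example (8x8 "images").
ir = np.random.default_rng(1).random((8, 8))
vis = np.random.default_rng(2).random((8, 8))
mask = (ir > 0.5).astype(float)            # stand-in for a SAM mask
coarse = preliminary_fusion(ir, vis, mask)
fused = sam_guided_refinement(coarse, mask)
```

Note the design point the sketch preserves: the mask enters twice, once to steer the initial fusion and once as part of the diffusion condition, which is what gives the refinement its explicit semantic directionality.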
Xiaoyang Zhang
Display Engineer, Apple Inc.
Display/Optics, AR/VR, MEMS sensor/actuator, Optical MEMS, Inertial sensors
Zhen Hua
School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
Yakun Ju
Assistant Professor, University of Leicester, UK
Computational Photography, Underwater Vision, Image Processing
Wei Zhou
School of Computer Science and Informatics, Cardiff University, Cardiff, CF10 3AT, United Kingdom
Jun Liu
School of Computing and Communications, Lancaster University, Lancaster, LA2 0LJ, United Kingdom
Alex C. Kot
Director of the Rapid-Rich Object Search (ROSE) Laboratory and the NTU-PKU Joint Research Institute, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore