🤖 AI Summary
Existing infrared–visible image fusion methods lack deep semantic understanding of the scene, leading to loss of critical targets, artifacts, and structural distortions. To address this, the authors propose SGDFuse, a semantic-aware fusion framework that integrates Segment Anything Model (SAM) semantic priors with a conditional diffusion model. The framework operates in two stages: it first performs a preliminary fusion of multimodal features, then uses high-fidelity semantic masks generated by SAM, together with the coarse fusion result, as explicit conditions driving a coarse-to-fine diffusion denoising process that refines the output with semantic consistency and faithful detail preservation. Extensive experiments demonstrate state-of-the-art performance on both quantitative metrics (e.g., EN, SD, Q^AB/F) and qualitative assessments, while downstream tasks, including object detection and segmentation, show notable gains in robustness and accuracy. The framework substantially enhances the semantic plausibility and practical utility of fused imagery. Code is publicly available.
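For concreteness, semantic masks of the kind described above can be produced with the public `segment_anything` package. This is a minimal sketch, not the authors' pipeline: the checkpoint filename is the official ViT-H release, but the input image path is a placeholder, and SGDFuse's actual mask-selection strategy is not specified here.

```python
# Minimal sketch: generating SAM semantic masks to use as fusion priors.
# "visible.png" is a placeholder input; mask post-processing is omitted.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("visible.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with a boolean 'segmentation' array
# Each 'segmentation' mask can then serve as an explicit semantic prior
# conditioning the fusion stage.
```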
📝 Abstract
Infrared and visible image fusion (IVIF) aims to combine the thermal radiation information from infrared images with the rich texture details from visible images to enhance perceptual capabilities for downstream visual tasks. However, existing methods often fail to preserve key targets due to a lack of deep semantic understanding of the scene, and the fusion process itself can introduce artifacts and detail loss, severely compromising both image quality and task performance. To address these issues, this paper proposes SGDFuse, a conditional diffusion model guided by the Segment Anything Model (SAM), to achieve high-fidelity and semantically aware image fusion. The core of our method is to use high-quality semantic masks generated by SAM as explicit priors that guide the optimization of the fusion process via a conditional diffusion model. Specifically, the framework operates in two stages: it first performs a preliminary fusion of multi-modal features, and then uses the semantic masks from SAM, together with the preliminary fused image, as conditions to drive the diffusion model's coarse-to-fine generative denoising. This ensures that the fusion process not only has explicit semantic directionality but also preserves the high fidelity of the final result. Extensive experiments demonstrate that SGDFuse achieves state-of-the-art performance in both subjective and objective evaluations, as well as in its adaptability to downstream tasks, providing a powerful solution to the core challenges in image fusion. The code of SGDFuse is available at https://github.com/boshizhang123/SGDFuse.
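The abstract does not include implementation details, but the conditioning scheme it describes can be sketched in a few lines of PyTorch. The toy denoiser, tensor shapes, and plain DDPM ancestral sampler below are illustrative assumptions rather than the authors' architecture; the point is simply that every denoising step sees the preliminary fused image and the SAM mask as conditions.

```python
# A minimal sketch (not the authors' code) of conditional diffusion sampling
# where the denoiser is conditioned on [preliminary fusion, SAM mask].
import torch
import torch.nn as nn

class ToyConditionalDenoiser(nn.Module):
    """Stand-in for the paper's denoising network (architecture unspecified)."""
    def __init__(self, img_ch=1, cond_ch=2, hidden=32):
        super().__init__()
        # Input channels: noisy image + two condition maps.
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + cond_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, img_ch, 3, padding=1),
        )

    def forward(self, x_t, cond):
        # Predict the noise component from the noisy sample and the conditions.
        return self.net(torch.cat([x_t, cond], dim=1))

@torch.no_grad()
def ddpm_sample(denoiser, cond, steps=50, shape=(1, 1, 64, 64)):
    """Plain DDPM ancestral sampling, conditioned at every step."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoiser(x, cond)  # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # refined fused image

# Usage with hypothetical stage-1 outputs: a coarse fusion and a binary SAM mask.
prelim_fusion = torch.rand(1, 1, 64, 64)
sam_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
cond = torch.cat([prelim_fusion, sam_mask], dim=1)
fused = ddpm_sample(ToyConditionalDenoiser(), cond)
```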