MagicFuse: Single Image Fusion for Visual and Semantic Reinforcement

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel single-image cross-spectral fusion paradigm for the challenging setting in which only a single low-quality visible-light image is available. Leveraging diffusion models, the approach employs a dual-branch architecture that combines intra-visible-domain knowledge enhancement with cross-spectral (infrared) knowledge generation, further refined through a multi-domain knowledge fusion mechanism. The representation is optimized under dual constraints of visual fidelity and semantic consistency. Notably, the method extends data-level fusion to the knowledge level for the first time, enabling high-quality cross-spectral representation synthesis without real infrared inputs. The resulting representations achieve visual quality and downstream task performance comparable to, or even surpassing, state-of-the-art methods that depend on multimodal inputs.
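The dual constraints of visual fidelity and semantic consistency mentioned above can be illustrated with a minimal sketch. The paper's exact loss formulation is not given in this summary, so the terms below are common stand-ins: an L1 reconstruction term for visual fidelity and a cosine distance between features of a (hypothetical) frozen task network for semantic consistency. All function names and the weighting `lam` are illustrative, not the authors' definitions.

```python
import numpy as np

def visual_fidelity_loss(pred, target):
    # L1 reconstruction term: a common choice for a visual-fidelity
    # constraint (the paper's exact formulation is unspecified here).
    return float(np.abs(pred - target).mean())

def semantic_consistency_loss(feat_pred, feat_target):
    # Cosine distance between feature vectors, e.g. from a frozen
    # downstream-task network (hypothetical stand-in for the
    # semantic-consistency constraint).
    a = feat_pred / np.linalg.norm(feat_pred)
    b = feat_target / np.linalg.norm(feat_target)
    return 1.0 - float(a @ b)

def total_loss(pred, target, feat_pred, feat_target, lam=0.5):
    # Weighted sum of the two constraints; lam balances human-
    # observable quality against downstream semantic utility.
    return visual_fidelity_loss(pred, target) \
        + lam * semantic_consistency_loss(feat_pred, feat_target)
```

With identical predictions and features, both terms vanish, so `total_loss` is zero; any visual or semantic deviation raises it.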

📝 Abstract
This paper focuses on a highly practical scenario: how to continue benefiting from the advantages of multi-modal image fusion under harsh conditions when only visible imaging sensors are available. To achieve this goal, we propose a novel concept of single-image fusion, which extends conventional data-level fusion to the knowledge level. Specifically, we develop MagicFuse, a novel single-image fusion framework capable of deriving a comprehensive cross-spectral scene representation from a single low-quality visible image. MagicFuse first introduces an intra-spectral knowledge reinforcement branch and a cross-spectral knowledge generation branch based on diffusion models. They mine scene information obscured in the visible spectrum and learn thermal radiation distribution patterns transferred to the infrared spectrum, respectively. Building on these, we design a multi-domain knowledge fusion branch that integrates the probabilistic noise from the diffusion streams of the two branches, from which a cross-spectral scene representation is obtained through successive sampling. We then impose both visual and semantic constraints to ensure that this scene representation satisfies human observation while supporting downstream semantic decision-making. Extensive experiments show that MagicFuse achieves visual and semantic representation performance comparable to or even better than state-of-the-art fusion methods with multi-modal inputs, despite relying solely on a single degraded visible image.
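The multi-domain fusion step described in the abstract — integrating probabilistic noise from two diffusion streams and then sampling — can be sketched generically. This is not the authors' implementation: the two branch predictors below are trivial stand-ins for the intra-spectral and cross-spectral denoising networks, the convex-combination fusion with weight `w` is a hypothetical choice (the paper's fusion branch is presumably learned), and the reverse update is a standard DDPM step.

```python
import numpy as np

rng = np.random.default_rng(0)

def intra_spectral_noise(x_t, t):
    # Stand-in for the intra-spectral knowledge reinforcement branch;
    # a real model would be a trained denoising network.
    return 0.1 * x_t

def cross_spectral_noise(x_t, t):
    # Stand-in for the cross-spectral knowledge generation branch.
    return -0.05 * x_t

def fused_noise(x_t, t, w=0.5):
    # Hypothetical fusion: convex combination of the two branches'
    # noise predictions (the paper's actual mechanism is learned).
    return w * intra_spectral_noise(x_t, t) + (1 - w) * cross_spectral_noise(x_t, t)

def ddpm_step(x_t, t, alphas, alpha_bars):
    # One standard DDPM reverse step driven by the fused noise estimate:
    # x_{t-1} = (x_t - beta_t / sqrt(1 - abar_t) * eps) / sqrt(alpha_t) + sigma * z
    eps = fused_noise(x_t, t)
    a_t, ab_t = alphas[t], alpha_bars[t]
    mean = (x_t - (1 - a_t) / np.sqrt(1 - ab_t) * eps) / np.sqrt(a_t)
    if t > 0:  # no noise added at the final step
        mean = mean + np.sqrt(1 - a_t) * rng.standard_normal(x_t.shape)
    return mean

# Toy schedule and "latent image" for successive sampling.
T = 10
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = rng.standard_normal((4, 4))
for t in reversed(range(T)):
    x = ddpm_step(x, t, alphas, alpha_bars)
```

Running the loop from t = T-1 down to 0 mirrors the "successive sampling" that yields the cross-spectral scene representation from the fused noise stream.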
Problem

Research questions and friction points this paper is trying to address.

single-image fusion
cross-spectral representation
visible image
multi-modal fusion
semantic reinforcement
Innovation

Methods, ideas, or system contributions that make the work stand out.

single-image fusion
knowledge-level fusion
diffusion models
cross-spectral representation
visual-semantic constraints