Degradation-Robust Fusion: An Efficient Degradation-Aware Diffusion Framework for Multimodal Image Fusion in Arbitrary Degradation Scenarios

📅 2026-04-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of existing image fusion methods, which often struggle under complex degradations such as noise, blur, and low resolution, while also failing to balance interpretability and generalization. The authors propose a degradation-aware diffusion-based fusion framework, the first to apply diffusion models to multimodal image fusion in the absence of natural fusion labels. By implicitly denoising to directly regress the fused output and incorporating a joint observation model correction mechanism during sampling, the method simultaneously enforces degradation and fusion constraints, enabling flexible adaptation to arbitrary degradation scenarios. Coupled with a few-step sampling strategy, the approach achieves significant computational efficiency. Extensive experiments demonstrate that the proposed method consistently outperforms state-of-the-art techniques across diverse fusion tasks and challenging degradation settings, offering superior robustness and reconstruction accuracy.
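A minimal PyTorch-style sketch of what such few-step sampling with direct fused-image regression could look like. The function name, the `model(x_t, src_a, src_b, t)` signature, and the DDIM-style update with an assumed `alpha_bars` schedule are all illustrative placeholders, not the paper's actual implementation:

```python
import torch

@torch.no_grad()
def few_step_fusion_sampling(model, src_a, src_b, alpha_bars):
    """Few-step sampling with a network that regresses the fused image
    (implicit denoising) instead of predicting noise.

    `alpha_bars` is an ascending list of cumulative signal levels, e.g.
    [0.02, 0.2, 0.5, 0.8, 0.99] for a 5-step sampler (assumed schedule);
    `model(x_t, src_a, src_b, t)` is assumed to return the fused-image
    estimate x0_hat directly.
    """
    x_t = torch.randn_like(src_a)  # start from pure Gaussian noise
    for i, ab_t in enumerate(alpha_bars):
        t = torch.full((x_t.shape[0],), ab_t, device=x_t.device)
        x0_hat = model(x_t, src_a, src_b, t)  # direct fused-image regression
        # Recover the noise implied by the x0 estimate (DDIM identity):
        # x_t = sqrt(ab_t) * x0 + sqrt(1 - ab_t) * eps
        eps_hat = (x_t - ab_t ** 0.5 * x0_hat) / (1.0 - ab_t) ** 0.5
        # Deterministic step to the next (cleaner) signal level.
        ab_next = alpha_bars[i + 1] if i + 1 < len(alpha_bars) else 1.0
        x_t = ab_next ** 0.5 * x0_hat + (1.0 - ab_next) ** 0.5 * eps_hat
    return x_t  # with ab_next == 1.0 this equals the final x0_hat
```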

๐Ÿ“ Abstract
Complex degradations like noise, blur, and low resolution are typical challenges in real world image fusion tasks, limiting the performance and practicality of existing methods. End to end neural network based approaches are generally simple to design and highly efficient in inference, but their black-box nature leads to limited interpretability. Diffusion based methods alleviate this to some extent by providing powerful generative priors and a more structured inference process. However, they are trained to learn a single domain target distribution, whereas fusion lacks natural fused data and relies on modeling complementary information from multiple sources, making diffusion hard to apply directly in practice. To address these challenges, this paper proposes an efficient degradation aware diffusion framework for image fusion under arbitrary degradation scenarios. Specifically, instead of explicitly predicting noise as in conventional diffusion models, our method performs implicit denoising by directly regressing the fused image, enabling flexible adaptation to diverse fusion tasks under complex degradations with limited steps. Moreover, we design a joint observation model correction mechanism that simultaneously imposes degradation and fusion constraints during sampling to ensure high reconstruction accuracy. Experiments on diverse fusion tasks and degradation configurations demonstrate the superiority of the proposed method under complex degradation scenarios.
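A minimal sketch of how a joint observation-model correction could be injected into each sampling step as a data-consistency update on the fused estimate. The `degrade_ops` and `modality_maps` callables and the single gradient step are assumptions for illustration, not the paper's API:

```python
import torch

def joint_observation_correction(x0_hat, observations, degrade_ops,
                                 modality_maps, step_size=0.1, n_iters=1):
    """Data-consistency correction applied to the fused estimate during
    sampling. Each observed source y_i should be explainable by mapping
    the fused image back to its modality (fusion constraint) and then
    applying the known degradation operator (degradation constraint).

    `degrade_ops[i]` (e.g. blur + downsampling) and `modality_maps[i]`
    are illustrative callables, not the paper's actual operators.
    """
    x = x0_hat.detach().requires_grad_(True)
    for _ in range(n_iters):
        # Residual between simulated degraded observations and real ones.
        loss = sum(
            ((deg(mod(x)) - y) ** 2).mean()
            for y, deg, mod in zip(observations, degrade_ops, modality_maps)
        )
        (grad,) = torch.autograd.grad(loss, x)
        # Gradient step pulls x0_hat toward consistency with all sources.
        x = (x - step_size * grad).detach().requires_grad_(True)
    return x.detach()
```

In a sampler like the sketch above, this correction would be applied to `x0_hat` before the re-noising update at each step, so both constraints shape the trajectory rather than only the final output.
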
Problem

Research questions and friction points this paper is trying to address.

image fusion
degradation
diffusion models
multimodal
real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

degradation-aware diffusion
implicit denoising
multimodal image fusion
joint observation model correction
arbitrary degradation scenarios