🤖 AI Summary
Existing diffusion models struggle to simultaneously achieve high visual quality and signal fidelity in Gaussian denoising. To address this, we propose the Linear Combination Diffusion Denoiser (LCDD), a framework that instantiates a dual-path inference mechanism—comprising generative guidance and signal-fidelity preservation—governed by a single tunable scalar hyperparameter that explicitly balances the two objectives without fine-tuning or retraining. Built on pre-trained diffusion models, LCDD introduces noise-residual modeling and linear-weighted fusion to enable efficient, stable, and controllable inference. Extensive experiments across multiple benchmark datasets show that LCDD outperforms both conventional denoising methods and state-of-the-art diffusion-based approaches, with substantial PSNR and SSIM improvements.
📝 Abstract
Diffusion models have garnered considerable interest in computer vision, owing both to their capacity to synthesize photorealistic images and to their proven effectiveness in image reconstruction tasks. However, existing approaches fail to efficiently balance the high visual quality of diffusion models with the low distortion achieved by previous image reconstruction methods. Specifically, for the fundamental task of additive Gaussian noise removal, we first illustrate an intuitive method for leveraging pre-trained diffusion models. We then introduce our proposed Linear Combination Diffusion Denoiser (LCDD), which unifies two complementary inference procedures: one that leverages the model's generative potential and another that ensures faithful signal recovery. By exploiting the inherent structure of the denoising samples, LCDD achieves state-of-the-art performance and offers controlled, well-behaved trade-offs through a simple scalar hyperparameter adjustment.
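The core trade-off mechanism, a linear-weighted fusion of two denoised estimates controlled by one scalar, can be sketched as below. This is a minimal illustration assuming the combination happens at the level of the two paths' output images; the function name `lcdd_combine` and the weight `lam` are hypothetical labels, not the paper's exact formulation.

```python
import numpy as np

def lcdd_combine(x_generative, x_fidelity, lam):
    """Linearly blend two denoised estimates of the same image.

    lam = 1.0 fully trusts the generative (perceptual-quality) path;
    lam = 0.0 fully trusts the signal-fidelity path. Intermediate
    values trade off the two objectives, mirroring the single scalar
    hyperparameter described in the abstract (sketch, not the
    authors' implementation).
    """
    assert 0.0 <= lam <= 1.0, "weight must lie in [0, 1]"
    x_g = np.asarray(x_generative, dtype=float)
    x_f = np.asarray(x_fidelity, dtype=float)
    return lam * x_g + (1.0 - lam) * x_f

# Toy example: two candidate reconstructions of a 2x2 patch.
x_gen = np.array([[0.8, 0.2], [0.4, 0.6]])
x_fid = np.array([[0.6, 0.4], [0.5, 0.5]])
blended = lcdd_combine(x_gen, x_fid, lam=0.5)
```

Sweeping `lam` from 0 to 1 then traces out the perception–distortion trade-off without any retraining, since both paths reuse the same pre-trained model.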