🤖 AI Summary
In self-supervised real-image denoising, two key challenges persist: (1) blind-spot networks (BSNs) rely on spatial independence assumptions, leading to loss of fine local detail and pixel-level discontinuities; and (2) diffusion models struggle to adapt to self-supervised paradigms because they rely on clean-noisy pairs. To address these, we propose Blind-Spot Guided Diffusion (BSGD), a framework that couples a BSN-guided branch, which provides a noise-aware prior to guide sampling, with a standard diffusion branch that captures full spatial correlations. This design preserves structural fidelity while modeling real noise precisely, without paired data. Trained in a fully self-supervised manner, the method achieves state-of-the-art performance on the SIDD and DND benchmarks, significantly outperforming existing self-supervised methods. Our code and pre-trained models are publicly available.
📝 Abstract
In this work, we present Blind-Spot Guided Diffusion, a novel self-supervised framework for real-world image denoising. Our approach addresses two major challenges: the limitations of blind-spot networks (BSNs), which often sacrifice local detail and introduce pixel discontinuities due to spatial independence assumptions, and the difficulty of adapting diffusion models to self-supervised denoising. We propose a dual-branch diffusion framework that combines a BSN-based diffusion branch, generating semi-clean images, with a conventional diffusion branch that captures underlying noise distributions. To enable effective training without paired data, we use the BSN-based branch to guide the sampling process, capturing noise structure while preserving local details. Extensive experiments on the SIDD and DND datasets demonstrate state-of-the-art performance, establishing our method as a highly effective self-supervised solution for real-world denoising. Code and pre-trained models are released at: https://github.com/Sumching/BSGD.
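The abstract describes a sampling process in which a BSN branch produces a semi-clean guide image that steers a diffusion branch. As a rough illustration only (the paper's actual networks, noise schedule, and guidance rule are not given here), the idea can be sketched as a toy NumPy loop: a center-masked mean filter stands in for the blind-spot network, and a simplified pull-toward-observation step stands in for the learned diffusion branch. Every function name, the schedule, and the blending rule below are illustrative assumptions, not the authors' method.

```python
import numpy as np


def bsn_branch(noisy):
    """Hypothetical stand-in for a blind-spot network.

    A 3x3 'donut' mean filter: each pixel is predicted from its 8
    neighbors and never from itself, mimicking the spatial-independence
    assumption of a BSN. Produces a semi-clean guide image.
    """
    h, w = noisy.shape
    padded = np.pad(noisy, 1, mode="reflect")
    out = np.zeros_like(noisy)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # blind spot: skip the center pixel
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 8.0


def guided_sampling(noisy, steps=20, guidance=0.5, rng=None):
    """Toy blind-spot-guided reverse sampling (illustrative only).

    At each step, a simplified 'diffusion branch' estimate (a pull of
    the current sample toward the noisy observation) is blended with
    the BSN branch's semi-clean guide, then a small amount of noise is
    re-injected, as in ancestral sampling.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    guide = bsn_branch(noisy)             # semi-clean image from the BSN branch
    x = rng.standard_normal(noisy.shape)  # start from pure noise
    for t in range(steps, 0, -1):
        alpha = t / steps  # crude linear schedule, strongest noise at t = steps
        # "Diffusion branch": toy clean-image estimate pulling x toward
        # the observation; a real model would predict noise/score here.
        x0_diff = x + (1.0 - alpha) * (noisy - x)
        # Blend with the BSN guide to keep local detail while the
        # diffusion branch models the full spatial correlations.
        x0 = (1.0 - guidance) * x0_diff + guidance * guide
        # Re-inject scaled noise on all but the last step.
        noise = rng.standard_normal(noisy.shape) if t > 1 else 0.0
        x = x0 + 0.1 * alpha * noise
    return x
```

On a synthetic constant image with additive Gaussian noise, the blended sample ends up closer to the clean image than the noisy input, which is all this sketch is meant to show; the real method trains both branches and guides sampling with learned distributions.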