GuidNoise: Single-Pair Guided Diffusion for Generalized Noise Synthesis

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image denoising methods rely heavily on camera metadata and large-scale paired noisy-clean image datasets, which limits their generalizability. This paper proposes GuidNoise, a diffusion-based framework for generalized noise synthesis that requires only a single noisy-clean image pair as guidance. Its core innovations are a guidance-aware affine feature modification (GAFM) module and a noise-aware refine loss, which together model real-world noise distributions without access to camera parameters. GuidNoise generates high-fidelity synthetic noisy images across diverse noise conditions and is directly applicable to data augmentation. Experiments show that its synthesized noise samples significantly improve the performance of lightweight denoising models in low-data regimes, enabling efficient, plug-and-play, self-augmented data expansion.

📝 Abstract
Recent image denoising methods have leveraged generative modeling for real noise synthesis to address the costly acquisition of real-world noisy data. However, these generative models typically require camera metadata and extensive target-specific noisy-clean image pairs, and often generalize poorly across settings. In this paper, to reduce these prerequisites, we propose a Single-Pair Guided Diffusion for generalized noise synthesis, GuidNoise, which uses a single noisy/clean pair as guidance; such a pair is often readily available within the training set itself. To train GuidNoise, which generates synthetic noisy images from the guidance, we introduce a guidance-aware affine feature modification (GAFM) and a noise-aware refine loss that leverage the inherent potential of diffusion models. This loss function refines the diffusion model's backward process, making the model more adept at generating realistic noise distributions. GuidNoise synthesizes high-quality noisy images under diverse noise environments without additional metadata during either training or inference. Additionally, GuidNoise enables the efficient generation of noisy-clean image pairs at inference time, making synthetic noise readily applicable for augmenting training data. This self-augmentation significantly improves denoising performance, especially in practical scenarios with lightweight models and limited training data. The code is available at https://github.com/chjinny/GuidNoise.
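The self-augmentation idea in the abstract can be sketched in a few lines. The `synthesize_noisy` function below is a hypothetical stand-in for GuidNoise's guided sampler (the real model runs a diffusion backward process conditioned on the guidance pair); here it merely re-samples Gaussian noise matched to the guidance residual's statistics, purely to illustrate how one guidance pair expands a clean set into extra noisy-clean training pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_noisy(clean, guidance_noisy, guidance_clean, rng):
    """Hypothetical stand-in for GuidNoise's sampler: estimate the
    guidance pair's residual noise level and re-sample matched Gaussian
    noise. The actual model instead runs a guided diffusion process."""
    residual = guidance_noisy - guidance_clean
    sigma = residual.std()
    return clean + rng.normal(0.0, sigma, size=clean.shape)

# One guidance pair, e.g. taken from the existing training set.
guide_clean = rng.uniform(0.0, 1.0, size=(8, 8))
guide_noisy = guide_clean + rng.normal(0.0, 0.1, size=(8, 8))

# Self-augmentation: each clean image yields an extra synthetic pair.
clean_set = [rng.uniform(0.0, 1.0, size=(8, 8)) for _ in range(4)]
augmented = [(synthesize_noisy(c, guide_noisy, guide_clean, rng), c)
             for c in clean_set]
print(len(augmented))  # 4 extra noisy-clean pairs
```

In the paper's setting, the augmented pairs are then used to train a lightweight denoiser, which is where the reported gains in low-data regimes come from.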
Problem

Research questions and friction points this paper is trying to address.

Generates synthetic noisy images from a single noisy-clean pair
Eliminates need for camera metadata and extensive paired data
Enhances denoising via self-augmentation with lightweight models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single noisy/clean pair guides diffusion model
Guidance-aware affine feature modification refines synthesis
Noise-aware loss enhances realistic noise distribution generation