Antithetic Noise in Diffusion Models

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work initiates a systematic study of antithetic initial noise in diffusion models. We propose a zero-overhead antithetic noise pairing mechanism: pairing each initial noise with its negation to generate strongly negatively correlated samples, thereby enhancing generation diversity and sharpening uncertainty quantification. Our contributions are threefold: (1) we formulate, and provide theoretical and empirical support for, the conjecture that the learned score function is approximately affine antisymmetric (odd symmetric up to a constant shift), which underpins negative-correlation sampling; (2) we design a training-free, model-agnostic antithetic pairing framework that requires no architectural modification or retraining; (3) we extend the two-point pairing to randomized quasi-Monte Carlo estimation, further improving statistical efficiency. Empirical evaluation on mainstream models, including Stable Diffusion, demonstrates improved image diversity without quality degradation, confidence intervals for downstream statistics that are up to 90% narrower, and systematic gains in uncertainty estimation accuracy, all with zero computational overhead.
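The pairing idea can be illustrated with a toy sketch. `toy_sampler` and `statistic` below are stand-ins of our own devising, not the paper's models: the sampler is exactly affine-odd in its noise (mimicking the conjectured symmetry), and the statistic is a monotone function of the output, so the paired estimates are negatively correlated and the averaged estimator has lower variance at the same sample budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_sampler(z):
    # Stand-in for a diffusion sampler: tanh is odd, and the +0.1 shift makes
    # the map affine-odd in z, mimicking the conjectured affine antisymmetry.
    return np.tanh(z) + 0.1

def statistic(x):
    # A nonlinear, monotone downstream statistic of one generated sample.
    return 1.0 / (1.0 + np.exp(-x.sum(axis=-1)))

d, n = 16, 4000

# i.i.d. baseline: 2n independent initial noises.
z_iid = rng.standard_normal((2 * n, d))
iid_vals = statistic(toy_sampler(z_iid))

# Antithetic pairing: n noises and their negations (same 2n-sample budget).
z = rng.standard_normal((n, d))
pair_vals = 0.5 * (statistic(toy_sampler(z)) + statistic(toy_sampler(-z)))

var_iid = iid_vals.var() / (2 * n)   # variance of the i.i.d. mean estimator
var_anti = pair_vals.var() / n       # variance of the paired mean estimator
print(f"i.i.d. mean-estimator variance:      {var_iid:.2e}")
print(f"antithetic mean-estimator variance:  {var_anti:.2e}")
```

Because negating the noise costs nothing and each pair member is an ordinary model call, the pairing adds no runtime overhead over i.i.d. sampling.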

📝 Abstract
We initiate a systematic study of antithetic initial noise in diffusion models. Across unconditional models trained on diverse datasets, text-conditioned latent-diffusion models, and diffusion-posterior samplers, we find that pairing each initial noise with its negation consistently yields strongly negatively correlated samples. To explain this phenomenon, we combine experiments and theoretical analysis, leading to a symmetry conjecture that the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), and provide evidence supporting it. Leveraging this negative correlation, we enable two applications: (1) enhancing image diversity in models like Stable Diffusion without quality loss, and (2) sharpening uncertainty quantification (e.g., up to 90% narrower confidence intervals) when estimating downstream statistics. Building on these gains, we extend the two-point pairing to a randomized quasi-Monte Carlo estimator, which further improves estimation accuracy. Our framework is training-free, model-agnostic, and adds no runtime overhead.
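In symbols (the notation below is ours, not the paper's: $s_\theta$ for the learned score and $c_t$ for a time-dependent constant), the conjectured affine antisymmetry reads:

```latex
% Conjectured approximate affine antisymmetry of the learned score:
% odd symmetry up to a constant (x-independent) shift c_t.
s_\theta(-x, t) \approx -\,s_\theta(x, t) + c_t
```

Under this property, reverse trajectories started from a noise and its negation remain approximate negations of each other (up to a drift induced by $c_t$), which is why the paired samples come out strongly negatively correlated.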
Problem

Research questions and friction points this paper is trying to address.

Studying antithetic noise in diffusion models
Enhancing image diversity without quality loss
Improving uncertainty quantification in downstream statistics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Antithetic noise pairing for negative correlation
Affine antisymmetric score function conjecture
Randomized quasi-Monte Carlo estimator for sharper estimation
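The randomized quasi-Monte Carlo extension can be sketched with standard tools; this is our illustration using SciPy's scrambled Sobol' sequence, not the paper's exact construction. Scrambled Sobol' points fill $[0,1)^d$ more evenly than i.i.d. uniforms, and the inverse normal CDF maps them to a stratified set of Gaussian initial noises, of which the antithetic pair is the two-point special case.

```python
import numpy as np
from scipy.stats import norm, qmc

d, m = 8, 256  # noise dimension, number of runs (a power of 2 suits Sobol')

# Scrambled Sobol' points in [0, 1)^d, mapped through the inverse normal
# CDF to a stratified set of Gaussian initial noises.
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
z_rqmc = norm.ppf(sobol.random(m))

# Each row would seed one diffusion run; averaging a downstream statistic
# over these runs plays the role of the RQMC estimator. As a sanity check,
# the stratified noises have a sample mean much closer to zero than an
# i.i.d. draw of the same size typically does.
z_iid = np.random.default_rng(0).standard_normal((m, d))
print(abs(z_rqmc.mean()), abs(z_iid.mean()))
```

Scrambling keeps each point marginally uniform, so the estimator stays unbiased while inheriting the low-discrepancy structure.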
Jing Jia
Department of Computer Science, Rutgers University
Sifan Liu
Duke University
Bowen Song
Department of EECS, University of Michigan
Wei Yuan
Department of Statistics, Rutgers University
Liyue Shen
University of Michigan
computer vision, machine learning, signal/image processing, medical imaging, medical image analysis
Guanyang Wang
Rutgers University
Statistics, Probability, Markov Chain Monte Carlo, Machine Learning