🤖 AI Summary
This work addresses the absence of antithetic modeling of initial noise in diffusion models. We propose a zero-overhead antithetic-noise pairing mechanism: pairing each initial noise with its negation to generate strongly negatively correlated samples, thereby enhancing generation diversity and uncertainty quantification. Our contributions are threefold: (1) We formulate the conjecture that the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift) and provide theoretical and empirical support for it, establishing a foundation for negative-correlation sampling; (2) We design a training-free, model-agnostic antithetic-noise pairing framework that requires no architectural modification or retraining; (3) We extend the method to randomized quasi-Monte Carlo estimation, further improving statistical estimation efficiency. Empirical evaluation on mainstream models, including Stable Diffusion, demonstrates improved image diversity without quality degradation, up to 90% narrower confidence intervals on downstream statistics, and systematic gains in uncertainty estimation accuracy, all achieved with zero computational overhead.
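As a toy illustration of the conjectured symmetry (not the paper's code): for an isotropic Gaussian the score is exactly affine, so `s(x) + s(-x)` is a constant in `x`, which is precisely "odd symmetry up to a constant shift". The mean `MU` and the function names below are hypothetical.

```python
# For an isotropic Gaussian N(mu, I) with unit variance, the score is exactly
# affine: s(x) = mu - x. Hence s(x) + s(-x) = 2*mu, a constant shift.
# The conjecture (as sketched here) is that a learned diffusion score behaves
# approximately like this: s(-x, t) ≈ -s(x, t) + c(t).

MU = 0.7  # hypothetical Gaussian mean

def gaussian_score(x):
    return MU - x

# Affine-antisymmetry check: the sum should not depend on x.
shifts = [gaussian_score(x) + gaussian_score(-x) for x in (-2.0, -0.5, 0.1, 3.0)]
assert all(abs(s - 2 * MU) < 1e-12 for s in shifts)
```

A learned score would satisfy this only approximately, with the residual depending on how far the data distribution is from point symmetry.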
📝 Abstract
We initiate a systematic study of antithetic initial noise in diffusion models. Across unconditional models trained on diverse datasets, text-conditioned latent-diffusion models, and diffusion-posterior samplers, we find that pairing each initial noise with its negation consistently yields strongly negatively correlated samples. To explain this phenomenon, we combine experiments and theoretical analysis, leading to a symmetry conjecture that the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), and provide evidence supporting it. Leveraging this negative correlation, we demonstrate two applications: (1) enhancing image diversity in models like Stable Diffusion without quality loss, and (2) sharpening uncertainty quantification (e.g., up to 90% narrower confidence intervals) when estimating downstream statistics. Building on these gains, we extend the two-point pairing to a randomized quasi-Monte Carlo estimator, which further improves estimation accuracy. Our framework is training-free, model-agnostic, and adds no runtime overhead.
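The variance-reduction payoff can be sketched with a one-dimensional antithetic-variates toy (a plain Monte Carlo illustration, not the paper's sampler; `f` is a hypothetical stand-in for a monotone downstream statistic of a generated sample):

```python
import math
import random
import statistics

random.seed(0)

def f(z):
    # Hypothetical monotone downstream statistic of a sample generated from noise z.
    return math.exp(z / 2)

N = 4000
# Baseline: average two independent draws per pair.
iid_pairs = [0.5 * (f(random.gauss(0, 1)) + f(random.gauss(0, 1))) for _ in range(N)]
# Antithetic: pair each initial noise z with its negation -z.
anti_pairs = [0.5 * (f(z) + f(-z)) for z in (random.gauss(0, 1) for _ in range(N))]

# f(z) and f(-z) are negatively correlated for monotone f, so the paired
# estimator has lower variance at identical sampling cost.
assert statistics.variance(anti_pairs) < statistics.variance(iid_pairs)
```

Negating initial noise costs nothing extra, which is why the pairing adds no runtime overhead; the confidence-interval narrowing reported above applies this mechanism to statistics of diffusion outputs.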