🤖 AI Summary
Diffusion probabilistic models (DPMs) typically exhibit implicit and hard-to-control inductive biases. Method: We propose Spectral Anisotropic Gaussian Diffusion (SAGD), which explicitly injects spectral priors into the diffusion process by designing a non-isotropic forward-noise covariance that is diagonal in the frequency domain. SAGD unifies band masking and power-law weighting, ensuring that the learned score converges to the true data score as $t \to 0$. We derive the score-matching relationship under anisotropic covariances and enable selective learning of critical frequency components while omitting corrupted bands. Results: On multiple vision benchmarks, SAGD significantly outperforms standard DPMs, demonstrating that structured forward-noise design is both effective and practical for steering inductive bias.
📝 Abstract
Diffusion Probabilistic Models (DPMs) have achieved strong generative performance, yet their inductive biases remain largely implicit. In this work, we build inductive biases into the training and sampling of diffusion models to better match the target data distribution. We introduce an anisotropic noise operator that shapes these biases by replacing the isotropic forward covariance with a structured covariance that is diagonal in the frequency domain. This operator unifies band-pass masks and power-law weightings, allowing us to emphasize or suppress designated frequency bands while keeping the forward process Gaussian. We refer to this as Spectrally Anisotropic Gaussian Diffusion (SAGD). We derive the score relation for anisotropic covariances and show that, under full support, the learned score converges to the true data score as $t \!\to\! 0$, while anisotropy reshapes the probability-flow path from noise to data. Empirically, we show the induced anisotropy outperforms standard diffusion across several vision datasets and enables selective omission: learning while ignoring known corruptions confined to specific bands. Together, these results demonstrate that carefully designed anisotropic forward noise provides a simple, yet principled, handle to tailor inductive bias in DPMs.
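The core idea of a frequency-diagonal forward covariance can be illustrated with a short sketch. The function below (a hypothetical helper, not the paper's implementation) samples Gaussian noise whose covariance is diagonal in the 2-D Fourier basis: each frequency receives a power-law weight $|f|^{-\alpha/2}$, optionally multiplied by a band mask, so $\alpha = 0$ with no mask recovers ordinary isotropic white noise. The parameter names and the exact weighting scheme are assumptions for illustration only.

```python
import numpy as np

def anisotropic_noise(shape, alpha=0.0, band_mask=None, rng=None):
    """Sample Gaussian noise with a covariance diagonal in frequency space.

    alpha : power-law exponent; each frequency f is scaled by |f|^(-alpha/2).
    band_mask : optional array broadcastable to `shape`, multiplied onto the
                spectral weights (e.g. a binary band-pass mask).
    Both parameters are illustrative stand-ins for the operator described
    in the abstract, not the authors' actual API.
    """
    rng = np.random.default_rng(rng)
    h, w = shape
    # Start from isotropic white Gaussian noise in pixel space.
    white = rng.standard_normal((h, w))
    # Move to the frequency domain, where the covariance is diagonal.
    spec = np.fft.fft2(white)
    # Radial frequency magnitude for every Fourier coefficient.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # avoid dividing by zero at the DC component
    weights = f ** (-alpha / 2.0)      # power-law spectral weighting
    if band_mask is not None:
        weights = weights * band_mask  # optional band emphasis/suppression
    # Scale each frequency independently, then return to pixel space.
    shaped = spec * weights
    return np.real(np.fft.ifft2(shaped))

noise = anisotropic_noise((64, 64), alpha=2.0)
```

Because the weighting acts coefficient-wise in the Fourier basis, the forward process stays Gaussian; only the per-frequency variances change, which is exactly the "structured, frequency-diagonal covariance" the abstract describes.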