Learning What Matters: Steering Diffusion via Spectrally Anisotropic Forward Noise

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion probabilistic models (DPMs) typically exhibit implicit and hard-to-control inductive biases. Method: We propose Spectrally Anisotropic Gaussian Diffusion (SAGD), which explicitly injects spectral priors into the diffusion process by designing a non-isotropic forward-noise covariance that is diagonal in the frequency domain. SAGD unifies band masking and power-law weighting, and we derive the score-matching relationship under anisotropic covariances, showing that the learned score converges to the true data score as $t \to 0$. The design enables selective learning of critical frequency components while omitting corrupted bands. Results: On multiple vision benchmarks, SAGD outperforms standard DPMs, demonstrating that structured forward-noise design is an effective and practical handle for steering inductive bias.
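The core mechanism can be illustrated with a short numpy sketch: sample Gaussian noise whose covariance is diagonal in the frequency domain by shaping white noise with a spectrum $S(f)$, then use it in a VP-style forward marginal. Function names, the schedule variable, and the real-part trick are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def anisotropic_noise(shape, spectrum, rng):
    # Frequency-diagonal Gaussian noise: eps = Re(IFFT(sqrt(S) * FFT(white))).
    # `spectrum` is the per-frequency noise power S(f) (assumed real, >= 0).
    white = rng.standard_normal(shape)
    return np.fft.ifft2(np.sqrt(spectrum) * np.fft.fft2(white)).real

def forward_step(x0, alpha_bar_t, spectrum, rng):
    # One marginal of a VP-style forward process with spectrally shaped
    # noise (illustrative schedule; the paper's parameterization may differ).
    eps = anisotropic_noise(x0.shape, spectrum, rng)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
```

With `spectrum` set to all ones this reduces exactly to the standard isotropic forward process, which is a useful sanity check.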

📝 Abstract
Diffusion Probabilistic Models (DPMs) have achieved strong generative performance, yet their inductive biases remain largely implicit. In this work, we aim to build inductive biases into the training and sampling of diffusion models to better accommodate the target data distribution. We introduce an anisotropic noise operator that shapes these biases by replacing the isotropic forward covariance with a structured, frequency-diagonal covariance. This operator unifies band-pass masks and power-law weightings, allowing us to emphasize or suppress designated frequency bands, while keeping the forward process Gaussian. We refer to this as spectrally anisotropic Gaussian diffusion (SAGD). We derive the score relation for anisotropic covariances and show that, under full support, the learned score converges to the true data score as $t\!\to\!0$, while anisotropy reshapes the probability-flow path from noise to data. Empirically, we show the induced anisotropy outperforms standard diffusion across several vision datasets, and enables selective omission: learning while ignoring known corruptions confined to specific bands. Together, these results demonstrate that carefully designed anisotropic forward noise provides a simple, yet principled, handle to tailor inductive bias in DPMs.
Problem

Research questions and friction points this paper is trying to address.

Introducing anisotropic noise to shape diffusion model biases
Deriving score relations for anisotropic covariances in diffusion models
Enabling selective frequency learning and corruption omission in DPMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing anisotropic noise operator for structured covariance
Using spectrally anisotropic Gaussian diffusion process
Shaping inductive biases via frequency-band emphasis and suppression
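The two spectral designs listed above, band masks and power-law weightings, can both be expressed as a single frequency-diagonal spectrum $S(f)$. The sketch below builds either variant over a radial frequency grid; all parameter names and default values (`gamma`, `band`, `eps`) are hypothetical choices for illustration, not the paper's settings.

```python
import numpy as np

def radial_freq(n):
    # Radial frequency magnitude |f| for an n x n image, in FFT layout.
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    return np.sqrt(fx**2 + fy**2)

def spectral_weights(n, kind="powerlaw", gamma=1.0, band=(0.1, 0.3), eps=1e-3):
    # Noise spectrum S(f). "powerlaw": noise power grows as |f|^gamma,
    # so low frequencies stay informative longer. "bandmask": suppress
    # noise inside [band[0], band[1]) to emphasize that band.
    r = radial_freq(n)
    if kind == "powerlaw":
        return (r + eps) ** gamma  # eps avoids zero power at DC
    lo, hi = band
    S = np.ones((n, n))
    S[(r >= lo) & (r < hi)] = eps
    return S
```

For selective omission, the same mask logic can be inverted: assign large noise power to bands known to carry corruption, so the model learns to ignore them.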