Squeezed Diffusion Models

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard diffusion models inject isotropic Gaussian noise, ignoring intrinsic data structure. To address this, the paper proposes Squeezed Diffusion Models (SDMs), which scale noise anisotropically along the principal component of the training distribution, estimated via principal component analysis (PCA). Inspired by quantum squeezed states, which redistribute uncertainty under the Heisenberg uncertainty principle, the paper studies two configurations: a Heisenberg variant that compensates principal-axis scaling with inverse scaling on orthogonal directions, and a standard variant that scales only the principal axis. Counterintuitively, mild anti-squeezing (increasing variance on the principal axis) works best: FID improves by up to 15% on CIFAR-10, CIFAR-100, and CelebA-64, and the precision-recall frontier shifts toward higher recall, indicating greater sample diversity. The work establishes a simple, physics-informed, data-aware form of noise shaping that delivers generative gains without architectural changes.
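The core mechanism can be sketched in a few lines: estimate the top principal component of the training data, then rescale the component of the Gaussian noise along that axis. This is a minimal illustrative sketch, not the paper's implementation; the function names and the single-axis PCA estimate are assumptions.

```python
import numpy as np

def principal_direction(data):
    """Top principal component of mean-centered data (rows = samples)."""
    centered = data - data.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit-norm principal axis

def squeezed_noise(shape, axis, scale, rng):
    """Isotropic Gaussian noise rescaled by `scale` along `axis`.

    scale > 1 is anti-squeezing (more variance on the principal axis);
    scale < 1 squeezes that axis.
    """
    eps = rng.standard_normal(shape)
    proj = eps @ axis  # component of each noise vector along the axis
    return eps + (scale - 1.0) * np.outer(proj, axis)

rng = np.random.default_rng(0)
# Toy data whose first coordinate dominates, so the PC is near e_0.
data = rng.standard_normal((1000, 8)) * np.array([3.0] + [1.0] * 7)
axis = principal_direction(data)
noise = squeezed_noise((5000, 8), axis, scale=1.2, rng=rng)
# Variance along the principal axis is amplified by scale**2.
var_along = np.var(noise @ axis)
```

With `scale = 1.2`, the variance along the principal axis grows to about 1.44 while the orthogonal directions keep unit variance.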

📝 Abstract
Diffusion models typically inject isotropic Gaussian noise, disregarding structure in the data. Motivated by the way quantum squeezed states redistribute uncertainty according to the Heisenberg uncertainty principle, we introduce Squeezed Diffusion Models (SDM), which scale noise anisotropically along the principal component of the training distribution. As squeezing enhances the signal-to-noise ratio in physics, we hypothesize that scaling noise in a data-dependent manner can better assist diffusion models in learning important data features. We study two configurations: (i) a Heisenberg diffusion model that compensates the scaling on the principal axis with inverse scaling on orthogonal directions and (ii) a standard SDM variant that scales only the principal axis. Counterintuitively, on CIFAR-10/100 and CelebA-64, mild antisqueezing (i.e., increasing variance on the principal axis) consistently improves FID by up to 15% and shifts the precision-recall frontier toward higher recall. Our results demonstrate that simple, data-aware noise shaping can deliver robust generative gains without architectural changes.
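The two configurations from the abstract can be sketched side by side: the SDM variant rescales only the principal-axis component, while the Heisenberg variant also applies a compensating scale to the orthogonal subspace. The exact compensation rule in the paper is not given here; this sketch assumes the orthogonal scale is chosen to preserve total noise variance.

```python
import numpy as np

def sdm_noise(eps, u, s):
    """SDM variant: rescale only the component along unit axis u by s."""
    proj = eps @ u
    return eps + (s - 1.0) * np.outer(proj, u)

def heisenberg_noise(eps, u, s):
    """Heisenberg variant: s on the principal axis, with a compensating
    orthogonal scale t chosen so the total variance is unchanged
    (an assumption; requires s**2 < d)."""
    d = eps.shape[1]
    t = np.sqrt((d - s**2) / (d - 1))
    parallel = np.outer(eps @ u, u)
    return s * parallel + t * (eps - parallel)

rng = np.random.default_rng(1)
u = np.zeros(8); u[0] = 1.0              # toy principal axis
eps = rng.standard_normal((20000, 8))
sdm = sdm_noise(eps, u, s=1.2)
hdm = heisenberg_noise(eps, u, s=1.2)
var_axis_sdm = sdm[:, 0].var()           # amplified to ~s**2
total_var_hdm = hdm.var(axis=0).sum()    # stays ~d = 8
```

Under this rule, anti-squeezing (s > 1) trades orthogonal variance for principal-axis variance in the Heisenberg variant, while the plain SDM variant simply adds variance on the principal axis.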
Problem

Research questions and friction points this paper is trying to address.

Anisotropic noise scaling along data principal components
Improving signal-to-noise ratio in diffusion models
Enhancing generative performance without architectural changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anisotropic noise scaling along principal components
Heisenberg-inspired diffusion with inverse orthogonal compensation
Data-aware noise shaping without architectural changes
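The "no architectural changes" point above follows because the shaped noise slots directly into the standard DDPM forward process in place of isotropic noise. A minimal sketch, assuming a linear beta schedule and a known unit principal axis `u` (both illustrative, not from the paper):

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bar, u, s, rng):
    """Sample x_t = sqrt(ab_t)*x0 + sqrt(1-ab_t)*eps with shaped eps."""
    eps = rng.standard_normal(x0.shape)
    proj = eps @ u
    eps = eps + (s - 1.0) * np.outer(proj, u)  # squeeze / anti-squeeze
    ab = alpha_bar[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps, eps

rng = np.random.default_rng(2)
T = 1000
betas = np.linspace(1e-4, 2e-2, T)          # illustrative linear schedule
alpha_bar = np.cumprod(1.0 - betas)
u = np.zeros(4); u[0] = 1.0                 # toy principal axis
x0 = rng.standard_normal((16, 4))
xt, eps = forward_diffuse(x0, t=500, alpha_bar=alpha_bar, u=u, s=1.1, rng=rng)
```

Only the noise sampler changes; the denoiser network and training loss are untouched, which is what makes the method drop-in.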