Generalization Dynamics of Linear Diffusion Models

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the theoretical gap in understanding how diffusion models transition from memorizing training samples to generalizing to the true data distribution. Through a rigorous analysis of linear diffusion models, it establishes the first analytical characterization of the memorization-to-generalization phase transition: the transition occurs when the sample size $N$ scales comparably with the input dimension $d$ ($N \asymp d$), and for $N > d$ the KL divergence between the sampling distribution and the target approaches its optimum linearly in $d/N$, independent of the underlying data distribution. The analysis employs a linear denoiser framework, explicitly deriving both the test error and the sampling distribution, and gives a precise theoretical quantification of the KL divergence. Key contributions are: (1) identifying sample complexity as the decisive factor governing generalization performance; and (2) offering a theoretical justification for regularization and early stopping as effective anti-overfitting mechanisms in the small-sample regime $N < d$.
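The $N \asymp d$ threshold can be illustrated with a minimal numerical sketch (an assumption-laden toy setup, not the paper's derivation): take the target to be a standard Gaussian $N(0, I_d)$, let the learned quantity be the sample covariance (which is what a linear model can extract from the data), and compute the KL divergence to the target in closed form. All parameter choices below (`d = 32`, the list of sample sizes, the rank tolerance `eps`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(cov, eps=1e-10):
    """KL( N(0, cov) || N(0, I_d) ) in nats; +inf when cov is rank-deficient."""
    d = cov.shape[0]
    eigs = np.linalg.eigvalsh(cov)
    if eigs.min() < eps:
        return np.inf
    return 0.5 * (eigs.sum() - d - np.log(eigs).sum())

d = 32
kl_by_N = {}
for N in [8, 16, 256, 1024, 4096]:
    X = rng.standard_normal((N, d))   # N training samples from the target N(0, I_d)
    cov_hat = X.T @ X / N             # sample covariance: rank <= min(N, d)
    kl_by_N[N] = kl_to_standard_normal(cov_hat)
    print(f"N = {N:4d}  KL = {kl_by_N[N]:.4f}")
```

For $N < d$ the divergence is infinite because entire directions of variation are absent from the training data, matching the small-sample regime described above; for $N > d$ the printed values shrink roughly in proportion to $d/N$.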

📝 Abstract
Diffusion models trained on finite datasets with $N$ samples from a target distribution exhibit a transition from memorisation, where the model reproduces training examples, to generalisation, where it produces novel samples that reflect the underlying data distribution. Understanding this transition is key to characterising the sample efficiency and reliability of generative models, but our theoretical understanding of this transition is incomplete. Here, we analytically study the memorisation-to-generalisation transition in a simple model using linear denoisers, which allow explicit computation of test errors, sampling distributions, and Kullback-Leibler divergences between samples and target distribution. Using these measures, we predict that this transition occurs roughly when $N \asymp d$, the dimension of the inputs. When $N$ is smaller than the dimension of the inputs $d$, so that only a fraction of relevant directions of variation are present in the training data, we demonstrate how both regularization and early stopping help to prevent overfitting. For $N>d$, we find that the sampling distributions of linear diffusion models approach their optimum (measured by the Kullback-Leibler divergence) linearly with $d/N$, independent of the specifics of the data distribution. Our work clarifies how sample complexity governs generalisation in a simple model of diffusion-based generative models and provides insight into the training dynamics of linear denoisers.
Problem

Research questions and friction points this paper is trying to address.

Analyzing memorization-to-generalization transition in linear diffusion models
Determining sample efficiency threshold when N ≈ d for generalization
Exploring regularization and early stopping effects on overfitting prevention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear denoisers enable explicit computation of test errors
Transition occurs when sample size is comparable to input dimension
Regularization and early stopping prevent overfitting effectively
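Continuing the same illustrative Gaussian setup (an assumption, not the paper's exact scheme), one concrete form of regularization is shrinking the sample covariance toward the identity. In the small-sample regime $N < d$ this restores full rank and turns an infinite divergence into a small finite one; the shrinkage weight `lam` is a hypothetical stand-in for the regularization strength (or, loosely, the early-stopping time) studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_to_standard_normal(cov, eps=1e-10):
    """KL( N(0, cov) || N(0, I_d) ) in nats; +inf when cov is rank-deficient."""
    d = cov.shape[0]
    eigs = np.linalg.eigvalsh(cov)
    if eigs.min() < eps:
        return np.inf
    return 0.5 * (eigs.sum() - d - np.log(eigs).sum())

d, N = 32, 16                     # small-sample regime: N < d
X = rng.standard_normal((N, d))   # training samples from the target N(0, I_d)
cov_hat = X.T @ X / N             # singular, so the unregularized KL is infinite

kl_by_lam = {}
for lam in [0.0, 0.1, 0.5, 0.9]:
    cov_reg = (1 - lam) * cov_hat + lam * np.eye(d)   # shrink toward identity
    kl_by_lam[lam] = kl_to_standard_normal(cov_reg)
    print(f"lam = {lam:.1f}  KL = {kl_by_lam[lam]:.4f}")
```

One caveat on the design: because the toy target covariance is exactly the identity, heavier shrinkage is trivially better here; for a generic target distribution there would be an optimal intermediate `lam`, mirroring the bias-variance trade-off that regularization and early stopping navigate.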