🤖 AI Summary
Diffusion models achieve high generation quality but are prone to memorization: the empirical score overfits the training data, so samples from the reverse SDE collapse onto training points. To address this, we propose kernel-smoothed empirical scores, which explicitly balance bias and variance to suppress memorization and improve generalization. Theoretically, we (i) establish the first asymptotic connection between the covariance of the empirical score and the principal components of the data; (ii) reveal dual regularization effects, via Gaussian diffusion in the ambient space and kernel smoothing in score space; and (iii) introduce the Log-Exponential Double-KDE (LED-KDE) framework and derive an asymptotic upper bound on the KL divergence between the generated and target distributions. Experiments on synthetic data and MNIST show significant mitigation of memorization, with simultaneous improvements in sample diversity and distribution fidelity, achieved without additional learnable parameters.
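The memorization failure mode described above (the reverse process driven by the raw empirical score collapsing onto training points) can be illustrated with a minimal sketch. The functions below are a hypothetical illustration, not the paper's implementation: `empirical_score` is the exact score of a Gaussian-smoothed empirical distribution, and `reverse_sample` runs an annealed Langevin-style stand-in for the reverse-time SDE over a decreasing noise schedule.

```python
import numpy as np

def empirical_score(x, data, sigma):
    # Exact score of the smoothed empirical distribution
    # p_sigma(x) = (1/n) * sum_i N(x; x_i, sigma^2 I).
    diffs = data - x                                    # (n, d)
    logits = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logits - logits.max())
    w /= w.sum()                                        # softmax responsibilities
    return (w[:, None] * diffs).sum(axis=0) / sigma**2

def reverse_sample(data, sigmas, rng, n_langevin=20):
    # Annealed Langevin dynamics as a stand-in for the reverse-time SDE:
    # with the raw empirical score, the sample is pulled onto a training point.
    x = sigmas[0] * rng.standard_normal(data.shape[1])
    for sigma in sigmas:
        eps = 0.1 * sigma**2                            # step size tied to noise level
        for _ in range(n_langevin):
            z = rng.standard_normal(x.shape)
            x = x + eps * empirical_score(x, data, sigma) + np.sqrt(2 * eps) * z
    return x
```

With a small terminal noise level, the returned sample lands within a few standard deviations of one of the training points, which is exactly the collapse the method above sets out to mitigate.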
📝 Abstract
Diffusion models now set the benchmark in high-fidelity generative sampling, yet they can be prone to memorization: their learned score overfits the finite dataset, so that reverse-time SDE samples are mostly training points. In this paper, we interpret the empirical score as a noisy version of the true score and show that its covariance matrix is asymptotically a re-weighted PCA of the data. In high dimensions, the small-time limit makes the noise variance blow up while simultaneously reducing spatial correlation. To reduce this variance, we introduce a kernel-smoothed empirical score and analyze its bias-variance trade-off. We derive asymptotic bounds on the Kullback-Leibler divergence between the true distribution and the one generated by the modified reverse SDE. Regularizing the score has the same effect as enlarging the training dataset, and thus helps prevent memorization. A spectral decomposition of the forward diffusion suggests better variance control under some regularity conditions on the true data distribution. Reverse diffusion with the kernel-smoothed empirical score can be reformulated as gradient descent drifting toward a Log-Exponential Double-Kernel Density Estimator (LED-KDE). This perspective highlights two regularization mechanisms at work in denoising diffusions: an initial Gaussian kernel first diffuses mass isotropically in the ambient space, while a second kernel, applied in score space, concentrates and spreads that mass along the data manifold. Hence, even a straightforward regularization, without any learning, already mitigates memorization and enhances generalization. Numerically, we illustrate our results with several experiments on synthetic and MNIST datasets.
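One concrete (and here hypothetical) reading of the kernel-smoothed empirical score is a Gaussian-kernel average of the empirical score field, with bandwidth `h` playing the bias-variance role discussed above. The Monte Carlo sketch below assumes a Gaussian smoothing kernel; function names and the sampling-based approximation are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def empirical_score(x, data, sigma):
    # Score of p_sigma(x) = (1/n) * sum_i N(x; x_i, sigma^2 I).
    diffs = data - x
    logits = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / sigma**2

def kernel_smoothed_score(x, data, sigma, h, n_mc=128, rng=None):
    # Gaussian-kernel smoothing of the score field,
    #   s_h(x) = E_{z ~ N(0, h^2 I)}[ s(x + z) ],
    # approximated by Monte Carlo. A larger bandwidth h lowers the
    # variance of the score estimate (less memorization) at the cost
    # of bias: the trade-off analyzed in the paper.
    rng = np.random.default_rng() if rng is None else rng
    pts = x + h * rng.standard_normal((n_mc, x.shape[0]))
    return np.mean([empirical_score(p, data, sigma) for p in pts], axis=0)
```

Plugging `kernel_smoothed_score` in place of `empirical_score` in a reverse-diffusion sampler leaves the drift unchanged where the score field is locally linear, while averaging away the sharp basins around individual training points.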