🤖 AI Summary
For data distributions supported on low-dimensional manifolds in high-dimensional ambient space, empirical diffusion models suffer from memorization: they merely reproduce training samples without generating novel ones. To address this, we propose a post-hoc inertial update at the final denoising step, requiring no retraining. We provide the first rigorous proof that this mechanism eliminates memorization. Our method constructs the inertial step via the score function, and our analysis couples Gaussian kernel density estimation on the manifold with a Wasserstein-1 distance argument. Theoretically, the generated distribution converges to the true manifold distribution at the rate $O(n^{-2/(d+4)})$, where $d$ is the intrinsic manifold dimension, bypassing the curse of ambient dimensionality. As a consequence, the generated samples lie outside the set of $n$ training points. This work uncovers a fundamental connection between diffusion modeling and manifold learning, establishing a new paradigm for efficient, training-free generative modeling.
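The summary above does not spell out the update rule, so the following is a minimal sketch under stated assumptions: `empirical_score` is the standard score of the Gaussian-smoothed empirical distribution $p_t = \frac{1}{n}\sum_i \mathcal{N}(x_i, tI)$, and `inertial_final_step` is a hypothetical extrapolation of the last denoising displacement. Both function names and the exact form of the inertial update are illustrative, not the paper's construction.

```python
import numpy as np

def empirical_score(x, data, t):
    """Score grad log p_t(x) of the Gaussian-smoothed empirical
    distribution p_t = (1/n) sum_i N(x_i, t I)."""
    d2 = np.sum((data - x) ** 2, axis=1)       # squared distances to data points
    w = np.exp(-(d2 - d2.min()) / (2 * t))     # numerically stable softmax weights
    w /= w.sum()
    return (w @ (data - x)) / t                # weighted pull toward the data

def inertial_final_step(x_prev, x_curr, alpha=1.0):
    """Hypothetical inertial update: extrapolate the last denoising
    displacement instead of stopping exactly on a training point."""
    return x_curr + alpha * (x_curr - x_prev)
```

The point of the sketch: as $t \to 0$ the score drives the iterate onto the nearest training point (memorization), whereas a final inertial extrapolation overshoots it, producing a sample that differs from every training point.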
📝 Abstract
Diffusion models are a popular computational tool for generating new data samples. They use a forward diffusion process that adds noise to the data distribution and a reverse process that removes the noise to produce samples from the data distribution. However, when the empirical data distribution consists of $n$ data points, the empirical diffusion model necessarily reproduces one of the existing data points. This is often referred to as the memorization effect, which is usually addressed by sophisticated machine learning procedures in the current literature. This work shows that the memorization problem can be resolved by a simple inertial update step at the end of the empirical diffusion model simulation. Our inertial diffusion model requires only the empirical diffusion model score function and no further training. We show that the inertial diffusion model sample distribution is an $O\left(n^{-\frac{2}{d+4}}\right)$ Wasserstein-1 approximation of a data distribution lying on a $C^2$ manifold of dimension $d$. Since this estimate is significantly smaller than the Wasserstein-1 distance between the population and empirical distributions, it rigorously shows that the inertial diffusion model produces new data samples. Remarkably, this upper bound is completely free of the ambient space dimension, since no training is involved. Our analysis uses the fact that the inertial diffusion model samples are approximately distributed as a Gaussian kernel density estimator on the manifold. This reveals an interesting connection between diffusion models and manifold learning.
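The connection to kernel density estimation can be illustrated with a standard fact: sampling from a Gaussian KDE is the same as picking a training point uniformly at random and jittering it with Gaussian noise, so every sample almost surely differs from every training point. This sketch is illustrative only; the function name and `bandwidth` parameter are assumptions, and the bandwidth rate $h \propto n^{-1/(d+4)}$ is the classical optimal choice for a $d$-dimensional density, consistent with the abstract's $O(n^{-2/(d+4)})$ Wasserstein-1 rate.

```python
import numpy as np

def sample_gaussian_kde(data, bandwidth, size, seed=None):
    """Draw `size` samples from the Gaussian KDE of `data`:
    choose a training point uniformly, then add isotropic Gaussian
    noise of scale `bandwidth` (classically ~ n**(-1/(d+4)))."""
    rng = np.random.default_rng(seed)
    n, D = data.shape
    idx = rng.integers(n, size=size)           # uniform mixture component
    noise = bandwidth * rng.standard_normal((size, D))
    return data[idx] + noise                   # jittered training points
```

Because the Gaussian jitter is continuous, the probability that a sample coincides with any of the $n$ training points is zero, which is the sense in which KDE-like samples escape memorization.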