🤖 AI Summary
This paper investigates gradient descent dynamics for overparameterized score matching in learning Gaussian and Gaussian mixture distributions. The theoretical analysis centers on how the noise scale, model overparameterization, and initialization strategy affect optimization behavior. Our contributions are threefold: (1) we establish, for the first time within the score matching framework, global convergence of gradient descent for Gaussian mixture models with at least three components; (2) we characterize the critical role of the noise scale, proving global convergence under large noise while deriving precise convergence conditions and constructing explicit divergence counterexamples in the low-noise regime; (3) we show that small initialization guarantees parameter convergence, whereas random initialization, though it causes some parameters to diverge, still yields loss convergence at rate $1/\tau$ almost surely, with a matching lower bound. Collectively, these results provide a unified characterization of multiscale optimization dynamics and implicit generalization bias.
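For context, the score matching objective referred to above is standardly written, for a model score $s_\theta$ and data distribution $p$, as the population (explicit) score matching loss (the notation here is generic and not taken from the paper, which may include an additional noise-scale parameter):

$$
\mathcal{L}(\theta) \;=\; \mathbb{E}_{x \sim p}\!\left[\,\big\| s_\theta(x) - \nabla_x \log p(x) \big\|^2\,\right],
$$

so that $\mathcal{L}(\theta) = 0$ exactly when the student score matches the ground-truth score $p$-almost everywhere.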
📝 Abstract
Score matching has become a central training objective in modern generative modeling, particularly in diffusion models, where it is used to learn high-dimensional data distributions through the estimation of score functions. Despite its empirical success, the theoretical understanding of the optimization behavior of score matching, particularly in overparameterized regimes, remains limited. In this work, we study gradient descent for training overparameterized models to learn a single Gaussian distribution. Specifically, we use a student model with $n$ learnable parameters and train it on data generated from a single ground-truth Gaussian using the population score matching objective. We analyze the optimization dynamics under multiple regimes. When the noise scale is sufficiently large, we prove a global convergence result for gradient descent. In the low-noise regime, we identify the existence of a stationary point, highlighting the difficulty of proving global convergence in this case. Nevertheless, we show convergence under certain initialization conditions: when the parameters are initialized to be exponentially small, gradient descent ensures convergence of all parameters to the ground truth. We further prove that without exponentially small initialization, the parameters may fail to converge to the ground truth. Finally, we consider the case where the parameters are randomly initialized from a Gaussian distribution far from the ground truth. We prove that, with high probability, only one parameter converges while the others diverge, yet the loss still converges to zero at a $1/\tau$ rate, where $\tau$ is the number of iterations. We also establish a nearly matching lower bound on the convergence rate in this regime. This is the first work to establish global convergence guarantees for Gaussian mixtures with at least three components under the score matching framework.
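The setup in the abstract (an overparameterized student trained by gradient descent on a score matching loss for a single ground-truth Gaussian) can be illustrated with a minimal numerical sketch. This is a deliberately simplified stand-in, not the paper's student model: here the ground truth is a standard Gaussian, whose score is $s^*(x) = -x$, and the student is a linear-in-parameters score model with redundant features (two copies of $x$), so the parameterization is overparameterized and the loss is minimized on a whole subspace of parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from the ground-truth standard Gaussian; its score is s*(x) = -x.
x = rng.standard_normal(5000)

# Overparameterized linear-in-parameters student score:
# s_theta(x) = theta_0 * x + theta_1 * (0.5 x) + theta_2 * x^3.
# The first two features are collinear, so the minimizer is non-unique
# (a toy form of overparameterization; the paper's model differs).
phi = np.stack([x, 0.5 * x, x**3], axis=1)      # (N, n) feature matrix, n = 3
target = -x                                      # ground-truth score values

theta = 0.01 * rng.standard_normal(3)            # small initialization
lr = 0.01
losses = []
for t in range(3000):
    resid = phi @ theta - target                 # s_theta(x) - s*(x)
    losses.append(np.mean(resid**2))             # empirical score matching loss
    grad = 2 * phi.T @ resid / len(x)            # gradient of the Monte Carlo loss
    theta -= lr * grad

print(losses[0], losses[-1])                     # loss drops toward zero
```

Because the ground-truth score lies in the span of the features, the empirical loss converges to zero even though the parameter vector itself is not uniquely determined, loosely mirroring the paper's theme that the loss can converge while individual parameters need not.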