Convergence Dynamics of Over-Parameterized Score Matching for a Single Gaussian

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the gradient descent dynamics of over-parameterized score matching for learning a single Gaussian distribution with a Gaussian-mixture student model. The theoretical analysis centers on how the noise scale, the degree of over-parameterization, and the initialization strategy affect optimization behavior. Its contributions are threefold: (1) it establishes, for the first time within the score matching framework, global convergence of gradient descent for Gaussian mixture models with at least three components; (2) it characterizes the critical role of the noise scale, proving global convergence under large noise while deriving precise convergence conditions and explicit divergence counterexamples in the low-noise regime; (3) it shows that exponentially small initialization guarantees parameter convergence, whereas random initialization, although it causes some parameters to diverge, still yields loss convergence at rate $1/\tau$ almost surely, with a nearly matching lower bound. Collectively, these results provide a unified characterization of multiscale optimization dynamics and implicit bias.

📝 Abstract
Score matching has become a central training objective in modern generative modeling, particularly in diffusion models, where it is used to learn high-dimensional data distributions through the estimation of score functions. Despite its empirical success, the theoretical understanding of the optimization behavior of score matching, particularly in over-parameterized regimes, remains limited. In this work, we study gradient descent for training over-parameterized models to learn a single Gaussian distribution. Specifically, we use a student model with $n$ learnable parameters and train it on data generated from a single ground-truth Gaussian using the population score matching objective. We analyze the optimization dynamics under multiple regimes. When the noise scale is sufficiently large, we prove a global convergence result for gradient descent. In the low-noise regime, we identify the existence of a stationary point, highlighting the difficulty of proving global convergence in this case. Nevertheless, we show convergence under certain initialization conditions: when the parameters are initialized to be exponentially small, gradient descent ensures convergence of all parameters to the ground truth. We further prove that without the exponentially small initialization, the parameters may not converge to the ground truth. Finally, we consider the case where parameters are randomly initialized from a Gaussian distribution far from the ground truth. We prove that, with high probability, only one parameter converges while the others diverge, yet the loss still converges to zero with a $1/\tau$ rate, where $\tau$ is the number of iterations. We also establish a nearly matching lower bound on the convergence rate in this regime. This is the first work to establish global convergence guarantees for Gaussian mixtures with at least three components under the score matching framework.
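For context, the population score matching objective referenced above has the standard form below; with a single-Gaussian ground truth the target score is linear in $x$. The parameterization of the student score $s_\theta$ and the exact role of the noise scale are not spelled out on this page, so the display is a generic sketch rather than the paper's precise loss.

$$
\mathcal{L}(\theta) \;=\; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\tfrac{1}{2}\,\bigl\|\, s_\theta(x) - \nabla_x \log p_{\text{data}}(x) \,\bigr\|^2\right],
\qquad
\nabla_x \log p_{\text{data}}(x) = -\frac{x - \mu^*}{\sigma^2}
\;\text{ when } p_{\text{data}} = \mathcal{N}(\mu^*, \sigma^2 I).
$$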
Problem

Research questions and friction points this paper is trying to address.

Analyzes gradient descent dynamics for over-parameterized score matching on a single Gaussian.
Proves global convergence under large noise or small initialization, and identifies divergence risks otherwise.
Shows loss can converge to zero even when most parameters diverge under random initialization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Over-parameterized student model learns a single Gaussian via score matching (a toy sketch follows this list)
Global convergence proven under large noise or exponentially small initialization
Random Gaussian initialization leads to convergence of only one parameter while the others diverge, yet the loss still converges
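Below is a minimal numerical sketch of the setup described above, assuming (since this page does not specify the parameterization) a one-dimensional student that is a uniform mixture of $n$ Gaussians with learnable means and fixed variance, trained by gradient descent on a Monte Carlo approximation of the population score matching loss for a ground-truth $\mathcal{N}(0, \sigma^2)$. All names and hyperparameters are illustrative, not taken from the paper.

```python
# Toy sketch (not the paper's exact setup): gradient descent on a Monte Carlo
# approximation of the population score matching loss, with an n-component
# Gaussian-mixture student (learnable means, fixed variance, uniform weights)
# learning a single 1-D Gaussian N(0, sigma^2).
import torch

torch.manual_seed(0)
n, sigma, steps, lr = 8, 1.0, 2000, 0.05
x = sigma * torch.randn(20000)           # samples from the ground-truth N(0, sigma^2)
true_score = -x / sigma**2               # score of N(0, sigma^2)

w = torch.randn(n, requires_grad=True)   # random initialization of the component means

for t in range(steps):
    # responsibilities of each mixture component at every sample point
    logits = -(x[:, None] - w[None, :])**2 / (2 * sigma**2)
    resp = torch.softmax(logits, dim=1)
    # score of the student mixture: sum_i resp_i(x) * (w_i - x) / sigma^2
    student_score = (resp * (w[None, :] - x[:, None])).sum(dim=1) / sigma**2
    loss = 0.5 * ((student_score - true_score)**2).mean()
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad
        w.grad.zero_()

print(f"final loss ~ {loss.item():.2e}, learned means: {w.detach().numpy().round(2)}")
```

Inspecting `w` after training shows whether the learned means concentrate near the ground-truth mean or drift away, the contrast drawn in the bullets above; the fixed Monte Carlo sample stands in for the population expectation that the paper analyzes exactly.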
Yiran Zhang
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
Weihang Xu
University of Washington, Seattle, WA 98105, USA
Mo Zhou
University of Washington, Seattle, WA 98105, USA
Maryam Fazel
Moorthy Family Professor of Electrical and Computer Engineering, University of Washington
Optimization, Machine Learning, Control, Signal Processing
Simon Shaolei Du
Associate Professor, School of Computer Science and Engineering, University of Washington
Machine Learning