🤖 AI Summary
This work investigates the stability, forgetting properties, and error-propagation mechanisms of score-based generative models during long-time sampling. By analyzing the Markov chain associated with the reverse diffusion process, the authors develop a theoretical framework that combines Lyapunov drift conditions with Doeblin-type minorization conditions to characterize how initialization and discretization errors propagate. Under relatively mild assumptions, they establish quantitative bounds on the sampling error, revealing the contractive nature of the reverse diffusion process and the stochastic dynamics underlying it. These results provide rigorous stability guarantees and a solid theoretical foundation for score-based generative models.
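To make the contraction mechanism concrete, here is a minimal sketch of the kind of bound such an analysis typically yields; the notation ($e_k$ for the error after $k$ reverse steps, $\rho$ for the contraction rate, $\delta_k$ for the per-step discretization and score error) is illustrative and not taken from the paper:

$$
e_{k+1} \le \rho\, e_k + \delta_k
\quad\Longrightarrow\quad
e_N \le \rho^N e_0 + \sum_{k=0}^{N-1} \rho^{\,N-1-k}\,\delta_k
\le \rho^N e_0 + \frac{\sup_k \delta_k}{1-\rho},
\qquad \rho \in (0,1).
$$

In a bound of this shape, the initialization error $e_0$ is forgotten geometrically, while per-step errors accumulate only up to the constant factor $1/(1-\rho)$ rather than growing with the length of the sampling trajectory.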
📝 Abstract
Understanding the stability and long-time behavior of generative models is a fundamental problem in modern machine learning. This paper provides quantitative bounds on the sampling error of score-based generative models by leveraging the stability and forgetting properties of the Markov chain associated with the reverse-time dynamics. Under weak assumptions, we identify two structural properties that control how initialization and discretization errors propagate along the backward process: a Lyapunov drift condition and a Doeblin-type minorization condition. A practical consequence is quantitative stability of the sampling procedure, as the reverse diffusion dynamics induce a contraction mechanism along the sampling trajectory. Our results clarify the role of stochastic dynamics in score-based models and provide a principled framework for analyzing error propagation in such approaches.
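For readers unfamiliar with the two conditions named above, their textbook Harris-type forms are sketched below for a Markov kernel $P$ (here standing in for one step of the time-discretized reverse chain); the paper's precise assumptions may differ:

$$
\textbf{(Drift)}\quad P V(x) \le \lambda V(x) + b, \qquad V:\mathbb{R}^d \to [1,\infty),\ \lambda \in (0,1),\ b < \infty,
$$

$$
\textbf{(Minorization)}\quad P(x,\cdot) \ge \varepsilon\, \nu(\cdot) \quad \text{for all } x \in C = \{V \le R\},
$$

where $\varepsilon \in (0,1]$, $\nu$ is a probability measure, and $R$ is a suitable level for the sublevel set $C$. By Harris' theorem, these two conditions together imply geometric ergodicity: the chain forgets its initialization at a geometric rate, which is the contraction mechanism invoked in the abstract.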