AI Summary
This study investigates the robustness of Langevin dynamics to estimation errors in the score function. We provide the first rigorous proof that, in high-dimensional settings, the distribution generated by Langevin dynamics can remain far from the target distribution in total variation distance even when the L^p error of the estimated score is arbitrarily small, and that this discrepancy cannot be eliminated within any polynomial time horizon. Combining tools from probability theory, high-dimensional statistics, and L^p error analysis, the work exposes the intrinsic sensitivity of Langevin-based sampling to score-estimation inaccuracies, thereby calling its practical reliability into question and offering theoretical justification for the empirical advantage of diffusion models over Langevin methods.
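For orientation, the sampling scheme and error criterion referred to above can be written as follows. The notation here is illustrative and ours, not the paper's: s denotes the estimated score, h the step size, and p the target distribution.

```latex
% Illustrative notation (ours, not the paper's): unadjusted Langevin iteration
% driven by an estimated score s(x) approximating grad log p(x), step size h > 0.
\[
  x_{k+1} = x_k + h\, s(x_k) + \sqrt{2h}\, \xi_k,
  \qquad \xi_k \sim \mathcal{N}(0, I_d).
\]
% L^2 (more generally L^p) score-estimation error, measured under the target p:
\[
  \varepsilon_2^2 = \mathbb{E}_{x \sim p}\!\left[ \big\| s(x) - \nabla \log p(x) \big\|^2 \right].
\]
```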
Abstract
We consider the robustness of score-based generative modeling to errors in the estimate of the score function. In particular, we show that Langevin dynamics is not robust to L^2 errors (more generally, L^p errors) in the estimate of the score function. It is well established that with small L^2 errors in the estimated score, diffusion models can sample faithfully from the target distribution under fairly mild regularity assumptions in a polynomial time horizon. In contrast, our work shows that even for simple distributions in high dimensions, Langevin dynamics run for any polynomial time horizon will produce a distribution far from the target distribution in total variation (TV) distance, even when the L^2 (more generally, L^p) error of the estimated score is arbitrarily small. Since such errors are unavoidable in practice when the score function is learned from data, our results provide further justification for diffusion models over Langevin dynamics and serve to caution against the use of Langevin dynamics with estimated scores.
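As a rough illustration of the setting (not code from the paper), the sketch below runs unadjusted Langevin dynamics on a standard Gaussian target using an estimated score whose L^2 error under the target can be made arbitrarily small. The target, perturbation, step size, dimension, and horizon are all assumptions chosen for illustration; the toy perturbation here only shifts the stationary mean slightly, whereas the paper constructs distributions for which such small errors force a large TV gap.

```python
import numpy as np

# Minimal sketch (not from the paper): unadjusted Langevin dynamics on a
# standard Gaussian target in R^d, driven by an estimated score whose L^2
# error under the target is a small constant delta. All concrete choices
# (dimension, perturbation, step size, horizon) are illustrative assumptions.

rng = np.random.default_rng(0)
d, m = 10, 20_000        # dimension, number of independent chains
h, n_steps = 1e-2, 1000  # step size, number of Langevin steps
delta = 0.05             # size of the score perturbation

def true_score(x):
    # For p = N(0, I_d): grad log p(x) = -x.
    return -x

def estimated_score(x):
    # Hypothetical estimate: the true score plus a constant bias delta * e_1.
    # Its L^2 error under p is exactly delta, which can be made arbitrarily small.
    bias = np.zeros(d)
    bias[0] = delta
    return -x + bias

x = rng.standard_normal((m, d))  # start every chain from the target itself
for _ in range(n_steps):
    noise = rng.standard_normal((m, d))
    x = x + h * estimated_score(x) + np.sqrt(2 * h) * noise

print("empirical mean of first coordinate:", x[:, 0].mean())  # drifts toward ~delta
print("target mean of first coordinate:    0.0")
# In this toy example the bias merely shifts the stationary mean by about delta,
# a negligible TV change. The paper's contribution is a construction in which an
# arbitrarily small L^p score error forces the Langevin output to remain far from
# the target in TV over every polynomial time horizon.
```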