On the Convergence of Wasserstein Gradient Descent for Sampling

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a nonparametric sampling method based on Wasserstein gradient descent to address the limitations of traditional Markov chain Monte Carlo (MCMC) and variational inference in high-dimensional or multimodal distributions. By directly optimizing the Kullback–Leibler (KL) functional over the space of probability measures and leveraging particle approximations together with score matching, the approach enables efficient sampling. Theoretically, this study establishes, for the first time, convergence guarantees for Wasserstein gradient descent within two important subclasses of probability measures, thereby providing a novel theoretical foundation for optimization-driven sampling. Empirical results demonstrate that the proposed method significantly outperforms standard MCMC and parametric variational Bayesian approaches in both high-dimensional and multimodal settings.

Technology Category

Application Category

📝 Abstract
This paper studies the optimization of the KL functional on the Wasserstein space of probability measures, and develops a sampling framework based on Wasserstein gradient descent (WGD). We identify two important subclasses of the Wasserstein space for which the WGD scheme is guaranteed to converge, thereby providing new theoretical foundations for optimization-based sampling methods on measure spaces. For practical implementation, we construct a particle-based WGD algorithm in which the score function is estimated via score matching. Through a series of numerical experiments, we demonstrate that WGD can provide good approximation to a variety of complex target distributions, including those that pose substantial challenges for standard MCMC and parametric variational Bayes methods. These results suggest that WGD offers a promising and flexible alternative for scalable Bayesian inference in high-dimensional or multimodal settings.
Problem

Research questions and friction points this paper is trying to address.

Wasserstein gradient descent
KL functional
sampling
probability measures
Bayesian inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wasserstein gradient descent
convergence analysis
score matching
optimization-based sampling
Bayesian inference
🔎 Similar Papers
No similar papers found.
V
Van Chien Ta
VNU University of Science, Hanoi, Vietnam
T
Thi Mai Hong Chu
Vin University, Hanoi, Vietnam
Minh-Ngoc Tran
Minh-Ngoc Tran
Associate Professor, University of Sydney
Bayesian ComputationStatistical Machine LearningFinancial EconometricsExperimental Psychology