🤖 AI Summary
This work addresses efficient sampling from smooth, strongly log-concave distributions. We study the Parallel Randomized Midpoint (PRM) method, a parallelized variant of the randomized midpoint scheme, as an acceleration of the Langevin Monte Carlo (LMC) algorithm. PRM couples Langevin dynamics with midpoint discretization and enables parallel gradient evaluations, computing multiple candidate points simultaneously per iteration. Theoretically, leveraging recently developed Wasserstein-distance analysis techniques for the sequential version, we establish an explicit dependence of the convergence error bound on parallelism: $W_2(\mu_k, \pi) \leq C \cdot (1-\alpha)^k + O(1/\sqrt{P})$, where $P$ denotes the degree of parallelism. These bounds quantify substantial wall-clock speedups obtained through parallel processing while preserving rigorous convergence guarantees.
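To make the iteration concrete, here is a schematic NumPy sketch of one parallelized randomized-midpoint LMC step for a standard Gaussian target. This is a simplification under stated assumptions, independent Brownian increments at the midpoints and a plain average of the $P$ midpoint drifts, not the paper's exact coupling; `grad_log_density`, `prm_step`, and the step size `h` are illustrative choices, not names from the paper.

```python
import numpy as np

def grad_log_density(x):
    # Illustrative strongly log-concave target: standard Gaussian N(0, I),
    # whose log-density gradient is simply -x.
    return -x

def prm_step(x, h, P, rng):
    """One simplified parallel randomized-midpoint LMC step.

    Draws P random midpoint times, forms P candidate midpoints, and averages
    their drifts. In a real implementation the P gradient evaluations would
    run concurrently on separate workers; here they are vectorized.
    """
    d = x.shape[0]
    u = rng.uniform(0.0, 1.0, size=P)             # random midpoint times in (0, 1)
    z_mid = rng.standard_normal((P, d))           # Brownian increments to midpoints
    x_mid = (x + u[:, None] * h * grad_log_density(x)
             + np.sqrt(2.0 * u[:, None] * h) * z_mid)
    drift = grad_log_density(x_mid).mean(axis=0)  # averaged midpoint drift
    z = rng.standard_normal(d)                    # Brownian increment, full step
    return x + h * drift + np.sqrt(2.0 * h) * z

# Run the chain on a 4-dimensional target and collect post-burn-in samples.
rng = np.random.default_rng(0)
x = np.full(4, 5.0)                               # start away from the mode
samples = []
for i in range(5000):
    x = prm_step(x, h=0.1, P=8, rng=rng)
    if i >= 1000:
        samples.append(x.copy())
S = np.asarray(samples)
```

For this Gaussian target the chain's empirical mean and variance settle near 0 and 1, up to a discretization bias of order $h$.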
📝 Abstract
We study the problem of sampling from a target probability density in frameworks where parallel evaluations of the log-density gradient are feasible. Focusing on smooth and strongly log-concave densities, we revisit the parallelized randomized midpoint method and investigate its properties using recently developed techniques for analyzing its sequential version. Through these techniques, we derive upper bounds on the Wasserstein distance between the sampling and target densities. These bounds quantify the substantial runtime improvements achieved through parallel processing.