Quantum Speedups for Markov Chain Monte Carlo Methods with Application to Optimization

📅 2025-04-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper proposes the first quantum-accelerated algorithmic framework with rigorous theoretical guarantees for two fundamental computational problems: (1) Markov chain Monte Carlo (MCMC) sampling from probability distributions of the form π ∝ e^{-f}, and (2) empirical risk minimization (ERM) over nonsmooth, approximately convex functions. Methodologically, it integrates quantum stochastic gradient estimation, quantum Gibbs sampling, and phase estimation to design novel quantum variants of Hamiltonian and Langevin Monte Carlo under a stochastic potential oracle and a two-point joint query model. Key contributions include: (1) the first polynomial quantum speedup in dimension *d*, accuracy *ε*, and Lipschitz constant *L*; (2) a two-point synchronous quantum stochastic gradient estimator that substantially reduces gradient/function query complexity; and (3) provable quantum acceleration for nonsmooth ERM, achieving an Ω(√*n*) improvement in convergence rate over classical optimal algorithms.

๐Ÿ“ Abstract
We propose quantum algorithms that provide provable speedups for Markov Chain Monte Carlo (MCMC) methods commonly used for sampling from probability distributions of the form $\pi \propto e^{-f}$, where $f$ is a potential function. Our first approach considers Gibbs sampling for finite-sum potentials in the stochastic setting, employing an oracle that provides gradients of individual functions. In the second setting, we consider access only to a stochastic evaluation oracle, allowing simultaneous queries at two points of the potential function under the same stochastic parameter. By introducing novel techniques for stochastic gradient estimation, our algorithms improve the gradient and evaluation complexities of classical samplers, such as Hamiltonian Monte Carlo (HMC) and Langevin Monte Carlo (LMC), in terms of dimension, precision, and other problem-dependent parameters. Furthermore, we achieve quantum speedups in optimization, particularly for minimizing non-smooth and approximately convex functions that commonly appear in empirical risk minimization problems.
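For orientation, the classical baseline that the paper accelerates can be sketched as follows. This is a minimal implementation of unadjusted Langevin Monte Carlo targeting $\pi \propto e^{-f}$, not the paper's quantum algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def langevin_monte_carlo(grad_f, x0, step_size, n_steps, rng=None):
    """Unadjusted LMC: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2*eta) * xi,
    with xi ~ N(0, I). Iterates approximately sample pi proportional to e^{-f}."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * xi
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x.
samples = langevin_monte_carlo(lambda x: x, x0=np.zeros(2),
                               step_size=0.01, n_steps=5000)
```

The paper's quantum samplers reduce how many gradient (or potential-evaluation) queries such a chain needs, rather than changing the update rule itself.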
Problem

Research questions and friction points this paper is trying to address.

Quantum speedups for MCMC sampling methods
Improved gradient and evaluation complexities
Quantum optimization for non-smooth convex functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum Gibbs sampling with gradient oracles
Stochastic gradient estimation via dual-point queries
Dimension-precision optimized quantum MCMC acceleration
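The two-point query model above permits evaluating the potential at two points under the same stochastic parameter. A classical sketch of the resulting symmetric-difference gradient estimator is below; the paper's quantum estimator processes such queries coherently, and all names here are illustrative, with a shared random seed standing in for the common stochastic parameter.

```python
import numpy as np

def two_point_gradient(f, x, delta=1e-4, n_dirs=1, rng=None):
    """Randomized finite-difference gradient estimate from paired queries.

    Both evaluations in each pair use the same stochastic parameter (here a
    shared seed), mirroring the two-point joint query model: the common
    randomness cancels in the difference, reducing estimator variance.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_dirs):
        v = rng.standard_normal(x.shape)          # random direction
        seed = int(rng.integers(2**32))           # shared stochastic parameter
        fp = f(x + delta * v, seed)
        fm = f(x - delta * v, seed)               # same seed: same randomness
        g += (fp - fm) / (2.0 * delta) * v        # E[(v . grad f) v] = grad f
    return g / n_dirs

# Example with a deterministic quadratic f(x) = ||x||^2 / 2 (seed ignored):
f = lambda x, seed: 0.5 * float(x @ x)
g = two_point_gradient(f, np.array([1.0, 2.0]), n_dirs=4000)
```

Averaging over random directions makes the estimate unbiased for smooth $f$, since the outer product of a standard Gaussian direction with itself has identity expectation.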
Guneykan Ozgul
Department of Computer Science and Engineering, Pennsylvania State University
Xiantao Li
Department of Mathematics, Pennsylvania State University
Mehrdad Mahdavi
Hartz Family Associate Professor of Computer Science @ Penn State
Machine Learning · Optimization Theory · Learning Theory
Chunhao Wang
Department of Computer Science and Engineering, Pennsylvania State University