🤖 AI Summary
This work addresses the slow convergence of Hamiltonian Monte Carlo (HMC) when sampling from high-dimensional continuous probability distributions, which stems from the absence of an efficient warm-start mechanism. The authors propose a two-stage strategy: first run non-Metropolized HMC to rapidly generate a high-quality initial point, then switch to Metropolized HMC for accurate sampling. Under strong log-concavity and third-derivative smoothness assumptions, this approach produces a warm start in only $\widetilde{O}(d^{1/4})$ iterations, making it the first method to close the dimension-dependence gap for HMC in this setting. Consequently, the overall sampling complexity is $\widetilde{O}(d^{1/4})$, significantly improving upon the previous best bound of $\widetilde{O}(d^{1/2})$, and the two-stage scheme doubles as a simple, effective warm-start recipe for practical HMC implementations.
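To make the two-stage strategy concrete, here is a minimal NumPy sketch: the same leapfrog-based HMC kernel is run first without a Metropolis correction (stage 1, to produce a warm start) and then with it (stage 2, for accurate sampling). The step size, trajectory length, iteration counts, and the isotropic Gaussian target are illustrative choices for this sketch, not the paper's tuned parameters.

```python
import numpy as np

def leapfrog(x, p, grad_logp, step, n_steps):
    """Standard leapfrog integrator for Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_logp(x)          # initial half momentum step
    for _ in range(n_steps - 1):
        x = x + step * p                        # full position step
        p = p + step * grad_logp(x)             # full momentum step
    x = x + step * p
    p = p + 0.5 * step * grad_logp(x)           # final half momentum step
    return x, p

def hmc(x0, logp, grad_logp, step, n_steps, n_iters, metropolize, rng):
    """Run HMC; metropolize=False gives the non-Metropolized variant."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        p0 = rng.standard_normal(x.shape)       # resample momentum
        x_new, p_new = leapfrog(x, p0, grad_logp, step, n_steps)
        if metropolize:
            # Accept with probability exp(H_old - H_new), where
            # H(x, p) = -log p(x) + |p|^2 / 2 is the Hamiltonian.
            h_old = -logp(x) + 0.5 * p0 @ p0
            h_new = -logp(x_new) + 0.5 * p_new @ p_new
            if np.log(rng.uniform()) < h_old - h_new:
                x = x_new
        else:
            x = x_new                           # always accept (unadjusted)
    return x

# Illustrative run on a standard Gaussian target in d = 50 dimensions.
rng = np.random.default_rng(0)
d = 50
logp = lambda x: -0.5 * x @ x
grad = lambda x: -x
x = hmc(np.full(d, 5.0), logp, grad, step=0.2, n_steps=10,
        n_iters=200, metropolize=False, rng=rng)  # stage 1: warm start
x = hmc(x, logp, grad, step=0.2, n_steps=10,
        n_iters=200, metropolize=True, rng=rng)   # stage 2: accurate sampling
```

The point of the switch is that the unadjusted chain mixes quickly toward the bulk of the target (at the cost of discretization bias), after which the Metropolis filter removes that bias without paying the slow-from-cold-start penalty.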
📝 Abstract
Generating samples from a continuous probability density is a central algorithmic problem across statistics, engineering, and the sciences. For high-dimensional settings, Hamiltonian Monte Carlo (HMC) is the default algorithm across mainstream software packages. However, despite the extensive line of work on HMC and its widespread empirical success, it remains unclear how many iterations of HMC are required as a function of the dimension $d$. On one hand, a variety of results show that Metropolized HMC converges in $O(d^{1/4})$ iterations from a warm start close to stationarity. On the other hand, Metropolized HMC is significantly slower without a warm start, e.g., requiring $\Omega(d^{1/2})$ iterations even for simple target distributions such as isotropic Gaussians. Finding a warm start is therefore the computational bottleneck for HMC.
We resolve this issue for the well-studied setting of sampling from a probability distribution satisfying strong log-concavity (or isoperimetry) and third-order derivative bounds. We prove that \emph{non-Metropolized} HMC generates a warm start in $\widetilde{O}(d^{1/4})$ iterations, after which we can exploit the warm start using Metropolized HMC. Our final complexity of $\widetilde{O}(d^{1/4})$ yields the fastest algorithm for high-accuracy sampling under these assumptions, improving over the prior best of $\widetilde{O}(d^{1/2})$. This closes the long line of work on the dimension dependence of Metropolized HMC in such settings, and also provides a simple warm-start prescription for practical implementations.