Faster parallel MCMC: Metropolis adjustment is best served warm

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the slow convergence of parallel Markov chain Monte Carlo (MCMC) methods during the cold-start phase, which limits their ability to leverage multi-chain parallelism effectively. To overcome this challenge, the authors propose the Late Adjusted Parallel Sampler (LAPS), a two-stage parallel MCMC algorithm that requires no manual hyperparameter tuning. LAPS first employs a rapid warm-up phase without Metropolis adjustment to efficiently approximate the target distribution, then automatically triggers Metropolis–Hastings correction and step-size adaptation based on an adaptive bias estimator to ensure asymptotic correctness. Empirical evaluations on standard benchmarks demonstrate that LAPS significantly outperforms state-of-the-art methods such as MEADS, ChESS, and Pathfinder, and achieves nearly two orders of magnitude speedup over serial algorithms like NUTS while maintaining sampling accuracy.

📝 Abstract
Despite the enormous success of Hamiltonian Monte Carlo and related Markov chain Monte Carlo (MCMC) methods, sampling often still represents the computational bottleneck in scientific applications. The availability of parallel resources can significantly speed up MCMC inference by running a large number of chains in parallel, each collecting a single sample. However, the parallel approach converges slowly if the chains are not initialized close to the target distribution (cold start). In theory this can be resolved by initially running MCMC without Metropolis-Hastings adjustment to quickly converge to the vicinity of the target distribution, and then turning on adjustment to achieve fine convergence. In practice, however, no scheme uses this strategy, due to the difficulty of automatically selecting the step size during the unadjusted phase. Here we develop the Late Adjusted Parallel Sampler (LAPS), which is precisely such a scheme and is applicable out of the box: all hyperparameters are selected automatically. LAPS takes advantage of ensemble-based hyperparameter adaptation to estimate the bias at each iteration and convert it to an appropriate step size. We show that LAPS consistently and significantly outperforms ensemble adjusted methods such as MEADS and ChESS, as well as the optimization-based initializer Pathfinder, on a variety of standard benchmark problems. LAPS typically achieves two orders of magnitude lower wall-clock time than corresponding sequential algorithms such as NUTS.
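The two-stage idea in the abstract can be illustrated with a minimal NumPy sketch: run many parallel chains with unadjusted Langevin dynamics to escape a cold start quickly, then switch on the Metropolis-Hastings correction (MALA) for asymptotic correctness. This is not the authors' LAPS algorithm: the adaptive bias estimator and automatic step-size selection are replaced by fixed placeholder step sizes, and the target is a toy standard Gaussian.

```python
# Two-stage "late adjustment" sketch: unadjusted Langevin warm-up, then
# Metropolis-adjusted Langevin (MALA). Placeholder step sizes stand in for
# the paper's automatic, bias-driven step-size selection.
import numpy as np

def logp(x):
    """Log-density of a standard Gaussian target (up to a constant)."""
    return -0.5 * np.sum(x**2, axis=-1)

def grad_logp(x):
    """Gradient of the log-density."""
    return -x

def two_stage_sampler(n_chains=512, dim=10, warmup=200, adjusted=200,
                      step_warm=0.5, step_adj=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=5.0, size=(n_chains, dim))  # deliberately cold start

    # Stage 1: unadjusted Langevin -- biased but converges quickly,
    # so a large step size is tolerable here.
    for _ in range(warmup):
        x = x + 0.5 * step_warm**2 * grad_logp(x) \
              + step_warm * rng.normal(size=x.shape)

    # Stage 2: MALA -- the Metropolis-Hastings accept/reject step
    # removes the discretization bias.
    h2 = step_adj**2
    for _ in range(adjusted):
        noise = rng.normal(size=x.shape)
        prop = x + 0.5 * h2 * grad_logp(x) + step_adj * noise
        # Squared proposal residuals for the forward and reverse moves.
        fwd = np.sum((prop - x - 0.5 * h2 * grad_logp(x))**2, axis=-1)
        bwd = np.sum((x - prop - 0.5 * h2 * grad_logp(prop))**2, axis=-1)
        log_alpha = logp(prop) - logp(x) + (fwd - bwd) / (2 * h2)
        accept = np.log(rng.uniform(size=n_chains)) < log_alpha
        x[accept] = prop[accept]
    return x

samples = two_stage_sampler()
print(samples.mean(), samples.var())  # should be near 0 and 1
```

Each of the 512 chains contributes samples in parallel, which is the regime the paper targets; in LAPS itself the switch-over point and step sizes are chosen automatically from an ensemble-based bias estimate rather than fixed in advance.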
Problem

Research questions and friction points this paper is trying to address.

parallel MCMC
cold start
convergence speed
Metropolis-Hastings adjustment
sampling efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

parallel MCMC
Metropolis adjustment
ensemble adaptation
automatic step-size selection
LAPS
Jakob Robnik
Graduate student at UC Berkeley
Computational Statistics · Bayesian Inference · Astrostatistics · Exoplanets
Uroš Seljak
Physics Department, University of California at Berkeley and Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA