Incorporating the ChEES Criterion into Sequential Monte Carlo Samplers

📅 2025-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the low computational efficiency and poor GPU compatibility of sequential Monte Carlo (SMC) samplers in nonparametric Bayesian inference. We propose the first integration of ChEES-HMC—a gradient-based, step-size-adaptive variant of Hamiltonian Monte Carlo—into the SMC framework as the proposal mechanism. This design simultaneously enhances exploration efficiency in high-dimensional posterior spaces and enables fine-grained GPU-level parallelization, overcoming throughput and hardware scalability limitations inherent to the standard No-U-Turn Sampler (NUTS). Empirical evaluation across multiple benchmark tasks demonstrates that our method achieves sampling quality comparable to NUTS while reducing per-iteration runtime by 40–65%. Moreover, it delivers substantially improved GPU speedup, exhibiting superior scalability and computational efficiency—particularly on large-scale nonparametric models such as Dirichlet Process Mixture Models (DPMM) and Hierarchical Dirichlet Processes (HDP).

Technology Category

Application Category

📝 Abstract
Markov chain Monte Carlo (MCMC) methods are a powerful but computationally expensive way of performing non-parametric Bayesian inference. MCMC proposals which utilise gradients, such as Hamiltonian Monte Carlo (HMC), can better explore the parameter space of interest if the additional hyper-parameters are chosen well. The No-U-Turn Sampler (NUTS) is a variant of HMC which is extremely effective at selecting these hyper-parameters but is slow to run and is not suited to GPU architectures. An alternative to NUTS, Change in the Estimator of the Expected Square HMC (ChEES-HMC) was shown not only to run faster than NUTS on GPU but also sample from posteriors more efficiently. Sequential Monte Carlo (SMC) samplers are another sampling method which instead output weighted samples from the posterior. They are very amenable to parallelisation and therefore being run on GPUs while having additional flexibility in their choice of proposal over MCMC. We incorporate (ChEEs-HMC) as a proposal into SMC samplers and demonstrate competitive but faster performance than NUTS on a number of tasks.
Problem

Research questions and friction points this paper is trying to address.

Improving MCMC efficiency with gradient-based proposals
Enhancing GPU-compatible sampling via ChEES-HMC in SMC
Comparing ChEES-SMC performance against NUTS for speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates ChEES-HMC into SMC samplers
Enables faster GPU-compatible posterior sampling
Combines parallelization with efficient proposal choice
🔎 Similar Papers
No similar papers found.