🤖 AI Summary
This work addresses the challenge of assessing how quickly Markov chain Monte Carlo (MCMC) algorithms converge to their stationary distributions. We propose a computable upper bound on the Wasserstein distance between the distribution of the chain after a given number of steps and the stationary distribution, obtained via common random number (CRN) simulation: two chains driven by identical random inputs are coupled, and the average distance between them yields the bound. Theoretical analysis and empirical evaluation on canonical Bayesian settings, including a variance-components model and a model related to James–Stein estimation, show that the CRN-based bound converges to zero substantially faster than classical drift-and-minorization bounds while remaining practical to compute. Being both theoretically grounded and computationally feasible, the bound gives MCMC practitioners a quantitative convergence criterion, moving diagnostics from qualitative heuristics toward rigorous, quantitative assessment.
📝 Abstract
This paper shows how to use common random number (CRN) simulation to evaluate the convergence of Markov chain Monte Carlo (MCMC) algorithms to stationarity. We provide an upper bound on the Wasserstein distance between a Markov chain after $N$ steps and its stationary distribution, expressed in terms of averages over CRN simulations. We apply the bound to Gibbs samplers on a variance component model, a model related to James–Stein estimators, and a Bayesian linear regression model. For the first two examples, we show that the CRN-simulated bound converges to zero significantly faster than available drift and minorization bounds.
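The coupling idea behind the bound can be illustrated on a toy chain. The sketch below (not the paper's examples; a hypothetical AR(1) Gaussian chain chosen because its stationary distribution is known in closed form) runs one chain from a fixed starting point and a second chain started at stationarity, feeds both the same random inputs (the common random numbers), and averages the distance between them over replications. That average upper-bounds the Wasserstein-1 distance of the first chain to stationarity at each step.

```python
import numpy as np

rng = np.random.default_rng(0)

rho = 0.9       # AR(1) coefficient: X_{n+1} = rho * X_n + eps_n
n_steps = 50
n_reps = 10_000  # number of independent coupled-chain replications

# Chain X starts at a fixed point far from stationarity;
# chain Y starts at the stationary distribution N(0, 1/(1 - rho^2)).
x = np.full(n_reps, 10.0)
y = rng.normal(0.0, np.sqrt(1.0 / (1.0 - rho**2)), size=n_reps)

gaps = []
for _ in range(n_steps):
    eps = rng.normal(size=n_reps)  # common random numbers: shared by both chains
    x = rho * x + eps
    y = rho * y + eps
    gaps.append(np.mean(np.abs(x - y)))

# Since Y_n is stationary for all n, E|X_n - Y_n| upper-bounds the
# Wasserstein-1 distance of X_n to stationarity. Under CRN the gap
# contracts deterministically at rate rho for this particular chain.
print(gaps[-1])
```

For this chain the coupled difference satisfies $|X_n - Y_n| = \rho^n |X_0 - Y_0|$, so the simulated bound decays geometrically, which is the kind of behavior the paper's CRN bounds make quantitative for Gibbs samplers.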