AI Summary
This paper identifies a critical flaw in the original Dirichlet-Laplace (DL) prior MCMC algorithm: an incorrect sampling order that prevents valid draws from the joint posterior distribution of the latent variables. To rectify this, we propose two theoretically consistent remedies: (i) correcting the sampling sequence in the original algorithm, and (ii) designing a new algorithm based on an equivalent formulation of the prior that reparameterizes the conditional distributions used for Bayesian inference. Both methods yield valid draws from the exact joint posterior, eliminating the bias inherent in the original implementation while preserving the DL prior's desirable sparsity-inducing and shrinkage properties. Empirical evaluations confirm convergence and statistical validity across diverse settings. Our work establishes a rigorous computational foundation for DL-based Bayesian inference in conditionally Gaussian models.
Abstract
Bhattacharya et al. (2015, Journal of the American Statistical Association 110(512): 1479-1490) introduce a novel prior, the Dirichlet-Laplace (DL) prior, and propose a Markov chain Monte Carlo (MCMC) method to simulate posterior draws under this prior in a conditionally Gaussian setting. The original algorithm samples from the conditional distributions in the wrong order, so it does not correctly sample from the joint posterior distribution of all latent variables. This note details the issue and provides two simple solutions: a correction to the original algorithm, and a new algorithm based on an alternative, yet equivalent, formulation of the prior. This corrigendum does not affect the theoretical results in Bhattacharya et al. (2015).
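The abstract's core point is that composing conditional draws in the wrong order does not produce a draw from the joint distribution. A minimal toy illustration of this principle (not the DL sampler itself, whose conditionals are specific to Bhattacharya et al. 2015): to draw (x, y) jointly by composition, x must come from its marginal before y is drawn from p(y | x); substituting a conditional where a marginal is required distorts the joint.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 200_000

# Target joint: (x, y) bivariate normal with Var(x) = Var(y) = 1
# and Corr(x, y) = rho.

# Correct composition: x from its MARGINAL N(0, 1), then y | x.
x_ok = rng.normal(0.0, 1.0, n)
y_ok = rng.normal(rho * x_ok, np.sqrt(1 - rho**2))

# Flawed composition: treat the conditional p(y | x) (here evaluated
# at x = 0) as if it were the marginal of y, then draw x | y.
# The resulting pair is NOT a draw from the target joint:
# Var(x) becomes 1 - rho**4 instead of 1.
y_bad = rng.normal(0.0, np.sqrt(1 - rho**2), n)
x_bad = rng.normal(rho * y_bad, np.sqrt(1 - rho**2))

print(x_ok.var())   # close to 1.0
print(x_bad.var())  # close to 1 - rho**4, i.e. about 0.344
```

The first composition recovers the target marginal variance of x; the second, which mixes up which factorization is being used, does not, which is the same flavor of error as sampling the DL latent variables in the wrong order.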