🤖 AI Summary
This work proposes a novel posterior reparameterization strategy to address the scalability limitations of existing Bayesian envelope model inference: MCMC samplers suffer from slow mixing and high computational cost, while gradient-based ADVI is numerically unstable because the posterior is severely ill-conditioned. By embedding a Laplace approximation within a coordinate ascent variational inference (CAVI) framework, the approach handles the non-conjugate likelihood that remains after reparameterization and mitigates this instability. Theoretical analysis shows that the per-iteration Laplace approximation error vanishes asymptotically, so the approximate coordinate updates retain statistical accuracy. Empirical evaluations on both simulated and real-world datasets confirm that the method matches existing approaches in estimation accuracy and model selection performance at a substantially reduced computational cost.
📝 Abstract
Envelope models provide a sufficient dimension reduction framework for multivariate regression analysis. Bayesian inference for these models has been developed primarily using Markov chain Monte Carlo (MCMC) methods, but the existing Gibbs sampling and Metropolis-Hastings algorithms suffer from slow mixing and high computational cost. Although automatic differentiation variational inference (ADVI) has been explored for Bayesian envelope models, the resulting gradient-based optimization is often numerically unstable due to severe ill-conditioning of the posterior distribution. To address this issue, we propose a novel reparameterization of the posterior distribution that alleviates the ill-conditioning inherent in conventional variational approaches. Building on this reparameterization, we develop an efficient variational inference procedure. Since the reparameterized likelihood remains non-conjugate, we approximate the corresponding variational factor using a Laplace approximation within a coordinate-ascent variational inference (CAVI) framework. We establish theoretical results showing that, at each coordinate update, the one-step error of the Laplace approximation relative to the exact CAVI update converges to zero. Simulation studies and a real-data analysis demonstrate that the proposed method substantially improves computational efficiency while maintaining estimation accuracy and model-selection performance relative to existing approaches.
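To make the key approximation step concrete: in CAVI, the optimal update for one factor is $q_j^\star(\theta_j) \propto \exp\{\mathbb{E}_{q_{-j}}[\log p(\theta, y)]\}$, and when this density is non-conjugate it has no closed form, so a Laplace step replaces it with a Gaussian centered at its mode with covariance given by the inverse Hessian of the negative log density there. The sketch below is a minimal, generic illustration of that Laplace step only (it uses an arbitrary smooth toy target, numerical derivatives, and hypothetical function names, not the authors' envelope-model posterior or implementation):

```python
import numpy as np

def num_grad(f, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def num_hess(f, x, eps=1e-4):
    """Central finite-difference Hessian of f at x, symmetrized."""
    d = x.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps * eps)
    return 0.5 * (H + H.T)

def laplace_factor(neg_log_density, x0, iters=30):
    """Fit a Gaussian N(mu, Sigma) to exp(-neg_log_density):
    mu is the mode (found by Newton iterations), and Sigma is the
    inverse Hessian of the negative log density at the mode."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - np.linalg.solve(num_hess(neg_log_density, x),
                                num_grad(neg_log_density, x))
    Sigma = np.linalg.inv(num_hess(neg_log_density, x))
    return x, Sigma

# Toy non-conjugate target (stand-in for one expected log joint in CAVI):
# a Gaussian prior term plus a logistic-type likelihood term per coordinate.
neg_logp = lambda x: 0.5 * np.sum(x ** 2) + np.sum(np.log1p(np.exp(-2 * x)))
mu, Sigma = laplace_factor(neg_logp, np.zeros(2))
```

In an actual CAVI loop, a step like `laplace_factor` would be rerun for the non-conjugate factor at every sweep, with `neg_log_density` rebuilt from the current moments of the other factors; the paper's theory concerns how the error of exactly this kind of one-step substitution vanishes asymptotically.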