Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models

📅 2024-10-31
🏛️ Neural Information Processing Systems
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In Bayesian inference, linear mixed-effects models (LMMs) are typically fit with Markov chain Monte Carlo (MCMC) sampling, but explicitly modeling the random effects makes each Hamiltonian Monte Carlo (HMC) step cost cubic time, O(n³). Manually marginalizing the random effects reduces this to O(n), but doing so is error-prone and burdensome for users. This paper introduces an automated, exact, and lossless marginalization algorithm for LMMs that combines HMC with the Woodbury matrix identity (the matrix inversion lemma) and block-wise linear algebra, reducing the core HMC cost to O(n). The authors show that marginalization is beneficial whenever it is applicable. Empirical evaluation on real-world cognitive-science models demonstrates substantially faster convergence and markedly larger effective sample sizes, with no user intervention or reparameterization required.

๐Ÿ“ Abstract
Bayesian reasoning in linear mixed-effects models (LMMs) is challenging and often requires advanced sampling techniques like Markov chain Monte Carlo (MCMC). A common approach is to write the model in a probabilistic programming language and then sample via Hamiltonian Monte Carlo (HMC). However, there are many ways a user can transform a model that make inference more or less efficient. In particular, marginalizing some variables can greatly improve inference but is difficult for users to do manually. We develop an algorithm to easily marginalize random effects in LMMs. A naive approach introduces cubic time operations within an inference algorithm like HMC, but we reduce the running time to linear using fast linear algebra techniques. We show that marginalization is always beneficial when applicable and highlight improvements in various models, especially ones from cognitive sciences.
Problem

Research questions and friction points this paper is trying to address.

Improving Bayesian inference efficiency in linear mixed-effects models
Automating marginalization of random effects to enhance HMC performance
Reducing computational complexity from cubic to linear time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Algorithm for automatically and exactly marginalizing random effects in LMMs
Linear-time computation via the Woodbury identity and fast linear algebra
Demonstrated inference-efficiency gains on cognitive-science models
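The linear-algebra trick the paper relies on can be sketched concretely. For an LMM y = Xβ + Zu + ε with u ~ N(0, Σᵤ) and ε ~ N(0, σ²I), marginalizing u gives y ~ N(Xβ, ZΣᵤZᵀ + σ²I). Evaluating that Gaussian density naively factorizes an n×n covariance at O(n³) cost, while the Woodbury identity and the matrix determinant lemma work with a q×q matrix instead (q = number of random effects), giving O(nq²). A minimal NumPy sketch, not the paper's implementation; function names are illustrative:

```python
import numpy as np

def lmm_marginal_loglik_naive(y, X, beta, Z, Sigma_u, sigma2):
    """Marginal log-likelihood via the full n x n covariance: O(n^3)."""
    n = len(y)
    V = Z @ Sigma_u @ Z.T + sigma2 * np.eye(n)  # marginal covariance of y
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(V, r))

def lmm_marginal_loglik_woodbury(y, X, beta, Z, Sigma_u, sigma2):
    """Same quantity via Woodbury + matrix determinant lemma: O(n q^2)."""
    n, q = Z.shape
    r = y - X @ beta
    # Inner q x q matrix: Sigma_u^{-1} + Z^T Z / sigma2
    A = np.linalg.inv(Sigma_u) + (Z.T @ Z) / sigma2
    Ztr = Z.T @ r
    # Woodbury: r^T V^{-1} r = r^T r / s2 - (Z^T r)^T A^{-1} (Z^T r) / s2^2
    quad = (r @ r) / sigma2 - (Ztr @ np.linalg.solve(A, Ztr)) / sigma2**2
    # Determinant lemma: log|V| = log|A| + log|Sigma_u| + n log sigma2
    _, logdet_A = np.linalg.slogdet(A)
    _, logdet_Su = np.linalg.slogdet(Sigma_u)
    logdet = logdet_A + logdet_Su + n * np.log(sigma2)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
```

Both functions return the same value; only the second avoids any n×n factorization, which is what lets HMC gradient evaluations scale linearly in the number of observations.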