🤖 AI Summary
This study addresses the challenge of modeling dual heterogeneity in longitudinal trajectories, arising at both the group level (fixed effects) and the subject level (random effects), which traditional linear mixed-effects models (LMMs) struggle to capture. To this end, the authors propose a Mixture of Experts for Mixed-Effects models (MEMoE), which integrates the mixture-of-experts architecture with LMMs by making each expert a full LMM representing a distinct latent subgroup. A gating function, driven by baseline covariates, probabilistically allocates individuals to these subgroups. Parameters are estimated via a Laplace-approximated EM algorithm, with standard errors calibrated by a robust sandwich estimator to improve the reliability of inference. Empirical evaluations demonstrate that MEMoE outperforms conventional single-population LMMs and standard mixture-of-experts models in terms of parameter recovery, classification accuracy, and overall model fit.
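Concretely, the likelihood implied by this description is a covariate-dependent mixture of LMM marginals. The display below is a plausible reconstruction under standard notation, not a formula taken from the paper; the authors' exact parameterization may differ:

```latex
% Plausible reconstruction of the MEMoE likelihood (notation assumed):
% y_i are subject i's responses, X_i and Z_i the fixed- and random-effects
% design matrices, w_i the baseline covariates driving the gate, and
% K the number of experts (latent subgroups).
p(y_i \mid X_i, Z_i, w_i)
  = \sum_{k=1}^{K} \pi_k(w_i)\,
    \mathcal{N}\!\left(y_i \,\middle|\, X_i \beta_k,\;
      Z_i D_k Z_i^{\top} + \sigma_k^2 I\right),
\qquad
\pi_k(w_i) = \frac{\exp\!\left(\gamma_k^{\top} w_i\right)}
                  {\sum_{l=1}^{K} \exp\!\left(\gamma_l^{\top} w_i\right)}
```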
📝 Abstract
The linear mixed-effects model (LMM) is a cornerstone of longitudinal data analysis, but it struggles to capture heterogeneity that arises simultaneously from group-specific fixed effects and subject-specific random effects. To address this challenge, we propose a novel statistical framework inspired by a large-model architecture: the mixed-effects mixture of experts model (MEMoE). This framework integrates the divide-and-conquer paradigm of mixture-of-experts models with classical mixed-effects modeling. In the proposed MEMoE, each expert is a full LMM dedicated to capturing the longitudinal trajectory of a specific latent subpopulation, while a gating function learns to route subjects to the most appropriate expert in a data-driven manner based on baseline covariates. We develop a robust inferential procedure for parameter estimation based on a Laplace-approximated Expectation-Maximization (EM) algorithm, with standard errors calibrated using robust sandwich estimators to account for potential model misspecification. Extensive simulation studies and an empirical application demonstrate that MEMoE outperforms both the traditional single-population LMM and conventional mixture-of-experts models in terms of parameter recovery, classification accuracy, and overall model fit.
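To make the E-step of such an algorithm concrete, here is a minimal, self-contained Python sketch of how subject-level responsibilities might be computed for a MEMoE with random-intercept experts. Everything here is illustrative: the function names, the parameter layout, and the restriction to a random intercept (whose Gaussian marginal is available in closed form, so no Laplace approximation is needed in this toy case) are assumptions, not the paper's implementation.

```python
# Minimal MEMoE E-step sketch (illustrative only, not the authors' code).
# Each expert is a Gaussian random-intercept LMM; the gate is a softmax
# on baseline covariates w_i.
import numpy as np
from scipy.stats import multivariate_normal

def marginal_loglik(y_i, X_i, beta_k, tau2_k, sigma2_k):
    """Marginal log-likelihood of subject i under expert k.
    Random intercept integrated out: Cov = tau2 * J + sigma2 * I."""
    n_i = len(y_i)
    cov = tau2_k * np.ones((n_i, n_i)) + sigma2_k * np.eye(n_i)
    return multivariate_normal.logpdf(y_i, mean=X_i @ beta_k, cov=cov)

def gate_probs(w_i, gamma):
    """Softmax gating probabilities from baseline covariates w_i.
    gamma has shape (K, d); one row may be fixed at zero for identifiability."""
    logits = gamma @ w_i
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def responsibilities(y_i, X_i, w_i, params):
    """E-step: posterior probability that subject i belongs to each expert."""
    log_post = np.array([
        np.log(pi_k) + marginal_loglik(y_i, X_i, beta_k, tau2_k, sigma2_k)
        for pi_k, (beta_k, tau2_k, sigma2_k)
        in zip(gate_probs(w_i, params["gamma"]), params["experts"])
    ])
    log_post -= log_post.max()          # normalize in log space
    r = np.exp(log_post)
    return r / r.sum()

# Toy usage: one subject with 4 repeated measures, K = 2 experts.
rng = np.random.default_rng(0)
X_i = np.column_stack([np.ones(4), np.arange(4)])   # intercept + time
y_i = X_i @ np.array([1.0, 0.5]) + rng.normal(0, 0.3, 4)
w_i = np.array([1.0, 0.2])                          # baseline covariates
params = {"gamma": np.zeros((2, 2)),                # uniform gate
          "experts": [(np.array([1.0, 0.5]), 0.5, 0.1),
                      (np.array([-1.0, 0.0]), 0.5, 0.1)]}
print(responsibilities(y_i, X_i, w_i, params))      # near [1, 0]: expert 0 fits
```

In a full Laplace-EM, the M-step would then re-fit each expert's LMM using these responsibilities as observation weights and update the gate parameters by weighted multinomial regression, iterating to convergence.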