🤖 AI Summary
To address suboptimal personalized interventions in mobile health (mHealth) arising from participant heterogeneity, environmental non-stationarity, and nonlinear reward structures, this paper proposes RoME, a robust mixed-effects contextual bandit framework. Methodologically, it combines three mechanisms: modeling the differential reward with user- and time-specific random effects, network-cohesion penalties that share information across similar users, and debiased machine learning (DML) for flexible estimation of the baseline reward. Theoretically, it establishes a high-probability regret bound that depends solely on the dimension of the differential-reward model, yielding robust guarantees even when the baseline reward is highly complex. Empirically, RoME outperforms state-of-the-art methods in a simulation study and two off-policy evaluation studies, supporting its effectiveness and stability in real-world mHealth applications.
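Schematically (using assumed notation rather than the paper's exact symbols), the reward model the summary describes decomposes as

$$
R_{u,t} \;=\; b(x_{u,t}) \;+\; A_{u,t}\, x_{u,t}^{\top}\bigl(\theta + \gamma_u + \delta_t\bigr) \;+\; \varepsilon_{u,t},
$$

where $b(\cdot)$ is an arbitrarily complex baseline reward handled by DML, $\theta$ is a shared fixed effect, $\gamma_u$ and $\delta_t$ are user- and time-specific random effects, and $A_{u,t}$ is the action. The stated regret bound scales with the dimension of the $(\theta, \gamma_u, \delta_t)$ component only, not with the complexity of $b$.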
📝 Abstract
Mobile health leverages personalized and contextually tailored interventions optimized through bandit and reinforcement learning algorithms. In practice, however, challenges such as participant heterogeneity, nonstationarity, and nonlinear relationships hinder algorithm performance. We propose RoME, a Robust Mixed-Effects contextual bandit algorithm that simultaneously addresses these challenges via (1) modeling the differential reward with user- and time-specific random effects, (2) imposing network cohesion penalties, and (3) employing debiased machine learning for flexible estimation of baseline rewards. We establish a high-probability regret bound that depends solely on the dimension of the differential-reward model, enabling us to achieve robust regret bounds even when the baseline reward is highly complex. We demonstrate the superior performance of the RoME algorithm in a simulation and two off-policy evaluation studies.
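The sketch below illustrates two of the three ingredients on synthetic data: residualizing a nonlinear baseline reward before fitting the differential-reward model (the DML idea, here crudely stood in for by a nearest-neighbor regressor), and estimating per-user coefficients with a graph-Laplacian "network cohesion" ridge penalty. All names, shapes, and hyperparameters (`knn_predict`, `lam`, the chain-graph network) are illustrative assumptions, not the paper's implementation; time-specific random effects and the bandit's exploration step are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic setup (all values illustrative) ---
n_users, n_rounds, d = 5, 200, 3

# User network: a chain graph; its Laplacian encodes the cohesion penalty.
adj = np.diag(np.ones(n_users - 1), 1) + np.diag(np.ones(n_users - 1), -1)
lap = np.diag(adj.sum(axis=1)) - adj

# True parameters: shared effect plus user effects that vary smoothly
# along the chain (a random walk), so cohesion regularization helps.
theta = rng.normal(size=d)
user_eff = np.cumsum(rng.normal(scale=0.2, size=(n_users, d)), axis=0)

def baseline(x):
    # Complex nonlinear baseline reward; never modeled linearly.
    return np.sin(3 * x[0]) + x[1] ** 2

# --- Logged data under random binary actions (as in an offline study) ---
X, U, act, R = [], [], [], []
for t in range(n_rounds):
    for u in range(n_users):
        x = rng.normal(size=d)
        a = int(rng.integers(2))
        r = baseline(x) + a * x @ (theta + user_eff[u]) + rng.normal(scale=0.1)
        X.append(x); U.append(u); act.append(a); R.append(r)
X, U, act, R = map(np.asarray, (X, U, act, R))

# --- Step 1 (DML idea): residualize the baseline nonparametrically. ---
# A k-nearest-neighbor regressor stands in for any flexible ML model;
# full DML would additionally cross-fit to avoid overfitting bias.
def knn_predict(Xq, Xtr, ytr, k=10):
    d2 = ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

ctrl = act == 0  # under a=0 the observed reward is baseline + noise
R_resid = R - knn_predict(X, X[ctrl], R[ctrl])

# --- Step 2: per-user differential-reward coefficients B_u, solving
#   min_B  sum_i (r_i - a_i * x_i^T B_{u_i})^2 + lam * sum_{u~v} ||B_u - B_v||^2,
# where the second term is the Laplacian (network-cohesion) penalty. ---
lam = 5.0
P = X * act[:, None]                       # features active only when a = 1
S = lam * np.kron(lap, np.eye(d))          # penalty on stacked [B_1; ...; B_n]
G = np.zeros((n_users * d, n_users * d))
g = np.zeros(n_users * d)
for u in range(n_users):
    m = U == u
    sl = slice(u * d, (u + 1) * d)
    G[sl, sl] += P[m].T @ P[m]
    g[sl] += P[m].T @ R_resid[m]
B = np.linalg.solve(G + S + 1e-6 * np.eye(n_users * d), g).reshape(n_users, d)

print("user 0 estimate:", B[0])
print("user 0 truth:   ", theta + user_eff[0])
```

Because control observations contribute zero rows to the penalized least-squares system, the fit is driven by treated samples whose residuals isolate the differential reward; the Laplacian term then pools strength between networked users, which is what "network cohesion" amounts to in this sketch.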