AI Summary
This work addresses the contextual bandit problem driven by a hidden Markov model (HMM), where existing approaches rely on restrictive simplifying assumptions and lack high-probability regret guarantees. The authors propose a more natural linear contextual bandit framework in which rewards depend jointly on the latent state and the observed context, and they introduce a fully adaptive policy that estimates the HMM parameters online, without imposing linear approximations on the posterior state probabilities. They establish, for the first time, a high-probability regret bound under genuine dependence on the true hidden states; the bound is independent of the reward function and depends only on the error in estimating the HMM parameters, thereby avoiding additional modeling assumptions such as reward gaps.
Abstract
We revisit the finite-armed linear bandit model of Nelson et al. (2022), where contexts and rewards are governed by a finite hidden Markov chain. Nelson et al. (2022) approach this model via a reduction to linear contextual bandits; to do so, however, they introduce a simplification in which rewards are linear functions of the posterior probabilities over the hidden states given the observed contexts, rather than functions of the hidden states themselves. Their analysis (but not their algorithm) also does not take into account the estimation of the HMM parameters, and it only provides expected, rather than high-probability, bounds, which moreover suffer from unnecessarily complex dependencies on the model (such as reward gaps). We instead study the more natural model with direct dependencies on the hidden states (on top of dependencies on the observed contexts, as is natural for contextual bandits) and obtain stronger, high-probability regret bounds for a fully adaptive strategy that estimates the HMM parameters online. These bounds do not depend on the reward functions and depend on the model only through the estimation of the HMM parameters.
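To make the contrast between the two reward models concrete, here is one schematic way to write them; the notation below ($s_t$, $x_t$, $a_t$, $b_t$, $\theta$) is illustrative and not taken from the paper.

```latex
% Illustrative notation only; the paper's exact symbols may differ.
% Simplification of Nelson et al. (2022): rewards are linear in the posterior
% b_t over the hidden states given the observed contexts,
\[
  r_t \;=\; \langle \theta_{a_t},\, b_t \rangle + \varepsilon_t,
  \qquad
  b_t(\cdot) \;=\; \mathbb{P}\bigl(s_t = \cdot \,\big|\, x_1, \dots, x_t\bigr).
\]
% Model studied here: rewards depend directly on the realized hidden state s_t
% (on top of the observed context x_t), e.g. in a linear form such as
\[
  r_t \;=\; \langle \theta_{s_t, a_t},\, x_t \rangle + \varepsilon_t .
\]
```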
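The fully adaptive strategy itself is not spelled out in this abstract. As a toy illustration of the model dynamics only, the sketch below simulates such a bandit and runs a naive plug-in policy that filters a posterior over the hidden state; unlike the paper's strategy, it assumes the HMM parameters are known rather than estimated online. All names and modeling choices (Gaussian noise, the linear form theta[s, a] . x) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

S, K, d, T = 2, 3, 4, 2000          # hidden states, arms, context dim, horizon
sigma = 0.1                         # reward noise level (assumed)
P = np.array([[0.9, 0.1],           # hidden-state transition matrix
              [0.2, 0.8]])
theta = rng.normal(size=(S, K, d))  # per-state, per-arm reward vectors

def run():
    s = rng.integers(S)              # latent state, never observed
    belief = np.full(S, 1.0 / S)     # filtered posterior over the state
    regret = 0.0
    for _ in range(T):
        x = rng.normal(size=d)       # observed context
        est = belief @ (theta @ x)   # belief-averaged mean reward per arm
        a = int(np.argmax(est))      # greedy plug-in choice
        means = theta[s] @ x         # true per-arm means in the current state
        r = means[a] + sigma * rng.normal()
        regret += means.max() - means[a]
        # HMM filter step, using the reward as the informative observation:
        # Bayes update with a Gaussian log-likelihood, then chain propagation.
        logs = -(r - theta[:, a] @ x) ** 2 / (2 * sigma**2)
        belief *= np.exp(logs - logs.max())
        belief /= belief.sum()
        belief = P.T @ belief
        s = rng.choice(S, p=P[s])    # latent chain moves
    return regret

print(f"cumulative regret of the plug-in policy: {run():.2f}")
```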