🤖 AI Summary
This study addresses the problem of online model selection within the Bayesian multi-armed bandit framework, aiming to adaptively explore multiple base learners and compete, in hindsight, with the best one. To this end, the work proposes a novel Bayesian algorithm that dynamically balances exploration and exploitation to efficiently track the optimal base learner in stochastic environments. It provides the first oracle-style Bayesian regret guarantee for Bayesian online model selection and systematically demonstrates that data sharing mechanisms can effectively mitigate the adverse effects of prior misspecification. Theoretical analysis establishes a Bayesian regret bound of $O(d^* M\sqrt{T} + \sqrt{MT})$, where $d^*$ denotes the regret coefficient of the optimal base learner, $M$ the number of base learners, and $T$ the time horizon. Empirical evaluations confirm that the algorithm matches the performance of the best base learner across diverse experimental settings.
📝 Abstract
Online model selection in Bayesian bandits raises a fundamental exploration challenge: when an environment instance is sampled from a prior distribution, how can we design an adaptive strategy that explores multiple bandit learners and competes with the best one in hindsight? We address this problem by introducing a new Bayesian algorithm for online model selection in stochastic bandits. We prove an oracle-style guarantee of $O\left( d^* M \sqrt{T} + \sqrt{MT} \right)$ on the Bayesian regret, where $M$ is the number of base learners, $d^*$ is the regret coefficient of the optimal base learner, and $T$ is the time horizon. We also validate our method empirically across a range of stochastic bandit settings, demonstrating performance that is competitive with the best base learner. Additionally, we study the effect of sharing data among base learners and its role in mitigating prior misspecification.
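To make the setting concrete, here is a minimal illustrative sketch of online model selection over bandit base learners. It is not the paper's algorithm: the master policy, the Beta-Bernoulli Thompson-sampling base learners, and the `share_data` flag (which mimics the data-sharing idea from the abstract) are all hypothetical choices made for this example.

```python
import numpy as np

class ThompsonBernoulli:
    """Beta-Bernoulli Thompson sampling base learner over n_arms arms."""
    def __init__(self, n_arms, prior_a=1.0, prior_b=1.0, rng=None):
        self.a = np.full(n_arms, prior_a)  # Beta posterior successes
        self.b = np.full(n_arms, prior_b)  # Beta posterior failures
        self.rng = rng or np.random.default_rng()

    def select(self):
        # Sample a mean for each arm from its posterior; play the argmax.
        return int(np.argmax(self.rng.beta(self.a, self.b)))

    def update(self, arm, reward):
        self.a[arm] += reward
        self.b[arm] += 1.0 - reward

def run_model_selection(arm_means, learners, T, share_data=False, seed=0):
    """Master keeps a Beta posterior over each learner's per-round reward
    and Thompson-samples which base learner acts each round.

    With share_data=True, every learner's posterior is updated with the
    observed (arm, reward) pair, not only the learner that acted.
    """
    rng = np.random.default_rng(seed)
    M = len(learners)
    a, b = np.ones(M), np.ones(M)  # master's posterior over learners
    rewards = []
    for _ in range(T):
        m = int(np.argmax(rng.beta(a, b)))        # pick a base learner
        arm = learners[m].select()                # learner picks an arm
        r = float(rng.random() < arm_means[arm])  # Bernoulli reward
        a[m] += r
        b[m] += 1.0 - r
        for learner in (learners if share_data else [learners[m]]):
            learner.update(arm, r)
        rewards.append(r)
    return rewards
```

In this toy setup, sharing data lets a base learner with a badly misspecified prior still see the rewards collected by its peers, which is the intuition behind the mitigation result described above.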