🤖 AI Summary
This paper studies the online multi-expert selection problem under a budget constraint: in each round, one predictor is selected from $K$ adaptive experts, but at most $M$ ($M \le K$) experts may be updated. For this setting, we establish, for the first time, a regret bound valid at any time horizon. We propose M-LCB, a meta-algorithm that constructs confidence intervals based on observed losses—naturally capturing expert convergence without auxiliary optimization. Its UCB-inspired interval construction accommodates both parametric online learning and bandit-style experts. If each expert incurs internal regret $\tilde{O}(T^\alpha)$, the overall regret is bounded by $\tilde{O}\big(\sqrt{KT/M} + (K/M)^{1-\alpha}T^\alpha\big)$, improving upon prior results. The framework applies to online model selection and financial strategy switching.
📝 Abstract
In many modern applications, a system must dynamically choose between several adaptive learning algorithms that are trained online. Examples include model selection in streaming environments, switching between trading strategies in finance, and orchestrating multiple contextual bandit or reinforcement learning agents. At each round, a learner must select one predictor among $K$ adaptive experts to make a prediction, while being able to update at most $M \le K$ of them under a fixed training budget.
We address this problem in the *stochastic setting* and introduce M-LCB, a computationally efficient UCB-style meta-algorithm that provides *anytime regret guarantees*. Its confidence intervals are built directly from realized losses, require no additional optimization, and seamlessly reflect the convergence properties of the underlying experts. If each expert achieves internal regret $\tilde{O}(T^\alpha)$, then M-LCB ensures overall regret bounded by $\tilde{O}\bigl(\sqrt{KT/M} + (K/M)^{1-\alpha}\,T^\alpha\bigr)$.
To our knowledge, this is the first result establishing regret guarantees when multiple adaptive experts are trained simultaneously under per-round budget constraints. We illustrate the framework with two representative cases: (i) parametric models trained online with stochastic losses, and (ii) experts that are themselves multi-armed bandit algorithms. These examples highlight how M-LCB extends the classical bandit paradigm to the more realistic scenario of coordinating stateful, self-learning experts under limited resources.
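To make the selection mechanism concrete, here is a minimal, illustrative sketch of an LCB-style meta-algorithm under a per-round update budget. Everything in it is an assumption for illustration, not the paper's exact method: the function name `m_lcb_sketch`, the confidence-width constant `c`, and the choice to spend the budget on the $M$ experts with the smallest (most optimistic) loss bounds. It only shows the general pattern the abstract describes: pick the expert with the lowest lower confidence bound on its loss, and update at most $M$ experts per round.

```python
import math

def m_lcb_sketch(loss_fn, K, M, T, c=1.0):
    """Illustrative LCB-style budgeted selection loop (not the paper's exact rule).

    loss_fn(t, k): realized loss of expert k at round t, observed only for
    the experts we choose to spend update budget on this round.
    """
    counts = [0] * K        # how many losses observed per expert
    mean_loss = [0.0] * K   # running mean of observed losses
    total_loss = 0.0

    for t in range(1, T + 1):
        def lcb(k):
            # Unvisited experts get -inf to force initial exploration
            if counts[k] == 0:
                return float("-inf")
            return mean_loss[k] - c * math.sqrt(math.log(t + 1) / counts[k])

        # Predict with the most optimistic expert (smallest loss LCB)
        chosen = min(range(K), key=lcb)
        total_loss += loss_fn(t, chosen)

        # Spend the per-round budget on the M most optimistic experts
        # (a design choice for this sketch, not necessarily the paper's rule)
        for k in sorted(range(K), key=lcb)[:M]:
            lk = loss_fn(t, k)
            counts[k] += 1
            mean_loss[k] += (lk - mean_loss[k]) / counts[k]

    return total_loss
```

With a synthetic loss where one expert is clearly best (e.g. expert 0 always incurs loss 0.1 and the rest 0.9), the loop quickly concentrates predictions and budget on the good expert, so the cumulative loss stays close to the best expert's total.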