🤖 AI Summary
This study addresses the multi-armed bandit problem in which an arm's expected reward increases the more it is played—termed Rising MAB—under a known time budget. Recognizing that the optimal policy is highly sensitive to the available budget, the authors propose the CURE-UCB algorithm, which explicitly incorporates horizon information into the construction of its upper confidence bounds to guide action selection. Theoretical analysis yields a tighter regret upper bound, demonstrating that CURE-UCB strictly outperforms existing strategies that ignore the time budget, particularly in structured environments such as "linear-then-flat" instances, where rewards rise linearly before plateauing. Empirical evaluations further confirm its superior performance in practical applications including hyperparameter tuning and robotic control tasks.
📝 Abstract
The Rising Multi-Armed Bandit (RMAB) framework models environments where an arm's expected reward increases with the number of times it is played, capturing practical scenarios in which each option's performance improves with repeated use, such as robotics and hyperparameter tuning. For instance, in hyperparameter tuning, the validation accuracy of a model configuration (arm) typically increases with each training epoch. A defining characteristic of RMAB is *horizon-dependent optimality*: unlike in standard settings, the optimal strategy here shifts dramatically depending on the available budget $T$. Knowledge of $T$ therefore yields significantly greater utility in RMAB, empowering the learner to align its decision-making with this shifting optimality. However, the horizon-aware setting remains underexplored. To address this, we propose a novel CUmulative Reward Estimation UCB (CURE-UCB) algorithm that explicitly integrates the horizon. We provide a rigorous analysis establishing a new regret upper bound and prove that our method strictly outperforms horizon-agnostic strategies in structured environments such as "linear-then-flat" instances. Extensive experiments demonstrate its significant superiority over baselines.
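The abstract does not spell out the CURE-UCB index itself, so the sketch below is only a rough, hypothetical illustration of what horizon-aware selection in a rising bandit can look like: each arm is scored by an optimistic projection of its cumulative reward over the remaining budget, plus a standard UCB-style exploration bonus. The reward model, function names, and constants here are all assumptions for illustration, not the paper's algorithm.

```python
import math
import random

def rising_reward(base, growth, plateau, n):
    """Assumed 'linear-then-flat' mean reward of an arm after n pulls."""
    return min(base + growth * n, plateau)

def horizon_aware_ucb(T, arms, seed=0):
    """Illustrative horizon-aware index policy (not the paper's CURE-UCB).

    arms: list of (base, growth, plateau) tuples defining each arm's
    hidden linear-then-flat reward curve. Returns (total reward, pull counts).
    """
    rng = random.Random(seed)
    K = len(arms)
    pulls = [0] * K
    last = [0.0] * K    # most recent observed reward per arm
    slope = [0.0] * K   # crude running estimate of per-pull improvement
    total = 0.0
    for t in range(T):
        remaining = T - t
        scores = []
        for i in range(K):
            if pulls[i] == 0:
                scores.append(float("inf"))  # force one pull of each arm
                continue
            # Optimistic per-step projection of cumulative reward over the
            # remaining budget, assuming the observed trend continues.
            proj = sum(last[i] + slope[i] * s for s in range(remaining)) / remaining
            bonus = math.sqrt(2.0 * math.log(t + 1) / pulls[i])
            scores.append(proj + bonus)
        i = max(range(K), key=scores.__getitem__)
        base, growth, plateau = arms[i]
        mean = rising_reward(base, growth, plateau, pulls[i])
        r = min(max(mean + rng.gauss(0.0, 0.05), 0.0), 1.5)  # noisy, clipped
        if pulls[i] > 0:
            slope[i] = 0.5 * slope[i] + 0.5 * (r - last[i])
        last[i] = r
        pulls[i] += 1
        total += r
    return total, pulls
```

Because the index projects reward over the *remaining* horizon, a slowly rising arm with a high plateau is favored when $T$ is large but can be passed over when the budget is nearly exhausted, which is exactly the horizon-dependent behavior the abstract highlights.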