Rising Multi-Armed Bandits with Known Horizons

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the multi-armed bandit problem with rewards that increase with usage frequency—termed Rising MAB—under a known time budget. Recognizing that optimal policies are highly sensitive to the available budget, the authors propose the CURE-UCB algorithm, which explicitly incorporates horizon information into the construction of upper confidence bounds to guide action selection. Theoretical analysis yields a tighter regret upper bound, demonstrating that CURE-UCB strictly outperforms existing strategies that ignore time budget constraints, particularly in structured environments such as those with linear post-plateau reward dynamics. Empirical evaluations further confirm its superior performance in practical applications including hyperparameter tuning and robotic control tasks.

📝 Abstract
The Rising Multi-Armed Bandit (RMAB) framework models environments where the expected reward of each arm increases with the number of times it is played, capturing practical scenarios where an option's performance improves with repeated use, such as robotics and hyperparameter tuning. For instance, in hyperparameter tuning, the validation accuracy of a model configuration (arm) typically increases with each training epoch. A defining characteristic of RMAB is horizon-dependent optimality: unlike standard settings, the optimal strategy here shifts dramatically depending on the available budget $T$. This implies that knowledge of $T$ yields significantly greater utility in RMAB, empowering the learner to align its decision-making with this shifting optimality. However, the horizon-aware setting remains underexplored. To address this, we propose a novel CUmulative Reward Estimation UCB (CURE-UCB) that explicitly integrates the horizon. We provide a rigorous analysis establishing a new regret upper bound and prove that our method strictly outperforms horizon-agnostic strategies in structured environments like "linear-then-flat" instances. Extensive experiments demonstrate its significant superiority over baselines.
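To make the setting concrete, the sketch below simulates a "linear-then-flat" rising bandit and a simple horizon-aware index policy. This is an illustration of the general idea the abstract describes (projecting an arm's cumulative reward over the remaining budget and adding a UCB-style bonus), not the paper's actual CURE-UCB index, whose exact form is not given here; all parameter values and the trend-smoothing rule are assumptions for the example.

```python
import math
import random

class RisingArm:
    """Arm whose expected reward rises linearly with each pull, then
    plateaus: a 'linear-then-flat' instance as named in the abstract."""
    def __init__(self, slope, plateau):
        self.slope = slope
        self.plateau = plateau
        self.pulls = 0

    def pull(self, rng):
        # Expected reward grows with the pull count, capped at the plateau.
        mean = min(self.slope * (self.pulls + 1), self.plateau)
        self.pulls += 1
        return mean + rng.gauss(0, 0.01)  # small observation noise

def horizon_aware_index_policy(arms, T, seed=0):
    """Illustrative horizon-aware policy (NOT the paper's CURE-UCB):
    each arm's index optimistically projects the reward it could
    accumulate over the remaining budget, plus an exploration bonus."""
    rng = random.Random(seed)
    n = len(arms)
    counts = [0] * n
    last = [0.0] * n    # most recent observed reward per arm
    trend = [0.0] * n   # crude per-pull improvement estimate
    total = 0.0
    for t in range(T):
        if t < n:
            i = t  # initialization: pull each arm once
        else:
            remaining = T - t

            def index(j):
                # Optimistic average-reward projection over the remaining
                # horizon, assuming the current trend continues, plus a
                # UCB-style confidence bonus.
                projected = last[j] + trend[j] * remaining / 2
                bonus = math.sqrt(2 * math.log(T) / counts[j])
                return projected + bonus

            i = max(range(n), key=index)
        r = arms[i].pull(rng)
        if counts[i] > 0:
            trend[i] = 0.9 * trend[i] + 0.1 * (r - last[i])
        last[i] = r
        counts[i] += 1
        total += r
    return total, counts

# One fast-rising arm with a high plateau vs. one that rises faster
# early but plateaus low; with a known budget T the projection favors
# the arm whose plateau pays off within the horizon.
arms = [RisingArm(slope=0.02, plateau=0.9),
        RisingArm(slope=0.05, plateau=0.3)]
total, counts = horizon_aware_index_policy(arms, T=200)
```

The key design point, per the abstract, is that the index depends on the remaining budget `T - t`: a horizon-agnostic UCB would rank arms by current estimated reward alone and could over-commit to the low-plateau arm.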
Problem

Research questions and friction points this paper is trying to address.

Rising Multi-Armed Bandits
horizon-dependent optimality
known horizons
cumulative reward
multi-armed bandits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rising Multi-Armed Bandits
horizon-aware
CURE-UCB
cumulative reward estimation
regret bound
Seockbean Song
Graduate School of AI, POSTECH, Pohang, Republic of Korea
Chenyu Gan
Qiuzhen College, Tsinghua University, Beijing, China
Youngsik Yoon
Department of CSE, POSTECH, Pohang, Republic of Korea
Siwei Wang
National University of Defense Technology
Large-graph study, multi-view fusion, multi-view clustering
Wei Chen
Microsoft
Video compression, video quality assessment, 3D, stereoscopic
Jungseul Ok
Associate Professor, CSE/AI, POSTECH
Reinforcement Learning, Machine Learning