🤖 AI Summary
This paper addresses the fixed-confidence best-arm identification (BAI) problem in nonstationary stochastic multi-armed bandits where reward variances decay over time. Unlike conventional BAI methods, which ignore variance nonstationarity and optimize only for sample complexity, we propose a weighted cost function that jointly penalizes sampling effort and stopping time. Modeling the time-varying reward variance reveals a trade-off between delaying samples, which yields lower-variance observations, and stopping early. To achieve confidence-guaranteed identification without sampling in every round, we design two adaptive strategies: (i) an initial waiting period followed by continuous sampling and (ii) periodic sampling with weighted averaging of observed rewards. We provide analytical performance guarantees for both policies. Monte Carlo simulations demonstrate that our methods significantly reduce the total cost, a weighted sum of the identification time and the number of samples, while satisfying the prescribed confidence level and outperforming classical BAI baselines.
📝 Abstract
We focus on the problem of best-arm identification in a stochastic multi-armed bandit with temporally decreasing variances for the arms' rewards. We model arm rewards as Gaussian random variables with fixed means and variances that decrease over time. The cost incurred by the learner is a weighted sum of the time needed to identify the best arm and the number of arm samples collected before termination. Under this cost function, the learner has an incentive not to sample arms in every round, especially in the initial rounds, when variances are high. On the other hand, not sampling delays termination, which also increases cost. This trade-off necessitates new sampling strategies. We propose two policies. The first has an initial wait period with no sampling, followed by continuous sampling. The second samples periodically and uses a weighted average of the observed rewards to identify the best arm. We provide analytical guarantees on the performance of both policies and supplement our theoretical results with simulations showing that our policies outperform state-of-the-art policies for the classical best-arm identification problem.
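The cost trade-off described above can be illustrated with a toy simulation (this is a sketch, not the paper's algorithm: the arm means, the variance schedule σ²(t) = 1/t, the crude separation-based stopping rule, and the cost weights below are all illustrative assumptions):

```python
import math
import random

random.seed(0)

MEANS = [1.0, 0.5, 0.2]        # assumed arm means (arm 0 is best)
W_TIME, W_SAMPLES = 0.1, 1.0   # assumed weights in the cost function


def variance(t):
    """Assumed decaying variance schedule: observations get cleaner over time."""
    return 1.0 / t


def run_policy(wait):
    """Wait `wait` rounds without sampling, then sample every arm each round
    until an illustrative separation-based stopping rule fires.
    Returns (identified arm, total cost = W_TIME * time + W_SAMPLES * samples)."""
    t, n_samples = 0, 0
    sums = [0.0] * len(MEANS)
    counts = [0] * len(MEANS)
    while True:
        t += 1
        if t <= wait:
            continue  # waiting: time passes but no samples are drawn
        for i, mu in enumerate(MEANS):
            sums[i] += random.gauss(mu, math.sqrt(variance(t)))
            counts[i] += 1
            n_samples += 1
        means = [s / c for s, c in zip(sums, counts)]
        best = max(range(len(means)), key=means.__getitem__)
        gap = means[best] - max(m for i, m in enumerate(means) if i != best)
        # Stop once the empirical gap dominates a crude deviation bound.
        if counts[best] >= 2 and gap > 4 * math.sqrt(variance(t) / counts[best]):
            return best, W_TIME * t + W_SAMPLES * n_samples


for wait in (0, 20):
    arm, cost = run_policy(wait)
    print(f"wait={wait:>2}: identified arm {arm}, cost {cost:.1f}")
```

Because the variance decays, a policy that waits pays extra time cost but needs fewer (cleaner) samples once it starts; which policy wins depends on the weights, which is exactly the tension the paper's cost function captures.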