🤖 AI Summary
This paper studies the high-confidence ($1-\delta$) best-arm identification problem in the batched setting, aiming to jointly minimize sample complexity (total number of arm pulls) and batch complexity (total number of batches). We propose Tri-BBAI, the first batched algorithm achieving asymptotically optimal sample complexity with an expected batch count of only three. Building on it, we design Opt-BBAI, which, for finite $\delta$, simultaneously approaches near-optimal sample and batch complexities, with both guaranteed to be bounded unconditionally. Our approach integrates adaptive batch scheduling, tight confidence-interval estimation, and a novel elimination-checking mechanism, removing the reliance on the "successful return" event. Both theoretical analysis and empirical evaluation demonstrate that our algorithms outperform existing methods, achieving a favorable trade-off between accuracy and efficiency.
📝 Abstract
We study the batched best arm identification (BBAI) problem, where the learner's goal is to identify the best arm while switching the policy as rarely as possible. In particular, we aim to find the best arm with probability $1-\delta$ for some small constant $\delta>0$ while minimizing both the sample complexity (total number of arm pulls) and the batch complexity (total number of batches). We propose the three-batch best arm identification (Tri-BBAI) algorithm, which is the first batched algorithm that achieves the optimal sample complexity in the asymptotic setting (i.e., $\delta \rightarrow 0$) and runs in $3$ batches in expectation. Based on Tri-BBAI, we further propose the almost optimal batched best arm identification (Opt-BBAI) algorithm, which is the first algorithm that achieves near-optimal sample and batch complexities in the non-asymptotic setting (i.e., $\delta$ is finite), while enjoying the same batch and sample complexity as Tri-BBAI when $\delta$ tends to zero. Moreover, in the non-asymptotic setting, the complexity of previous batched algorithms is usually conditioned on the event that the best arm is returned (which holds with probability at least $1-\delta$), and is potentially unbounded when a sub-optimal arm is returned. In contrast, the complexity of Opt-BBAI does not rely on such an event. This is achieved through a novel procedure that we design for checking whether the best arm has been eliminated, which is of independent interest.
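To make the batched setting concrete, the sketch below shows a generic batched successive-elimination procedure with Hoeffding confidence bounds: within each batch, all surviving arms are pulled non-adaptively, and elimination decisions happen only at batch boundaries. This is an illustrative baseline under our own assumptions (bounded rewards in $[0,1]$, a crude union bound, a hypothetical `pull` interface), not the Tri-BBAI or Opt-BBAI algorithm from the paper.

```python
import math
import random


def batched_elimination(pull, n_arms, delta, batch_size=200, max_batches=50):
    """Illustrative batched best-arm identification via successive elimination.

    pull(i) must return a reward in [0, 1] for arm i. Within a batch, every
    surviving arm is pulled batch_size times with no adaptivity; arms whose
    upper confidence bound falls below the highest lower confidence bound
    are dropped only between batches.
    """
    active = list(range(n_arms))
    counts = [0] * n_arms
    sums = [0.0] * n_arms

    for _ in range(max_batches):
        # One batch: a fixed, non-adaptive schedule over surviving arms.
        for i in active:
            for _ in range(batch_size):
                sums[i] += pull(i)
                counts[i] += 1

        # Hoeffding radius with a crude union bound over arms and batches.
        def radius(i):
            return math.sqrt(
                math.log(2 * n_arms * max_batches / delta) / (2 * counts[i])
            )

        means = {i: sums[i] / counts[i] for i in active}
        best_lcb = max(means[i] - radius(i) for i in active)
        active = [i for i in active if means[i] + radius(i) >= best_lcb]
        if len(active) == 1:
            return active[0]

    # Budget exhausted: fall back to the empirically best surviving arm.
    return max(active, key=lambda i: sums[i] / counts[i])
```

Note that this baseline may use up to `max_batches` rounds of adaptivity, whereas the paper's contribution is precisely to bound the number of such rounds by a small constant (three in expectation for Tri-BBAI) without sacrificing sample complexity.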