🤖 AI Summary
This paper studies the 1-identification problem in pure-exploration multi-armed bandits: determining, with confidence at least $1-\delta$, whether there exists an arm whose mean reward is at least a given threshold $\mu_0$, or outputting "None" otherwise. Addressing the open problem of non-asymptotic sample complexity analysis left by Degenne & Koolen (2019), we propose the Sequential-Exploration-Exploitation (SEE) algorithm, which integrates adaptive sampling, tight confidence interval construction, and a dynamic stopping rule. We establish matching upper and lower bounds on its sample complexity—tight up to polynomial logarithmic factors—achieving near-optimality in the non-asymptotic regime. Numerical experiments demonstrate that SEE significantly reduces sample consumption compared to existing benchmark methods.
📝 Abstract
Motivated by an open direction in the existing literature, we study the 1-identification problem, a fundamental pure-exploration formulation of the multi-armed bandit. The goal is to determine whether there exists an arm whose mean reward is at least a known threshold $\mu_0$, or to output None if no such arm is believed to exist. The agent must guarantee that its output is correct with probability at least $1-\delta$. Degenne & Koolen (2019) established the asymptotically tight sample complexity for the 1-identification problem, but noted that the non-asymptotic analysis remains unclear. We design a new algorithm, Sequential-Exploration-Exploitation (SEE), and analyze it from the non-asymptotic perspective. Novel to the literature, we achieve near optimality, in the sense of matching upper and lower bounds on the pulling complexity, with a gap of at most a polynomial logarithmic factor. Numerical results also indicate the effectiveness of our algorithm compared to existing benchmarks.
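To make the problem statement concrete, here is a minimal, generic sketch of 1-identification using round-robin sampling and Hoeffding confidence intervals. This is *not* the paper's SEE algorithm (which uses adaptive sampling and tighter intervals); the function name, the round-robin rule, and the crude union-bound confidence level are all illustrative assumptions. The agent stops and returns an arm index once that arm's lower confidence bound clears the threshold $\mu_0$, or returns None once every arm's upper confidence bound falls below it.

```python
import math
import random

def one_identification(arms, mu0, delta, max_pulls=100_000):
    """Generic confidence-interval sketch of 1-identification (illustrative,
    not the paper's SEE algorithm).

    arms: list of callables, each returning a reward in [0, 1] when pulled.
    Returns an arm index whose mean is confidently >= mu0, or None if all
    arms are confidently below mu0 (or the pull budget is exhausted).
    """
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, max_pulls + 1):
        i = (t - 1) % k           # round-robin sampling (SEE samples adaptively)
        sums[i] += arms[i]()
        counts[i] += 1
        if min(counts) == 0:      # wait until every arm has been pulled once
            continue
        # Hoeffding radius with a crude union bound over arms and rounds,
        # so all intervals hold simultaneously with probability >= 1 - delta.
        def radius(n):
            return math.sqrt(math.log(4 * k * t * t / delta) / (2 * n))
        means = [sums[j] / counts[j] for j in range(k)]
        for j in range(k):
            if means[j] - radius(counts[j]) >= mu0:
                return j          # arm j confidently meets the threshold
        if all(means[j] + radius(counts[j]) < mu0 for j in range(k)):
            return None           # no arm meets the threshold
    return None                   # budget exhausted; abstain with None
```

For example, with two Bernoulli arms of means 0.9 and 0.1 and threshold $\mu_0 = 0.5$, the sketch quickly identifies arm 0; with means 0.1 and 0.2 and threshold 0.8, it returns None.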