🤖 AI Summary
Existing fixed-confidence best-arm identification (BAI) methods either require solving an optimization problem at every round or enforce a minimum amount of forced exploration, and those that avoid both are restricted to Gaussian reward distributions.
Method: We propose a novel Bayesian BAI strategy that integrates Thompson sampling with the best challenger rule—marking the first natural incorporation of Thompson sampling into the fixed-confidence BAI framework. It eliminates per-round optimization and minimum-exploration constraints.
Contribution/Results: We establish a $\beta$-optimality analysis framework, proving asymptotic optimality for the two-armed case and providing approximate optimality guarantees for $K \geq 3$. Empirically, our method achieves sample complexity competitive with asymptotically optimal algorithms while significantly reducing computational overhead. The core innovation lies in introducing a new Bayesian BAI paradigm that jointly ensures theoretical rigor—via provable near-optimal sample complexity—and computational efficiency—through closed-form posterior updates and no per-round optimization.
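The closed-form posterior updates mentioned above can be illustrated for the Bernoulli member of the single-parameter exponential family, where the conjugate Beta posterior admits an O(1) update. The class below is an illustrative sketch, not the paper's implementation:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BetaPosterior:
    # Conjugate Beta posterior for a Bernoulli arm (one member of the
    # single-parameter exponential family considered in the paper).
    alpha: float = 1.0  # uniform Beta(1, 1) prior
    beta: float = 1.0

    def update(self, reward: int) -> None:
        # Closed-form O(1) update -- no per-round optimization needed.
        self.alpha += reward
        self.beta += 1 - reward

    def sample(self, rng: np.random.Generator) -> float:
        # Thompson sample: one draw from the current posterior.
        return rng.beta(self.alpha, self.beta)

rng = np.random.default_rng(0)
p = BetaPosterior()
for r in [1, 0, 1, 1]:  # observed rewards for one arm
    p.update(r)
print(p.alpha, p.beta)  # 4.0 2.0
```

For Gaussian rewards the analogous update is equally cheap: with a flat prior and known unit variance, the posterior over an arm's mean after $n$ pulls with reward sum $S$ is $\mathcal{N}(S/n, 1/n)$.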
📝 Abstract
This paper studies the fixed-confidence best arm identification (BAI) problem in the bandit framework under canonical single-parameter exponential family models. Many policies have been proposed for this problem, but, apart from those restricted to the Gaussian model, most of them require solving an optimization problem at every round and/or are forced to explore each arm at least a certain number of times. To address these limitations, we propose a novel policy that combines Thompson sampling with a computationally efficient approach known as the best challenger rule. Although Thompson sampling was originally devised for maximizing the cumulative reward, we demonstrate that it can naturally explore arms in BAI without forced exploration. We show that our policy is asymptotically optimal for any two-armed bandit problem and achieves near optimality for general $K$-armed bandit problems with $K \geq 3$. Nevertheless, in numerical experiments, our policy shows sample complexity competitive with asymptotically optimal policies while requiring less computational cost. In addition, we highlight the advantages of our policy by comparing it to the concept of $\beta$-optimality, a relaxed notion of asymptotic optimality commonly considered in the analysis of a class of policies that includes the proposed one.
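The sampling rule described above can be sketched for the Gaussian case. This is an illustrative approximation under assumed conventions, not the paper's exact algorithm: the leader is the empirical best arm, the challenger is the best non-leader arm under a Thompson sample from the closed-form posterior, and a simple tie-break decides which of the two to pull. The stopping rule (typically a generalized likelihood ratio test in this line of work) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian bandit with unit-variance rewards;
# `true_means` is unknown to the policy and used only to simulate pulls.
true_means = np.array([0.5, 0.3, 0.1])

def pull(arm):
    return rng.normal(true_means[arm], 1.0)

def ts_best_challenger_round(counts, sums):
    """One round: pick the leader, a Thompson-sampled challenger, pull one arm."""
    means = sums / counts
    leader = int(np.argmax(means))
    # Thompson sample from the closed-form Gaussian posterior N(mean_k, 1/n_k)
    # (flat prior, known unit variance) -- no per-round optimization.
    theta = rng.normal(means, 1.0 / np.sqrt(counts))
    # Challenger: best arm under the posterior sample, excluding the leader.
    theta[leader] = -np.inf
    challenger = int(np.argmax(theta))
    # One simple tie-break: pull whichever of the two is less explored.
    arm = leader if counts[leader] <= counts[challenger] else challenger
    sums[arm] += pull(arm)
    counts[arm] += 1
    return leader

K = len(true_means)
counts = np.ones(K)                               # one initial pull per arm
sums = np.array([pull(k) for k in range(K)])
for _ in range(500):
    leader = ts_best_challenger_round(counts, sums)
print("recommended arm:", leader)
```

Note that the challenger selection here uses a single posterior draw per round; no forced-exploration floor is needed, since posterior variance keeps under-sampled arms competitive as challengers.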