Thompson Exploration with Best Challenger Rule in Best Arm Identification

📅 2023-10-01
🏛️ Asian Conference on Machine Learning
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing fixed-confidence best-arm identification (BAI) methods either require solving an optimization problem every round or enforce a minimum amount of uniform exploration, limiting adaptability to non-Gaussian reward distributions. Method: We propose a novel Bayesian BAI strategy that integrates Thompson sampling with the best challenger rule, marking the first natural incorporation of Thompson sampling into the fixed-confidence BAI framework. It eliminates per-round optimization and minimum exploration constraints. Contribution/Results: We establish a β-optimality analysis framework, proving asymptotic optimality for the two-armed case and approximate optimality guarantees for $K \geq 3$. Empirically, our method achieves sample complexity competitive with asymptotically optimal algorithms while significantly reducing computational overhead. The core innovation is a new Bayesian BAI paradigm that jointly ensures theoretical rigor (provable near-optimal sample complexity) and computational efficiency (closed-form posterior updates and no per-round optimization).
📝 Abstract
This paper studies the fixed-confidence best arm identification (BAI) problem in the bandit framework in the canonical single-parameter exponential models. For this problem, many policies have been proposed, but most of them require solving an optimization problem at every round and/or are forced to explore an arm at least a certain number of times except those restricted to the Gaussian model. To address these limitations, we propose a novel policy that combines Thompson sampling with a computationally efficient approach known as the best challenger rule. While Thompson sampling was originally considered for maximizing the cumulative reward, we demonstrate that it can be used to naturally explore arms in BAI without forcing it. We show that our policy is asymptotically optimal for any two-armed bandit problems and achieves near optimality for general $K$-armed bandit problems for $K \geq 3$. Nevertheless, in numerical experiments, our policy shows competitive performance compared to asymptotically optimal policies in terms of sample complexity while requiring less computation cost. In addition, we highlight the advantages of our policy by comparing it to the concept of $\beta$-optimality, a relaxed notion of asymptotic optimality commonly considered in the analysis of a class of policies including the proposed one.
Problem

Research questions and friction points this paper is trying to address.

Identify the best arm with fixed confidence in single-parameter exponential family bandits
Avoid the per-round optimization and forced minimum exploration required by existing policies
Retain near-optimal sample complexity for general $K$-armed bandit problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Thompson sampling with the computationally efficient best challenger rule
Asymptotically optimal for any two-armed bandit problem
Near-optimal (β-optimal) for general $K$-armed bandit problems with $K \geq 3$
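To make the idea above concrete, here is a minimal illustrative sketch, not the authors' exact algorithm: each round, the empirical best arm (the leader) competes against a challenger chosen via a Thompson-style posterior draw, and the leader is pulled with probability β (the best challenger rule); sampling stops once a generalized-likelihood-ratio statistic clears a confidence threshold. Gaussian rewards with flat priors and the heuristic stopping threshold are simplifying assumptions, and `ts_best_challenger` is a hypothetical name.

```python
import numpy as np

rng = np.random.default_rng(0)

def ts_best_challenger(means, beta=0.5, delta=0.01, max_rounds=100_000):
    """Illustrative sketch only: fixed-confidence BAI combining a
    Thompson-sampled challenger with the best challenger rule, assuming
    unit-variance Gaussian rewards and flat priors."""
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    # Initialize: pull each arm once.
    for a in range(K):
        sums[a] += rng.normal(means[a], 1.0)
        counts[a] += 1
    for t in range(K, max_rounds):
        mu_hat = sums / counts
        leader = int(np.argmax(mu_hat))          # empirical best arm
        # Thompson draw from the Gaussian posterior of each arm.
        theta = rng.normal(mu_hat, 1.0 / np.sqrt(counts))
        theta[leader] = -np.inf                  # challenger must differ
        challenger = int(np.argmax(theta))
        # Best challenger rule: play the leader with probability beta.
        a = leader if rng.random() < beta else challenger
        sums[a] += rng.normal(means[a], 1.0)
        counts[a] += 1
        # GLR-style stopping: compare the leader with its closest rival.
        mu_hat = sums / counts
        leader = int(np.argmax(mu_hat))
        glr = min(
            (mu_hat[leader] - mu_hat[b]) ** 2
            / (2.0 * (1.0 / counts[leader] + 1.0 / counts[b]))
            for b in range(K) if b != leader
        )
        if glr > np.log((1 + np.log(t)) / delta):  # heuristic threshold
            return leader, t + 1
    return int(np.argmax(sums / counts)), max_rounds

arm, n = ts_best_challenger([0.0, 0.5, 2.0])
```

Note that, matching the paper's selling point, no optimization problem is solved per round: the challenger comes from a single posterior draw, so each step costs O(K).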