Multi-Armed Sampling Problem and the End of Exploration

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the exploration-exploitation trade-off in multi-armed sampling, formally defining the problem and proposing a unified temperature-parameterized framework that continuously bridges multi-armed bandits and sampling tasks. Theoretically, the authors prove that under entropy regularization, optimal convergence is achieved without explicit exploration; they derive a novel regret metric and establish a tight lower bound. Methodologically, the approach combines information-theoretic analysis, entropy-regularized modeling, and continuous interpolation to yield a simple, efficient algorithm. The key contribution challenges the conventional wisdom that sampling inherently requires exploration, providing a unifying theoretical foundation for neural samplers, entropy-regularized reinforcement learning (RL), and RL from human feedback (RLHF). Empirical validation demonstrates rapid convergence in fine-tuning and human-feedback settings.
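The entropy-regularized formulation the summary alludes to can be written in its standard form (the symbols $r$, $\tau$, and $\pi$ here are generic notation for the arm rewards, temperature, and arm distribution, not taken from the paper):

```latex
% Entropy-regularized arm-selection objective over the K-simplex,
% and its well-known closed-form softmax (Boltzmann) solution:
\max_{\pi \in \Delta_K}\;
  \mathbb{E}_{a \sim \pi}\!\left[ r(a) \right] + \tau\, \mathcal{H}(\pi)
\quad\Longrightarrow\quad
\pi^{*}(a) \;=\; \frac{\exp\!\left( r(a)/\tau \right)}
                      {\sum_{b=1}^{K} \exp\!\left( r(b)/\tau \right)}.
```

As $\tau \to 0$ the solution concentrates on the best arm, recovering the bandit (optimization) limit; at a fixed $\tau > 0$ the target is a Boltzmann distribution over arms, which is the sampling regime the paper studies.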

📝 Abstract
This paper introduces the framework of multi-armed sampling, as the sampling counterpart to the optimization problem of multi-armed bandits. Our primary motivation is to rigorously examine the exploration-exploitation trade-off in the context of sampling. We systematically define plausible notions of regret for this framework and establish corresponding lower bounds. We then propose a simple algorithm that achieves these optimal regret bounds. Our theoretical results demonstrate that in contrast to optimization, sampling does not require exploration. To further connect our findings with those of multi-armed bandits, we define a continuous family of problems and associated regret measures that smoothly interpolates and unifies multi-armed sampling and multi-armed bandit problems using a temperature parameter. We believe the multi-armed sampling framework and our findings in this setting can have a foundational role in the study of sampling, including recent neural samplers, akin to the role of multi-armed bandits in reinforcement learning. In particular, our work sheds light on the need for exploration and the convergence properties of algorithms for entropy-regularized reinforcement learning, fine-tuning of pretrained models, and reinforcement learning with human feedback (RLHF).
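As a concrete illustration of the temperature interpolation the abstract describes, the sketch below samples arms from a softmax of empirical mean rewards. This is not the paper's algorithm; the function names (`softmax_target`, `sample_arms`), the round-robin initialization, and the Gaussian reward noise are all assumptions of this sketch, chosen only to make the temperature's role visible.

```python
import numpy as np

def softmax_target(rewards, temperature):
    """Target distribution pi_T(a) proportional to exp(r(a)/T).

    T = 1 gives the Boltzmann sampling target; as T -> 0 the mass
    concentrates on the best arm, recovering the bandit limit.
    """
    logits = np.asarray(rewards, dtype=float) / temperature
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def sample_arms(true_means, temperature, horizon, rng):
    """Hypothetical exploration-free loop: pull each arm once,
    then sample actions from the softmax of the empirical means."""
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    actions = []
    for t in range(horizon):
        if t < k:                     # one initial pull per arm
            a = t
        else:
            a = rng.choice(k, p=softmax_target(sums / counts, temperature))
        reward = rng.normal(true_means[a], 0.1)   # assumed noisy reward
        counts[a] += 1
        sums[a] += reward
        actions.append(a)
    return np.asarray(actions)
```

Lowering `temperature` toward zero makes the sampled actions collapse onto the empirically best arm, i.e. the bandit regime; `temperature = 1` keeps the action distribution spread according to the Boltzmann weights, i.e. the sampling regime.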
Problem

Research questions and friction points this paper is trying to address.

Rigorously examines the exploration-exploitation trade-off in the sampling setting
Asks whether an optimal sampling algorithm can dispense with explicit exploration
Seeks a unified view of sampling and bandit problems via a temperature parameter
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the multi-armed sampling framework as the sampling counterpart to multi-armed bandits
Gives a simple algorithm that matches the regret lower bounds without any exploration
Defines a temperature-parameterized family of problems interpolating between sampling and bandits