🤖 AI Summary
This paper studies sequential decision-making over a set of black-box functions, introducing two new problem classes, Functional Multi-Armed Bandits (FMAB) and best function identification, that extend the classical multi-armed bandit (MAB) framework to applications such as competitive large-model training. Each "arm" is modeled as an unknown black-box function, establishing a rigorous theoretical foundation for functional bandits. The authors propose F-LCB, a UCB-type algorithm built via a reduction scheme on nonlinear optimization methods with known convergence rates, enabling a joint analysis of optimization error and online-learning regret. Theoretically, they derive a regret upper bound that depends explicitly on the convergence rate of the underlying optimization algorithm. Empirical evaluations show that F-LCB outperforms standard MAB baselines on function-level selection tasks.
📝 Abstract
Bandit optimization usually refers to the class of online optimization problems with limited feedback: the decision maker uses only the objective value at the current point to make a new decision and does not have access to the gradient of the objective function. While this name accurately captures the limitation in feedback, it is somewhat misleading, since the problem class bears no direct connection to the multi-armed bandit (MAB) problem. We propose two new problem classes: the functional multi-armed bandit problem (FMAB) and the best function identification problem. They modify the multi-armed bandit problem and the best arm identification problem, respectively, so that each arm represents an unknown black-box function. These problem classes are a surprisingly good fit for modeling real-world problems such as competitive LLM training. To solve problems from these classes, we propose a reduction scheme that constructs UCB-type algorithms, namely the F-LCB algorithm, from nonlinear optimization algorithms with known convergence rates. We provide regret upper bounds for this reduction scheme based on the base algorithms' convergence rates, and we include numerical experiments demonstrating the performance of the proposed scheme.
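The abstract describes the reduction only at a high level, but the idea can be sketched as follows: each arm keeps its own base-optimizer state, the base method's known convergence rate supplies an optimistic correction to the currently observed function value, and each round the arm with the smallest lower confidence bound takes one more optimization step. In this minimal sketch the quadratic objectives, the gradient-descent base method, the constant `c`, and the `c / t_i` correction (matching an assumed O(1/t) rate) are all illustrative assumptions, not the paper's exact construction.

```python
class Arm:
    """One black-box function paired with its own base-optimizer state.

    The minimizer and minimum value are hidden from the learner, which
    only observes function values (zeroth-order feedback).
    """

    def __init__(self, minimizer, min_value):
        self.b = minimizer   # argmin of f (unknown to the learner)
        self.m = min_value   # minimal value of f (unknown to the learner)
        self.x = 0.0         # current iterate of the base optimizer
        self.t = 0           # number of optimization steps taken so far

    def value(self):
        # Observed feedback: f(x) = (x - b)^2 + m (an illustrative choice).
        return (self.x - self.b) ** 2 + self.m

    def step(self, lr=0.25):
        # One step of the base algorithm (here: plain gradient descent).
        grad = 2.0 * (self.x - self.b)
        self.x -= lr * grad
        self.t += 1


def f_lcb(arms, rounds, c=4.0):
    """Sketch of the F-LCB selection rule under assumed specifics.

    LCB_i = f_i(x_i) - c / t_i treats the base method's O(1/t)
    convergence rate as an optimistic correction: the true minimum of
    arm i can be at most about c / t_i below the current value.
    """
    pulls = [0] * len(arms)
    for arm in arms:          # warm-up: one base-optimizer step per arm
        arm.step()
    for _ in range(rounds):
        lcbs = [a.value() - c / a.t for a in arms]
        i = min(range(len(arms)), key=lambda j: lcbs[j])
        arms[i].step()        # advance only the selected arm
        pulls[i] += 1
    return pulls
```

For example, with three arms whose minimal values are 0.5, 0.0, and 1.5, the rule quickly concentrates its steps on the arm with the smallest minimum while still paying a brief exploration cost on the others, mirroring the optimization-error/regret trade-off analyzed in the paper.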