🤖 AI Summary
This paper addresses the Combined Algorithm Selection and Hyperparameter optimization (CASH) problem, a challenging resource allocation problem in AutoML. We propose MaxUCB, a max k-armed bandit method that trades off exploring different model classes against conducting hyperparameter optimization within each. Its key feature is a bandit strategy tailored to the light-tailed, bounded reward distributions that arise in this setting, in contrast to classic max k-armed bandit methods, which assume heavy-tailed rewards. We evaluate MaxUCB theoretically and empirically on four standard AutoML benchmarks, where it outperforms prior approaches.
📝 Abstract
Combined Algorithm Selection and Hyperparameter optimization (CASH) is a challenging resource allocation problem in the field of AutoML. We propose MaxUCB, a max $k$-armed bandit method to trade off exploring different model classes and conducting hyperparameter optimization. MaxUCB is specifically designed for the light-tailed and bounded reward distributions arising in this setting and thus provides an efficient alternative to classic max $k$-armed bandit methods, which assume heavy-tailed reward distributions. We evaluate our method theoretically and empirically on four standard AutoML benchmarks, demonstrating superior performance over prior approaches.
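To make the max $k$-armed bandit framing concrete, here is a minimal illustrative sketch: each arm is a model class, a pull is one hyperparameter evaluation returning a bounded validation score, and the bandit tracks the *best* score per arm rather than the mean. Note that the index formula `best + c * sqrt(log t / n)` is a generic UCB-style heuristic of our own choosing, not the MaxUCB index from the paper, and the arm names and reward distributions are synthetic.

```python
import math
import random

def max_k_armed_ucb(arms, n_rounds, c=0.5, seed=0):
    """Generic max k-armed bandit loop (illustrative, NOT the paper's
    MaxUCB index). Each arm is a model class; a pull is one
    hyperparameter evaluation returning a bounded validation score.
    Unlike a mean-reward bandit, we track the best score seen per arm,
    since CASH cares about the single best configuration found."""
    rng = random.Random(seed)
    counts = {a: 0 for a in arms}
    best = {a: float("-inf") for a in arms}
    for t in range(1, n_rounds + 1):
        # Try every arm once first, then pick the highest optimistic index.
        untried = [a for a in arms if counts[a] == 0]
        if untried:
            arm = untried[0]
        else:
            arm = max(
                arms,
                key=lambda a: best[a] + c * math.sqrt(math.log(t) / counts[a]),
            )
        reward = arms[arm](rng)  # one HPO evaluation for that model class
        counts[arm] += 1
        best[arm] = max(best[arm], reward)
    winner = max(arms, key=lambda a: best[a])
    return winner, best, counts

# Toy example: two synthetic "model classes" with scores clipped to [0, 1]
# (hypothetical reward distributions, chosen only for the demo).
arms = {
    "gbm": lambda rng: min(1.0, max(0.0, rng.gauss(0.80, 0.05))),
    "mlp": lambda rng: min(1.0, max(0.0, rng.gauss(0.70, 0.05))),
}
winner, best, counts = max_k_armed_ucb(arms, n_rounds=100)
```

Because the rewards here are clipped Gaussians, they are bounded and light-tailed, matching the regime the abstract describes; a heavy-tailed max-bandit method would allocate exploration very differently under these conditions.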