🤖 AI Summary
To address the inefficiency in model and hyperparameter selection for non-expert users in automated machine learning (AutoML), this paper proposes an efficient search method grounded in the multi-armed bandit (MAB) framework. The method integrates Bayesian optimization with an adaptive continuous filtering mechanism: Gaussian process–based upper confidence bound (UCB) is employed to prune suboptimal configurations early, while a Softmax policy dynamically allocates evaluation resources to balance exploration and exploitation. Crucially, configuration selection and resource scheduling are jointly optimized within a unified framework. Experiments across diverse time budgets demonstrate that the proposed approach significantly accelerates the AutoML pipeline, achieving superior predictive performance and anytime performance compared to current state-of-the-art methods.
📝 Abstract
Machine learning has achieved great success in many application areas. However, for non-expert practitioners, addressing a machine learning task successfully and efficiently remains challenging: finding the optimal model or hyperparameter combination among a large number of alternatives usually requires considerable expert knowledge and experience. To tackle this problem, we propose a combined Bayesian Optimization and Adaptive Successive Filtering algorithm (BOASF) under a unified multi-armed bandit framework to automate model selection and hyperparameter optimization. Specifically, BOASF proceeds in multiple evaluation rounds, in each of which promising configurations are selected for each arm via Bayesian optimization. ASF then adaptively discards poorly performing arms early using a Gaussian UCB-based probabilistic model. Furthermore, a Softmax model adaptively allocates the available resources among the promising arms that advance to the next round: an arm with a higher probability of advancing is allocated more resources. Experimental results show that BOASF effectively speeds up the model selection and hyperparameter optimization processes while achieving more robust and better predictive performance than existing state-of-the-art automated machine learning methods. Moreover, BOASF achieves better anytime performance under various time budgets.
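The round-based structure described above (treat each model family as a bandit arm, score arms with a Gaussian UCB, prune the weak ones, and split the next round's budget by a Softmax over arm scores) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the arm definitions, the `beta` exploration weight, the temperature, and the keep fraction are all assumed for the example, and the Bayesian-optimization step inside each arm is abstracted away as a stochastic `evaluate` callable.

```python
import math
import random

def gaussian_ucb(scores, beta=1.0):
    """Mean plus a beta-weighted standard-error bonus (assumed UCB form)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / max(n - 1, 1)
    return mean + beta * math.sqrt(var / n)

def softmax(xs, temp=0.1):
    """Temperature-scaled softmax, numerically stabilized by the max trick."""
    m = max(xs)
    exps = [math.exp((x - m) / temp) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def boasf_sketch(arms, total_budget=120, rounds=3, keep_frac=0.5, seed=0):
    """arms: dict name -> evaluate(rng) callable returning a validation score.
    Each round: spend each arm's allocation, score arms via Gaussian UCB,
    discard the weakest arms, and split the next round's budget by softmax."""
    rng = random.Random(seed)
    active = list(arms)
    history = {a: [] for a in active}
    budget_per_round = total_budget // rounds
    alloc = {a: budget_per_round // len(active) for a in active}
    for _ in range(rounds):
        for a in active:
            for _ in range(max(alloc[a], 1)):
                # Stand-in for one BO-guided configuration evaluation.
                history[a].append(arms[a](rng))
        ucbs = {a: gaussian_ucb(history[a]) for a in active}
        # Adaptive successive filtering: keep only the top fraction of arms.
        active = sorted(active, key=lambda a: ucbs[a], reverse=True)
        active = active[: max(1, math.ceil(len(active) * keep_frac))]
        # Softmax allocation: stronger surviving arms get more budget.
        probs = softmax([ucbs[a] for a in active])
        alloc = {a: max(1, int(budget_per_round * p))
                 for a, p in zip(active, probs)}
    return max(active, key=lambda a: gaussian_ucb(history[a])), history

# Toy arms: each "model family" has a different mean validation accuracy.
arms = {
    "svm": lambda rng: rng.gauss(0.70, 0.05),
    "rf":  lambda rng: rng.gauss(0.80, 0.05),
    "knn": lambda rng: rng.gauss(0.60, 0.05),
}
best, hist = boasf_sketch(arms)
```

With these toy arms the filtering typically eliminates `knn` after the first round and concentrates the remaining budget on the strongest arm, illustrating the anytime behaviour the abstract describes: a usable incumbent exists after every round, and more budget refines it.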