BOASF: A Unified Framework for Speeding up Automatic Machine Learning via Adaptive Successive Filtering

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency of model and hyperparameter selection for non-expert users in automated machine learning (AutoML), this paper proposes an efficient search method grounded in the multi-armed bandit (MAB) framework. The method combines Bayesian optimization with an adaptive successive filtering (ASF) mechanism: a Gaussian UCB-based probabilistic model is employed to prune suboptimal configurations early, while a Softmax policy dynamically allocates evaluation resources to balance exploration and exploitation. Crucially, configuration selection and resource scheduling are jointly optimized within a unified framework. Experiments across diverse time budgets demonstrate that the proposed approach significantly accelerates the AutoML pipeline, achieving superior predictive performance and anytime performance compared to current state-of-the-art methods.
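The filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the `beta` exploration weight and the keep/discard rule (retain any arm whose Gaussian UCB still reaches the best observed mean) are assumptions made for this sketch.

```python
import math

def gaussian_ucb(scores, beta=2.0):
    """Upper confidence bound from an arm's observed validation scores:
    empirical mean plus beta times the standard error of the mean."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    return mean + beta * math.sqrt(var / n)

def filter_arms(arm_scores, beta=2.0):
    """Adaptively keep arms whose UCB reaches the best observed mean
    performance; discard the rest early (the ASF pruning idea)."""
    means = {a: sum(s) / len(s) for a, s in arm_scores.items()}
    best_mean = max(means.values())
    return [a for a, s in arm_scores.items()
            if gaussian_ucb(s, beta) >= best_mean]
```

A clearly weak arm (low mean, low variance) is discarded, while an uncertain arm with a wide confidence bound survives to receive more evaluations in the next round.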

📝 Abstract
Machine learning has achieved great success in many application areas. However, for non-expert practitioners, addressing a machine learning task successfully and efficiently remains challenging. Finding the optimal machine learning model or hyperparameter configuration among a large number of possible alternatives usually requires considerable expert knowledge and experience. To tackle this problem, we propose a combined Bayesian Optimization and Adaptive Successive Filtering algorithm (BOASF) under a unified multi-armed bandit framework to automate model selection and hyperparameter optimization. Specifically, BOASF consists of multiple evaluation rounds, in each of which promising configurations are selected for each arm using Bayesian optimization. ASF then adaptively discards poorly performing arms early using a Gaussian UCB-based probabilistic model. Furthermore, a Softmax model adaptively allocates the available resources among the promising arms that advance to the next round: an arm with a higher probability of advancing receives more resources. Experimental results show that BOASF speeds up both model selection and hyperparameter optimization while achieving more robust and better prediction performance than existing state-of-the-art automatic machine learning methods. Moreover, BOASF achieves better anytime performance under various time budgets.
Problem

Research questions and friction points this paper is trying to address.

Automating model selection for non-expert practitioners
Optimizing hyperparameters efficiently without expert knowledge
Speeding up AutoML with adaptive resource allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Bayesian Optimization and Adaptive Successive Filtering
Uses Gaussian UCB for early poor-performance discarding
Employs Softmax for adaptive resource allocation
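The Softmax allocation idea in the last bullet can be sketched as a budget split over the surviving arms. The `temperature` parameter and the use of per-arm UCB scores as the Softmax inputs are illustrative assumptions; the paper's own model may differ.

```python
import math

def softmax_allocation(arm_ucb, total_budget, temperature=1.0):
    """Split an evaluation budget across arms via a Softmax over their
    UCB scores: arms more likely to advance receive more resources."""
    exps = {a: math.exp(u / temperature) for a, u in arm_ucb.items()}
    z = sum(exps.values())
    return {a: total_budget * e / z for a, e in exps.items()}
```

Lowering `temperature` concentrates the budget on the current best arms (exploitation); raising it spreads the budget more evenly (exploration).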
Authors

- Guanghui Zhu — State Key Laboratory for Novel Software Technology, Nanjing University, China
- Xin Fang — State Key Laboratory for Novel Software Technology, Nanjing University, China
- Lei Wang — State Key Laboratory for Novel Software Technology, Nanjing University, China
- Wenzhong Chen — State Key Laboratory for Novel Software Technology, Nanjing University, China
- Rong Gu — Mälardalen University (Formal Methods, Machine Learning, Autonomous Systems)
- Chunfeng Yuan — National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (Computer Vision, Pattern Recognition, Machine Learning, Human Action Recognition, Sparse Representation)
- Yihua Huang — State Key Laboratory for Novel Software Technology, Nanjing University, China