Cost-aware Stopping for Bayesian Optimization

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Early stopping in Bayesian optimization (BO) of expensive black-box functions remains challenging due to the lack of principled, cost-aware criteria. Method: The authors propose an adaptive stopping rule that accounts for varying evaluation costs and requires no heuristic tuning. The rule is grounded in a theoretical connection to state-of-the-art cost-aware acquisition functions, namely the Pandora's Box Gittins Index (PBGI) and log expected improvement per cost, and comes with a proven bound on the expected cumulative evaluation cost incurred when the rule is paired with either acquisition function. Results: In experiments on synthetic and empirical tasks, including hyperparameter optimization and neural architecture size search, the stopping rule combined with PBGI consistently matches or outperforms other acquisition-function and stopping-rule pairs in cost-adjusted simple regret, a metric trading off solution quality against cumulative evaluation cost.

📝 Abstract
In automated machine learning, scientific discovery, and other applications of Bayesian optimization, deciding when to stop evaluating expensive black-box functions is an important practical consideration. While several adaptive stopping rules have been proposed, in the cost-aware setting they lack guarantees ensuring they stop before incurring excessive function evaluation costs. We propose a cost-aware stopping rule for Bayesian optimization that adapts to varying evaluation costs and is free of heuristic tuning. Our rule is grounded in a theoretical connection to state-of-the-art cost-aware acquisition functions, namely the Pandora's Box Gittins Index (PBGI) and log expected improvement per cost. We prove a theoretical guarantee bounding the expected cumulative evaluation cost incurred by our stopping rule when paired with these two acquisition functions. In experiments on synthetic and empirical tasks, including hyperparameter optimization and neural architecture size search, we show that combining our stopping rule with the PBGI acquisition function consistently matches or outperforms other acquisition-function–stopping-rule pairs in terms of cost-adjusted simple regret, a metric capturing trade-offs between solution quality and cumulative evaluation cost.
Problem

Research questions and friction points this paper is trying to address.

Deciding when to stop expensive Bayesian optimization evaluations
Lack of cost-aware stopping rules with theoretical guarantees
Need for adaptive stopping rules without heuristic tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-aware stopping rule for Bayesian optimization
Adapts to varying evaluation costs
Theoretical guarantee for cumulative cost
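The general idea behind cost-aware stopping can be sketched as a simple expected-improvement-per-cost check: stop once no candidate's expected gain justifies its evaluation cost. This is a hedged illustration only; the threshold, the Gaussian candidate posteriors, and the function names below are assumptions for the sketch, not the paper's exact PBGI-based rule.

```python
import math

def normal_pdf(z):
    # Standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best):
    # EI for minimization: E[max(best - f, 0)] where f ~ N(mu, sigma^2)
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    return (best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

def should_stop(posterior, costs, best, threshold=1e-3):
    """Stop when no candidate offers expected improvement per unit cost
    above `threshold`, i.e., further evaluation is not worth paying for.

    posterior: list of (mu, sigma) posterior marginals at candidate points
    costs:     per-candidate evaluation costs (same length as posterior)
    best:      best observed objective value so far (minimization)
    """
    best_ratio = max(
        expected_improvement(mu, sigma, best) / c
        for (mu, sigma), c in zip(posterior, costs)
    )
    return best_ratio < threshold
```

For example, a candidate with posterior mean equal to the incumbent and unit uncertainty still promises substantial improvement per unit cost, so the rule continues; a candidate far above the incumbent with near-zero uncertainty triggers a stop.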