🤖 AI Summary
For costly, constrained black-box optimization problems, existing Bayesian optimization (BO) methods incur high overhead due to reliance on high-fidelity evaluations and lack both feasibility guarantees and systematic stopping criteria. This paper proposes the Constraint-aware Multi-Fidelity BO (CMFBO) framework: (i) it is the first to jointly model variable constraints across fidelity levels; (ii) it introduces a cost-aware acquisition function that leverages low-fidelity data to substantially reduce total sampling cost; and (iii) it incorporates a theoretically grounded, adaptive stopping criterion based on statistical significance for automatic convergence detection. On multiple benchmarks, CMFBO reduces total cost by 30–65% over state-of-the-art methods while achieving a constraint satisfaction rate of 99.2%. The open-source implementation is integrated into the GP+ Python package.
📝 Abstract
Bayesian optimization (BO) is increasingly employed in critical applications to find the optimal design with minimal cost. While BO is known for its sample efficiency, relying solely on costly high-fidelity data can still result in high costs. This is especially the case in constrained search spaces where BO must not only optimize but also ensure feasibility. A related issue in the BO literature is the lack of a systematic stopping criterion. To address these challenges, we develop a constrained cost-aware multi-fidelity BO (CMFBO) framework whose goal is to minimize overall sampling costs by utilizing inexpensive low-fidelity sources while ensuring feasibility. In our setting, the constraints can change across the data sources and may even be black-box functions. We also introduce a systematic stopping criterion that addresses the long-standing difficulty of assessing BO's convergence. Our framework is publicly available on GitHub through the GP+ Python package, and herein we validate its efficacy on multiple benchmark problems.
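To make the core idea concrete, here is a minimal sketch of a constrained, cost-aware acquisition of the kind the abstract describes: expected improvement weighted by the probability of feasibility and discounted by the sampling cost of each data source. This is an illustrative toy using standard formulas, not the actual CMFBO acquisition from the paper or the GP+ API; the function name and the "feasible when g(x) ≤ 0" convention are assumptions for this example.

```python
import numpy as np
from scipy.stats import norm

def constrained_cost_aware_ei(mu_f, sigma_f, mu_g, sigma_g,
                              best_feasible, cost):
    """Feasibility-weighted expected improvement per unit sampling cost.

    mu_f, sigma_f : GP posterior mean/std of the objective at candidates
    mu_g, sigma_g : GP posterior mean/std of the constraint (feasible: g <= 0)
    best_feasible : best feasible objective value observed so far (minimization)
    cost          : evaluation cost of the data source (fidelity level)
    """
    sigma_f = np.maximum(sigma_f, 1e-12)
    sigma_g = np.maximum(sigma_g, 1e-12)
    # Standard expected improvement for minimization.
    z = (best_feasible - mu_f) / sigma_f
    ei = sigma_f * (z * norm.cdf(z) + norm.pdf(z))
    # Probability the (black-box) constraint is satisfied: P(g(x) <= 0).
    p_feas = norm.cdf(-mu_g / sigma_g)
    # Discount by the cost of querying this fidelity level.
    return ei * p_feas / cost

# Compare the same candidate under a cheap low-fidelity source (cost 1)
# and an expensive high-fidelity source (cost 4).
mu_f = np.array([0.0]); sigma_f = np.array([1.0])
mu_g = np.array([0.0]); sigma_g = np.array([1.0])
lo = constrained_cost_aware_ei(mu_f, sigma_f, mu_g, sigma_g, 0.0, cost=1.0)
hi = constrained_cost_aware_ei(mu_f, sigma_f, mu_g, sigma_g, 0.0, cost=4.0)
print(lo[0], hi[0])
```

With identical posteriors, the cheap source scores four times higher, so the acquisition naturally routes early exploration to inexpensive low-fidelity queries, which is the mechanism behind the cost reductions the summary reports.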