🤖 AI Summary
This work addresses high-dimensional black-box constrained optimization, where function evaluations are expensive, gradient information is unavailable, and the feasible region is complex. The authors propose a Bayesian optimization method that couples a penalty function with a trust-region mechanism. Constraint violations are folded into the objective via penalty terms, yielding an unconstrained formulation; a local Gaussian process surrogate is then fit, and candidate points are sampled within a dynamically adjusted trust region using the expected improvement criterion, balancing exploration and exploitation. Across multiple high-dimensional synthetic and real-world constrained problems, experiments show the method consistently reaches high-quality feasible solutions with significantly fewer function evaluations than state-of-the-art approaches, improving both sample efficiency and optimization stability.
📝 Abstract
Constrained optimization in high-dimensional black-box settings is difficult due to expensive evaluations, the lack of gradient information, and complex feasible regions. In this work, we propose a Bayesian optimization method that combines a penalty formulation, a local surrogate model, and a trust-region strategy. The constrained problem is converted to an unconstrained one by penalizing constraint violations, which provides a unified modeling framework. A trust region restricts the search to a neighborhood of the current best solution, improving stability and efficiency in high dimensions. Within this region, the Expected Improvement acquisition function selects evaluation points by trading off predicted improvement against uncertainty. Integrating penalty-based constraint handling with local surrogate modeling enables efficient exploration of feasible regions while maintaining sample efficiency. We compare the proposed method with state-of-the-art methods on synthetic and real-world high-dimensional constrained optimization problems. The results show that the method identifies high-quality feasible solutions with fewer evaluations and maintains stable performance across different settings.
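The loop described above — penalize constraint violations into an unconstrained objective, fit a local Gaussian process surrogate, and pick the next evaluation by Expected Improvement inside an adaptively resized trust region — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy problem, the fixed RBF length-scale, the penalty weight `rho`, and the trust-region update rule are all assumptions made for the sketch.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel with a fixed (assumed) length-scale."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_fit(X, y, noise=1e-3):
    """Cholesky-based GP fit; returns factor L and weights alpha."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return L, alpha

def gp_predict(Xs, X, L, alpha):
    """Posterior mean and standard deviation at candidate points Xs."""
    Ks = rbf(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.maximum(1.0 - (v**2).sum(axis=0), 1e-12)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - Y, 0)] under the GP posterior."""
    z = (best - mu) / sigma
    Phi = np.array([0.5 * (1 + erf(zi / np.sqrt(2))) for zi in z])
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (best - mu) * Phi + sigma * phi

# Illustrative toy problem (not from the paper):
# minimize ||x||^2 subject to x0 + x1 >= 1 on [0, 1]^2.
f = lambda x: float((x**2).sum())
g = lambda x: 1.0 - x[0] - x[1]                  # g(x) <= 0 means feasible
rho = 100.0                                      # assumed penalty weight
F = lambda x: f(x) + rho * max(0.0, g(x)) ** 2   # penalized objective

dim, lb, ub = 2, 0.0, 1.0
X = rng.uniform(lb, ub, (5, dim))                # initial design
y = np.array([F(x) for x in X])
radius = 0.5                                     # trust-region half-width

for it in range(30):
    best_i = y.argmin()
    xc, best = X[best_i], y[best_i]
    # Sample candidates only inside the trust region around the incumbent.
    cand = np.clip(xc + rng.uniform(-radius, radius, (200, dim)), lb, ub)
    # Standardize targets for GP numerical stability.
    mu_y, sd_y = y.mean(), y.std() + 1e-9
    L, alpha = gp_fit(X, (y - mu_y) / sd_y)
    mu, sigma = gp_predict(cand, X, L, alpha)
    ei = expected_improvement(mu, sigma, (best - mu_y) / sd_y)
    x_new = cand[ei.argmax()]
    y_new = F(x_new)
    X, y = np.vstack([X, x_new]), np.append(y, y_new)
    # Simple trust-region update: expand on improvement, shrink otherwise.
    radius = min(1.0, radius * 2) if y_new < best else max(0.05, radius / 2)

best_x = X[y.argmin()]
print(best_x, y.min())
```

Because every GP fit and every candidate batch stays inside the trust region, the surrogate only has to be accurate locally, which is what makes this style of method viable in high dimensions where a single global surrogate would be poorly conditioned.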