🤖 AI Summary
This work addresses diversity optimization for high-dimensional black-box functions. We propose divTuRBO1, the first adaptation of trust-region Bayesian optimization (TuRBO) to diversity-driven settings. Methodologically, it combines a Gaussian process surrogate model, dynamic trust-region management, and inter-solution distance constraints, and it introduces two search strategies, sequential and interleaved, to jointly improve solution quality and diversity under limited function-evaluation budgets. Experiments on benchmark functions spanning 2 to 20 dimensions show that divTuRBO1 outperforms baselines such as ROBOT. Notably, in higher-dimensional regimes (10D and above), it obtains higher-quality and more broadly distributed diverse solution sets using fewer function evaluations. These results provide empirical evidence that the trust-region mechanism effectively improves diversity-optimization performance.
📝 Abstract
Bayesian optimisation (BO) is a surrogate-based optimisation technique that efficiently optimises expensive black-box functions under small evaluation budgets. Recent studies employ trust regions to improve the scalability of BO as the number of problem dimensions grows. Motivated by this line of work, we explore the effectiveness of trust-region-based BO algorithms for diversity optimisation on black-box problems of varying dimensionality. We propose diversity optimisation approaches that extend TuRBO1, the first BO method to use a trust-region-based approach for scalability. Our extension, divTuRBO1, finds an optimal solution while maintaining a given distance threshold relative to a reference solution set. We propose two approaches that combine divTuRBO1 runs, in a sequential and an interleaving fashion, to find diverse solutions for black-box functions. We experimentally compare the proposed algorithms with the baseline method ROBOT (rank-ordered Bayesian optimisation with trust regions) on benchmark functions with 2 to 20 dimensions. The experiments demonstrate that the proposed methods perform well, particularly in higher dimensions, even with a limited evaluation budget.
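The core constraint described in the abstract, proposing a candidate inside a trust region while keeping a minimum distance to a reference solution set, can be sketched as follows. This is an illustrative reconstruction under our own assumptions, not the authors' implementation: the function name, the uniform candidate sampling, and the generic `surrogate` callable (standing in for a GP posterior mean) are all hypothetical.

```python
import numpy as np

def constrained_trust_region_step(surrogate, center, length, reference_set,
                                  d_min, n_candidates=256, rng=None):
    """One divTuRBO1-style step (sketch): sample candidates inside the
    current trust region, discard any closer than d_min to a reference
    solution, and return the candidate the surrogate predicts to be best
    (minimisation). `surrogate` maps an (n, dim) array to n predictions."""
    rng = np.random.default_rng(rng)
    dim = center.shape[0]
    # Sample uniformly inside a hyper-rectangular trust region of side `length`.
    cands = center + length * (rng.random((n_candidates, dim)) - 0.5)
    if len(reference_set) > 0:
        # Euclidean distance from each candidate to its nearest reference solution.
        dists = np.linalg.norm(
            cands[:, None, :] - np.asarray(reference_set)[None, :, :], axis=-1
        ).min(axis=1)
        cands = cands[dists >= d_min]   # enforce the diversity constraint
    if cands.shape[0] == 0:
        return None                     # trust region lies too close to references
    preds = surrogate(cands)            # surrogate mean predictions
    return cands[np.argmin(preds)]

# Toy usage: a sphere-function surrogate, one previous solution at the origin.
best = constrained_trust_region_step(
    surrogate=lambda x: np.sum(x**2, axis=1),
    center=np.zeros(2), length=1.0,
    reference_set=[np.zeros(2)], d_min=0.2, rng=0,
)
```

The sequential strategy from the abstract would call such a step repeatedly, appending each run's best solution to `reference_set`; the interleaved strategy would alternate steps among several such constrained runs.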