🤖 AI Summary
This work addresses the challenge of efficiently optimizing an expensive black-box function when noisy evaluations can be run in parallel, a setting where existing Bayesian optimization methods often lack theoretical guarantees or perform poorly in practice. The authors propose Randomized Kriging Believer (Randomized KB), which augments the classical Kriging Believer heuristic with a randomization mechanism. The method retains the low computational complexity and implementation simplicity of the original heuristic while supporting asynchronous parallelism and remaining compatible with diverse Bayesian optimization frameworks. Notably, it provides the first Bayesian expected regret bound for KB-based parallel Bayesian optimization, giving the heuristic rigorous theoretical justification. Empirical evaluations on synthetic functions, standard benchmarks, and real-world simulators show that Randomized KB consistently achieves strong optimization efficiency, consistent with the established regret bound.
📝 Abstract
We consider the problem of optimizing an expensive-to-evaluate black-box function, where noisy function values can be obtained in parallel. For this problem, parallel Bayesian optimization (PBO) is a promising approach that aims to optimize with fewer function evaluations by selecting a diverse set of inputs for parallel evaluation. However, existing PBO methods either perform poorly in practice or lack theoretical guarantees. In this study, we propose a PBO method, called randomized Kriging believer (KB), which builds on the well-known KB heuristic and inherits the advantages of the original KB: low computational complexity, a simple implementation, versatility across various BO methods, and applicability to asynchronous parallelization. Furthermore, we show that our randomized KB achieves a Bayesian expected regret guarantee. We demonstrate the effectiveness of the proposed method through experiments on synthetic and benchmark functions and on emulators of real-world data.
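To make the underlying heuristic concrete, below is a minimal sketch of the *classical* Kriging Believer batch-selection loop (not the randomized variant proposed in this paper): at each step the acquisition function (here a UCB rule, chosen for illustration) picks a point, and the GP posterior mean at that point is "believed" as a pseudo-observation before selecting the next point, yielding a diverse batch for parallel evaluation. The tiny GP, the RBF lengthscale, and the UCB parameter are all illustrative assumptions.

```python
import numpy as np

# Minimal GP posterior with an RBF kernel (numpy only), used to sketch the
# classical Kriging Believer (KB) batch-selection heuristic. All names and
# hyperparameters are illustrative, not the paper's implementation.

def rbf(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Exact GP posterior mean and std at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf(Xs, Xs)) - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def kriging_believer_batch(X, y, candidates, q=3, beta=2.0):
    """Select a batch of q points sequentially: each step maximizes a UCB
    acquisition, then the GP mean at the chosen point is 'believed' as a
    pseudo-observation before the next point is chosen."""
    Xf, yf = X.copy(), y.copy()
    batch = []
    for _ in range(q):
        mu, sd = gp_posterior(Xf, yf, candidates)
        x_next = candidates[np.argmax(mu + beta * sd)]
        mu_next, _ = gp_posterior(Xf, yf, np.array([x_next]))
        Xf = np.append(Xf, x_next)      # fantasized input
        yf = np.append(yf, mu_next[0])  # believed (posterior-mean) output
        batch.append(x_next)
    return np.array(batch)

# Toy 1-D example: maximize a noisy sine from 5 initial observations.
rng = np.random.default_rng(0)
X0 = rng.uniform(0, 1, 5)
y0 = np.sin(3 * X0) + 0.01 * rng.standard_normal(5)
grid = np.linspace(0, 1, 200)
batch = kriging_believer_batch(X0, y0, grid, q=3)
print(batch)  # q distinct inputs, spread out for parallel evaluation
```

Believing the posterior mean collapses the predictive variance near each selected point, so subsequent acquisition maximizations are pushed toward unexplored regions; the paper's randomized variant modifies this fantasizing step so that an expected regret bound can be established.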