🤖 AI Summary
In Bayesian optimization, expected improvement (EI) is one of the most widely used acquisition functions, yet randomized EI variants based on posterior sampling have long lacked rigorous theoretical guarantees. To address this gap, we propose a randomized EI strategy: under a Gaussian process model, we sample a function path from the posterior distribution and evaluate EI with respect to the maximum of that sampled path when selecting the next query point. We establish the first sublinear Bayesian cumulative regret bound for this method, closing a gap in the theory of randomized EI. Numerical experiments further show that the proposed approach performs favorably against standard EI and other benchmark methods on black-box optimization problems.
📝 Abstract
Bayesian optimization is a powerful tool for optimizing an expensive-to-evaluate black-box function. In particular, the effectiveness of expected improvement (EI) has been demonstrated in a wide range of applications. However, theoretical analyses of EI are limited compared with those of other, theoretically established algorithms. This paper analyzes a randomized variant of EI that evaluates the EI using the maximum of a posterior sample path. We show that this posterior sampling-based randomized EI achieves a sublinear Bayesian cumulative regret bound under the assumption that the black-box function follows a Gaussian process. Finally, we demonstrate the effectiveness of the proposed method through numerical experiments.
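One iteration of the strategy described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (a one-dimensional input, a fixed RBF kernel, a zero-mean prior, and a discretized grid standing in for the continuous domain); all function names and hyperparameters are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(x1, x2, lengthscale=0.2):
    # squared-exponential kernel on 1-D inputs (illustrative choice)
    d = np.subtract.outer(x1, x2)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-6):
    # standard zero-mean GP regression: posterior mean and covariance on a grid
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = rbf_kernel(x_grid, x_grid) - v.T @ v
    return mu, cov

def randomized_ei(mu, cov, rng):
    # draw one sample path from the posterior on the grid
    path = rng.multivariate_normal(mu, cov + 1e-9 * np.eye(len(mu)),
                                   check_valid="ignore")
    tau = path.max()  # maximum of the sampled path: the randomized incumbent
    sigma = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    z = (mu - tau) / sigma
    # closed-form EI evaluated with respect to tau instead of the best observation
    ei = (mu - tau) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei, tau

# One step of the sequential decision rule on toy data.
rng = np.random.default_rng(0)
x_train = np.array([0.1, 0.5, 0.9])
y_train = np.array([0.2, 1.0, 0.3])
x_grid = np.linspace(0.0, 1.0, 101)
mu, cov = gp_posterior(x_train, y_train, x_grid)
ei, tau = randomized_ei(mu, cov, rng)
x_next = x_grid[np.argmax(ei)]  # next evaluation point
```

The only change relative to standard EI is the incumbent: instead of the best observed value, EI is computed against the maximum of a freshly sampled posterior path, which injects the randomization that the regret analysis relies on.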