Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization

📅 2025-07-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In Bayesian optimization, Expected Improvement (EI) is widely adopted, yet theoretical analysis of its randomized, posterior sampling-based variants has lacked rigorous guarantees. To address this gap, we propose a randomized EI strategy: under a Gaussian process model, we sample function paths from the posterior distribution and use the maximum of a sampled path as the improvement threshold in the EI criterion for sequential decision-making. We establish, for the first time, a sublinear Bayesian cumulative regret bound for this method; sublinearity implies that the average regret vanishes as the number of evaluations grows. Numerical experiments further confirm that the proposed approach outperforms standard EI and leading benchmark methods in black-box function optimization.

📝 Abstract
Bayesian optimization is a powerful tool for optimizing an expensive-to-evaluate black-box function. In particular, the effectiveness of expected improvement (EI) has been demonstrated in a wide range of applications. However, theoretical analyses of EI are limited compared with other theoretically established algorithms. This paper analyzes a randomized variant of EI, which evaluates the EI from the maximum of a posterior sample path. We show that this posterior sampling-based random EI achieves a sublinear Bayesian cumulative regret bound under the assumption that the black-box function follows a Gaussian process. Finally, we demonstrate the effectiveness of the proposed method through numerical experiments.
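The acquisition rule described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed toy settings (an RBF kernel, a one-dimensional candidate grid, and hypothetical hyperparameters), not the paper's implementation: a path is drawn from the GP posterior, its maximum replaces the usual best-observed value as the improvement threshold, and the closed-form EI is maximized over the grid.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.3):
    # squared-exponential kernel (lengthscale is an illustrative choice)
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # standard GP posterior mean and covariance on the candidate grid Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, rbf(Xs, Xs) - v.T @ v

def sampled_max_ei(mu, cov, rng):
    # draw one posterior sample path; its maximum serves as the EI threshold
    path = rng.multivariate_normal(mu, cov + 1e-9 * np.eye(len(mu)))
    xi = path.max()
    sigma = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    z = (mu - xi) / sigma
    Phi = np.array([0.5 * (1 + erf(t / sqrt(2))) for t in z])  # normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)                 # normal PDF
    return (mu - xi) * Phi + sigma * phi                       # closed-form EI

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)            # toy black-box objective (illustrative)
X = np.array([0.1, 0.9])               # initial design
y = f(X)
Xs = np.linspace(0.0, 1.0, 200)        # candidate grid
for _ in range(5):
    mu, cov = gp_posterior(X, y, Xs)
    ei = sampled_max_ei(mu, cov, rng)
    x_next = Xs[np.argmax(ei)]         # maximize the randomized EI
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))
```

Because the threshold is resampled each round, the acquisition is randomized: two runs with different seeds can query different points, which is what distinguishes this variant from standard EI with the best observed value as incumbent.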
Problem

Research questions and friction points this paper is trying to address.

Analyzes regret bounds of posterior sampling-based EI
Focuses on Bayesian optimization for black-box functions
Theoretical analysis under Gaussian process assumption
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized variant of Expected Improvement
Posterior sampling-based random EI
Sublinear Bayesian cumulative regret bounds
Shion Takeno
Nagoya University
Machine Learning, Bayesian optimization
Yu Inatsu
Nagoya Institute of Technology
Masayuki Karasuyama
Nagoya Institute of Technology
Machine Learning
Ichiro Takeuchi
Nagoya University, RIKEN AIP