Robust Bayesian Optimization via Tempered Posteriors

📅 2026-01-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses overconfident uncertainty quantification in Gaussian process surrogate models for Bayesian optimization, which often leads to suboptimal queries because uncertainty is underestimated in densely sampled local regions. To mitigate this problem, caused by local model misspecification, the authors propose a robust Bayesian optimization framework based on a tempered posterior, in which a temperature parameter α ∈ (0,1] down-weights the likelihood. This approach is combined with a generalized improvement-based acquisition function. Furthermore, they introduce a prequential strategy that adaptively selects α online. Theoretical analysis demonstrates that the proposed method achieves lower worst-case cumulative regret than standard posterior inference. Empirical results validate its effectiveness in improving both the calibration and the local stability of the surrogate model.
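For a GP with Gaussian observation noise, raising the likelihood to a power α ∈ (0,1] is equivalent to standard GP regression with the noise variance inflated to σ²/α, which widens the predictive uncertainty. The sketch below illustrates this equivalence with a simple RBF kernel; the hyperparameter values are illustrative choices, not the paper's settings.

```python
import numpy as np

def tempered_gp_posterior(X, y, X_star, alpha=0.5, noise_var=1e-2,
                          lengthscale=1.0):
    """Sketch of a tempered GP posterior (illustrative, not the paper's code).

    A Gaussian likelihood raised to the power alpha is equivalent to
    ordinary GP regression with inflated noise variance noise_var / alpha,
    so alpha < 1 yields wider (less overconfident) predictive intervals.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    # Tempering enters only through the inflated noise term.
    K = rbf(X, X) + (noise_var / alpha) * np.eye(len(X))
    K_s = rbf(X, X_star)
    K_ss = rbf(X_star, X_star)
    L = np.linalg.cholesky(K)
    w = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_s.T @ w                       # posterior mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)        # posterior variance
    return mu, var
```

Comparing α = 1 and α = 0.5 at a training input shows the tempered posterior reporting strictly larger predictive variance there, which is the intended effect in densely sampled regions.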

📝 Abstract
Bayesian optimization (BO) iteratively fits a Gaussian process (GP) surrogate to accumulated evaluations and selects new queries via an acquisition function such as expected improvement (EI). In practice, BO often concentrates evaluations near the current incumbent, causing the surrogate to become overconfident and to understate predictive uncertainty in the region guiding subsequent decisions. We develop robust GP-based BO via tempered posterior updates, which downweight the likelihood by a power $\alpha \in (0,1]$ to mitigate overconfidence under local misspecification. We establish cumulative regret bounds for tempered BO under a family of generalized improvement rules, including EI, and show that tempering yields strictly sharper worst-case regret guarantees than the standard posterior $(\alpha=1)$, with the most favorable guarantees occurring near the classical EI choice. Motivated by our theoretical findings, we propose a prequential procedure for selecting $\alpha$ online: it decreases $\alpha$ when realized prediction errors exceed model-implied uncertainty and returns $\alpha$ toward one as calibration improves. Empirical results demonstrate that tempering provides a practical yet theoretically grounded tool for stabilizing BO surrogates under localized sampling.
Problem

Research questions and friction points this paper is trying to address.

Bayesian optimization
overconfidence
predictive uncertainty
Gaussian process
localized sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tempered Posterior
Bayesian Optimization
Gaussian Process
Overconfidence Mitigation
Adaptive Calibration
Jiguang Li
Booth School of Business, University of Chicago
Hengrui Luo
Unknown affiliation