Robust Bayesian Optimization via Localized Online Conformal Prediction

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Bayesian optimization (BO) suffers from unreliable posteriors and degraded convergence when the Gaussian process (GP) surrogate model is misspecified. To address this, the paper proposes LOCBO, a BO framework that integrates input-dependent, localized online conformal prediction. LOCBO calibrates the GP likelihood to yield spatially adaptive, statistically reliable posterior distributions without strong modeling assumptions, providing marginal coverage with region-adaptive confidence control. The method combines online conformal prediction, likelihood calibration, and posterior denoising through an input-adaptive thresholding mechanism. Under mild conditions, LOCBO's iterates come with theoretical performance guarantees that hold for the unobserved objective function. Empirically, LOCBO outperforms state-of-the-art BO methods on both synthetic benchmarks and real-world black-box optimization tasks.
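The "online conformal prediction" ingredient of the summary can be sketched with the standard adaptive conformal inference update of Gibbs and Candès, which adjusts a working miscoverage level after each cover/miss so long-run coverage tracks the target. The toy score stream, learning rate, and quantile bookkeeping below are illustrative assumptions, not LOCBO's input-dependent rule:

```python
import random

def aci_step(alpha_t, covered, target_alpha=0.1, lr=0.02):
    # Adaptive conformal inference update: lower the working miscoverage
    # level after a miss, raise it slightly after a cover.
    err = 0.0 if covered else 1.0
    return alpha_t + lr * (target_alpha - err)

random.seed(0)
alpha, scores, covers = 0.1, [], []
for t in range(1000):
    s = abs(random.gauss(0.0, 1.0))        # toy nonconformity score
    if scores:
        srt = sorted(scores)
        k = min(len(srt) - 1, max(0, int((1 - alpha) * len(srt))))
        q = srt[k]                         # empirical (1 - alpha) quantile
    else:
        q = float("inf")                   # no data yet: cover everything
    covered = s <= q
    covers.append(covered)
    scores.append(s)
    alpha = aci_step(alpha, covered)

print(f"long-run coverage: {sum(covers) / len(covers):.3f}")  # close to 0.90
```

The update guarantees long-run coverage near the target even under distribution shift, which is what makes it attractive inside a sequential loop like BO, where query points are chosen adaptively.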

📝 Abstract
Bayesian optimization (BO) is a sequential approach for optimizing black-box objective functions using zeroth-order noisy observations. In BO, Gaussian processes (GPs) are employed as probabilistic surrogate models to estimate the objective function based on past observations, guiding the selection of future queries to maximize utility. However, the performance of BO heavily relies on the quality of these probabilistic estimates, which can deteriorate significantly under model misspecification. To address this issue, we introduce localized online conformal prediction-based Bayesian optimization (LOCBO), a BO algorithm that calibrates the GP model through localized online conformal prediction (CP). LOCBO corrects the GP likelihood based on predictive sets produced by the localized online CP procedure, and the corrected GP likelihood is then denoised to obtain a calibrated posterior distribution on the objective function. The likelihood calibration step leverages an input-dependent calibration threshold to tailor coverage guarantees to different regions of the input space. Under minimal noise assumptions, we provide theoretical performance guarantees for LOCBO's iterates that hold for the unobserved objective function. These theoretical findings are validated through experiments on synthetic and real-world optimization tasks, demonstrating that LOCBO consistently outperforms state-of-the-art BO algorithms in the presence of model misspecification.
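The abstract's input-dependent calibration threshold can be illustrated with a generic localized conformal quantile, where calibration scores are weighted by a kernel centered at the query input, so the threshold reflects the local score distribution rather than a single global one. The Gaussian kernel, bandwidth, and toy data below are assumptions; LOCBO's exact thresholding rule may differ:

```python
import math

def localized_threshold(x, calib_xs, calib_scores, alpha=0.1, bandwidth=0.5):
    # Gaussian-kernel weights: calibration points near x dominate, so the
    # returned weighted (1 - alpha) quantile adapts to local difficulty.
    w = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in calib_xs]
    total = sum(w)
    cum = 0.0
    for s, wi in sorted(zip(calib_scores, w)):
        cum += wi
        if cum >= (1 - alpha) * total:
            return s
    return float("inf")                    # empty calibration set

# Scores are small near x = 0 and large near x = 5, so the local
# threshold is tight at one region and wide at the other.
xs     = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]
scores = [0.1, 0.2, 0.3, 3.0, 4.0, 5.0]
print(localized_threshold(0.0, xs, scores))   # 0.3
print(localized_threshold(5.0, xs, scores))   # 5.0
```

A single global quantile over `scores` would force the same wide threshold everywhere; the kernel weighting is what lets coverage be tailored per region, as the abstract describes.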
Problem

Research questions and friction points this paper is trying to address.

Improves Bayesian optimization under model misspecification
Calibrates GP likelihood via localized conformal prediction
Ensures robust performance with theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Localized online conformal prediction for calibration
Input-dependent calibration threshold adjustment
Denoised corrected GP likelihood for posterior
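As rough intuition for how a conformal threshold can recalibrate a GP's predictive uncertainty, one generic heuristic rescales the posterior standard deviation so the nominal (1 - alpha) interval matches the conformal width. This is not LOCBO's likelihood correction or denoising step, which operates on the likelihood itself; `calibration_scale` and the standard-normal quantile value are assumptions for illustration:

```python
def calibration_scale(sigma, conformal_q, z=1.6449):
    # Multiplicative factor mapping the GP's nominal (1 - alpha)
    # half-width z * sigma onto the conformal threshold.
    return conformal_q / (z * sigma)

def recalibrated_interval(mu, sigma, conformal_q, z=1.6449):
    s = sigma * calibration_scale(sigma, conformal_q, z)
    return mu - z * s, mu + z * s   # half-width equals conformal_q

# An overconfident GP (sigma = 0.3) widened to the conformal width 1.0:
lo, hi = recalibrated_interval(mu=2.0, sigma=0.3, conformal_q=1.0)
print(lo, hi)  # approximately 1.0 and 3.0
```

When the GP is overconfident the scale exceeds 1 and the interval widens; when it is underconfident the interval shrinks, which is the qualitative behavior a calibrated posterior should exhibit.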