🤖 AI Summary
Bayesian optimization (BO) is highly vulnerable to extreme outliers; existing provably robust methods assume the cumulative corruption magnitude is bounded, leaving them defenseless against even a single sufficiently large attack. To address this, we propose a new adversarial model that constrains only the *frequency* of outliers, leaving their magnitudes unbounded. Under this model, we design RCGP-UCB, a robust BO algorithm that couples the upper confidence bound (UCB) acquisition rule with a Robust Conjugate Gaussian Process (RCGP) surrogate, balancing model stability against exploration efficiency. We provide theoretical guarantees: the stable and adaptive variants achieve sublinear regret under up to $O(T^{1/2})$ and $O(T^{1/3})$ corruptions of possibly unbounded magnitude, respectively; without contamination, the regret bounds match those of standard GP-UCB, so robustness comes at near-zero cost. To the best of our knowledge, this is the first BO framework offering rigorous regret bounds under frequency-constrained, magnitude-unbounded contamination.
📝 Abstract
Bayesian optimization is critically vulnerable to extreme outliers. Existing provably robust methods typically assume a bounded cumulative corruption budget, which leaves them defenseless against even a single corruption of sufficient magnitude. To address this, we introduce a new adversary whose budget bounds only the frequency of corruptions, not their magnitude. We then derive RCGP-UCB, an algorithm coupling the well-known upper confidence bound (UCB) approach with a Robust Conjugate Gaussian Process (RCGP). We present stable and adaptive versions of RCGP-UCB and prove that they achieve sublinear regret in the presence of up to $O(T^{1/2})$ and $O(T^{1/3})$ corruptions, respectively, of possibly unbounded magnitude. This robustness comes at near-zero cost: without outliers, RCGP-UCB's regret bounds match those of the standard GP-UCB algorithm.
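Concretely, a UCB-style algorithm selects its next query by maximizing an optimistic score $\mu(x) + \sqrt{\beta}\,\sigma(x)$ over candidates, so robustness hinges on keeping the posterior $\mu, \sigma$ stable under corrupted observations. The NumPy sketch below illustrates the general idea only: a GP posterior whose per-observation noise can be inflated to down-weight a suspected outlier (a crude stand-in for RCGP's robust weighting, *not* the paper's actual mechanism), combined with a UCB acquisition step. All function names, kernel choices, and constants here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5):
    """Squared-exponential kernel between two 1-D point sets (unit prior variance)."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, X_query, noise_var):
    """GP posterior mean/std. `noise_var` is per-observation: inflating one
    entry effectively ignores that observation -- a crude stand-in for
    RCGP's robust down-weighting (illustrative, not the paper's mechanism)."""
    K = rbf_kernel(X, X) + np.diag(noise_var)
    Ks = rbf_kernel(X, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # RBF prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def ucb_next_point(mu, sigma, beta=4.0):
    """UCB acquisition: index maximizing mu + sqrt(beta) * sigma."""
    return int(np.argmax(mu + np.sqrt(beta) * sigma))

# Demo: one corruption of huge magnitude, handled by inflating its noise term.
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(3 * X)
y[2] = 100.0                      # single outlier, magnitude-unbounded adversary
noise = np.full(5, 1e-4)
noise[2] = 1e6                    # down-weight the suspected outlier
grid = np.linspace(0.0, 2.0, 41)
mu, sigma = gp_posterior(X, y, grid, noise)
x_next = grid[ucb_next_point(mu, sigma)]
```

With the inflated noise term, the posterior mean near the corrupted point reverts toward its well-behaved neighbors instead of chasing the value 100, which a naive GP with uniform noise would fit almost exactly. How RCGP assigns such weights automatically, and how the stable versus adaptive variants schedule the exploration parameter, is the subject of the paper.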