🤖 AI Summary
Existing time-varying Bayesian optimization (TVBO) methods for optimizing expensive, black-box functions with temporal dynamics either lack theoretical guarantees or rely on overly restrictive assumptions.
Method: We establish the first asymptotic cumulative regret bound for TVBO under realistic, mild assumptions—specifically, bounded function variation rate—and derive both an algorithm-independent lower bound and a universal upper bound. Based on these theoretical results, we propose BOLT, a novel TVBO algorithm integrating Gaussian process modeling, dynamic regret analysis, and an adaptive acquisition function—without requiring prior knowledge of the change pattern.
Results: BOLT achieves statistically significant improvements over state-of-the-art TVBO methods on synthetic benchmarks and real-world time-varying tasks, including hyperparameter tuning and robot control. Our theoretical framework yields general design principles for TVBO, and empirical validation confirms both the practical efficacy and theoretical soundness of the approach.
📝 Abstract
Time-Varying Bayesian Optimization (TVBO) is the go-to framework for optimizing a time-varying, expensive, noisy black-box function. However, most of the solutions proposed so far either rely on unrealistic assumptions about the nature of the objective function or do not offer any theoretical guarantees. We propose the first analysis that asymptotically bounds the cumulative regret of TVBO algorithms under mild and realistic assumptions only. In particular, we provide an algorithm-independent lower regret bound and an upper regret bound that holds for a large class of TVBO algorithms. Based on this analysis, we formulate design recommendations for TVBO algorithms and show that an algorithm (BOLT) following them outperforms state-of-the-art TVBO methods in experiments on synthetic and real-world problems.
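To make the setting concrete, below is a minimal, illustrative sketch of a generic TVBO loop: a Gaussian process with a spatio-temporal kernel models the drifting objective, and a UCB-style acquisition selects the next query at the current time step. This is not BOLT itself; the kernel form, lengthscales (`ls_x`, `ls_t`), and exploration weight `beta` are placeholder choices for illustration only.

```python
import numpy as np

def st_kernel(X1, X2, ls_x=0.3, ls_t=2.0):
    """Spatio-temporal RBF kernel: product of a spatial and a temporal
    component, so correlation decays both in space and in time.
    Rows of X1/X2 are (x, t) pairs. Lengthscales are illustrative."""
    dx = X1[:, None, 0] - X2[None, :, 0]
    dt = X1[:, None, 1] - X2[None, :, 1]
    return np.exp(-0.5 * (dx / ls_x) ** 2) * np.exp(-0.5 * (dt / ls_t) ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-2):
    """Standard GP posterior mean/variance at X_query given noisy observations."""
    K = st_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = st_kernel(X_query, X_obs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance is 1 (unit-scale kernel)
    return mu, np.maximum(var, 1e-12)

def tvbo_step(X_obs, y_obs, t, candidates, beta=2.0):
    """Pick the next query point at time t by maximizing UCB over a grid."""
    Xq = np.column_stack([candidates, np.full(len(candidates), float(t))])
    mu, var = gp_posterior(X_obs, y_obs, Xq)
    return candidates[np.argmax(mu + beta * np.sqrt(var))]

# Usage: track a slowly drifting 1-D optimum (hypothetical toy objective).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x, t: -(x - 0.5 - 0.05 * t) ** 2  # optimum drifts over time
    candidates = np.linspace(0.0, 1.0, 21)
    X_obs = np.array([[0.2, 0.0], [0.8, 0.0]])
    y_obs = np.array([f(0.2, 0.0), f(0.8, 0.0)])
    for t in range(1, 6):
        x_next = tvbo_step(X_obs, y_obs, t, candidates)
        X_obs = np.vstack([X_obs, [x_next, float(t)]])
        y_obs = np.append(y_obs, f(x_next, t) + 0.01 * rng.standard_normal())
    print(X_obs.shape)
```

The temporal part of the kernel is what distinguishes this loop from standard BO: old observations are gradually down-weighted as `|t - t'|` grows, so the posterior can track a moving optimum rather than averaging over stale data.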