🤖 AI Summary
This work addresses the challenge in time-varying Bayesian optimization (TVBO) that the rate of change of the objective function is typically unknown. We propose an adaptive optimization framework that requires no prior assumptions on this rate. Methodologically, we introduce an event-triggered mechanism that detects model mismatch online using a probabilistic uniform error bound from Gaussian process (GP) regression and resets the GP surrogate's dataset; this mechanism is integrated with an upper-confidence-bound (UCB) sampling strategy for sequential optimization. Theoretically, we are the first to employ such a probabilistic uniform error bound to trigger GP model resets, and we rigorously establish a sublinear regret bound under adaptive resets without exact prior knowledge of the temporal changes. Experiments on synthetic and real-world benchmarks demonstrate that our method significantly outperforms standard GP-UCB and existing TVBO variants, exhibiting strong robustness and minimal sensitivity to hyperparameter tuning.
📝 Abstract
We consider the problem of sequentially optimizing a time-varying objective function using time-varying Bayesian optimization (TVBO). Current approaches to TVBO require prior knowledge of a constant rate of change to cope with stale data arising from time variations. However, in practice, the rate of change is usually unknown. We propose an event-triggered algorithm, ET-GP-UCB, that treats the optimization problem as static until it detects changes in the objective function and then resets the dataset. This allows the algorithm to adapt online to realized temporal changes without the need for exact prior knowledge. The event trigger is based on probabilistic uniform error bounds used in Gaussian process regression. We derive regret bounds for adaptive resets without exact prior knowledge of the temporal changes and show in numerical experiments that ET-GP-UCB outperforms competing GP-UCB algorithms on both synthetic and real-world data. The results demonstrate that ET-GP-UCB is readily applicable without extensive hyperparameter tuning.