Time-Varying Gaussian Process Bandits with Unknown Prior

📅 2024-02-02
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of unknown priors in time-varying Bayesian optimization. We propose PE-GP-UCB, a novel algorithm for Gaussian process optimization under nonstationary environments: it dynamically selects statistically consistent priors via prior enumeration and Type-II maximum likelihood estimation, then integrates them with an upper-confidence-bound (UCB) acquisition strategy for adaptive decision-making. To our knowledge, this is the first method to achieve a provably sublinear regret bound without assuming knowledge of the true prior or imposing strong stationarity conditions. Our theoretical analysis rigorously characterizes the joint impact of prior uncertainty and temporal nonstationarity on regret. Empirical evaluations demonstrate that PE-GP-UCB significantly outperforms baseline approaches—including MLE-GP, full Bayesian inference, and Regret Balancing—on both synthetic benchmarks and real-world time-series optimization tasks.

Technology Category

Application Category

📝 Abstract
Bayesian optimisation requires fitting a Gaussian process model, which in turn requires specifying prior on the unknown black-box function -- most of the theoretical literature assumes this prior is known. However, it is common to have more than one possible prior for a given black-box function, for example suggested by domain experts with differing opinions. In some cases, the type-II maximum likelihood estimator for selecting prior enjoys the consistency guarantee, but it does not universally apply to all types of priors. If the problem is stationary, one could rely on the Regret Balancing scheme to conduct the optimisation, but in the case of time-varying problems, such a scheme cannot be used. To address this gap in existing research, we propose a novel algorithm, PE-GP-UCB, which is capable of solving time-varying Bayesian optimisation problems even without the exact knowledge of the function's prior. The algorithm relies on the fact that either the observed function values are consistent with some of the priors, in which case it is easy to reject the wrong priors, or the observations are consistent with all candidate priors, in which case it does not matter which prior our model relies on. We provide a regret bound on the proposed algorithm. Finally, we empirically evaluate our algorithm on toy and real-world time-varying problems and show that it outperforms the maximum likelihood estimator, fully Bayesian treatment of unknown prior and Regret Balancing.
Problem

Research questions and friction points this paper is trying to address.

Addresses time-varying Bayesian optimization challenges
Develops algorithm for unknown function priors
Empirically outperforms existing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

PE-GP-UCB algorithm
time-varying Bayesian optimisation
unknown prior handling
🔎 Similar Papers
No similar papers found.