Provable Accelerated Bayesian Optimization with Knowledge Transfer

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical guarantees and suboptimal regret bounds in knowledge-transfer Bayesian optimization (BO). We propose a novel transfer method with rigorous theoretical foundations. Methodologically, we introduce an uncertainty quantification mechanism based on the discrepancy $\delta$ between source and target functions—allowing them to reside in distinct reproducing kernel Hilbert spaces—and enable efficient multi-source knowledge transfer via discrepancy function modeling and information-gain analysis. Theoretically, we derive a cumulative regret bound of $\tilde{\mathcal{O}}(\sqrt{T(T/N + \gamma_\delta)})$, which strictly improves upon the standard non-transfer BO bound $\tilde{\mathcal{O}}(\sqrt{T\gamma_T})$. Empirically, our approach consistently outperforms state-of-the-art baselines on hyperparameter tuning and synthetic function benchmarks, demonstrating robust transfer efficacy.

📝 Abstract
We study how Bayesian optimization (BO) can be accelerated on a target task with historical knowledge transferred from related source tasks. Existing works on BO with knowledge transfer either do not have theoretical guarantees or achieve the same regret as BO in the non-transfer setting, $\tilde{\mathcal{O}}(\sqrt{T \gamma_f})$, where $T$ is the number of evaluations of the target function and $\gamma_f$ denotes its information gain. In this paper, we propose the DeltaBO algorithm, in which a novel uncertainty-quantification approach is built on the difference function $\delta$ between the source and target functions, which are allowed to belong to different reproducing kernel Hilbert spaces (RKHSs). Under mild assumptions, we prove that the regret of DeltaBO is of order $\tilde{\mathcal{O}}(\sqrt{T (T/N + \gamma_\delta)})$, where $N$ denotes the number of evaluations from source tasks and typically $N \gg T$. In many applications, source and target tasks are similar, which implies that $\gamma_\delta$ can be much smaller than $\gamma_f$. Empirical studies on both real-world hyperparameter tuning tasks and synthetic functions show that DeltaBO outperforms other baseline methods and support our theoretical claims.
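To make the difference-function idea concrete, here is a minimal sketch of a DeltaBO-style UCB step. It is not the paper's implementation: the kernel, hyperparameters (`lengthscale`, `noise`, `beta`), and the function names are all illustrative. The target is modeled as the source GP's posterior mean plus a second GP fitted to the observed residuals $\delta$, and the candidate maximizing the upper confidence bound is selected.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def gp_posterior(X_train, y_train, X_query, noise=1e-3, lengthscale=0.2):
    """Standard zero-mean GP posterior mean and standard deviation."""
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v ** 2).sum(0), 1e-12, None)  # prior variance k(x,x)=1
    return mu, np.sqrt(var)

def delta_bo_step(X_src, y_src, X_tgt, y_tgt, X_cand, beta=2.0):
    """One UCB step: model the target as (source posterior mean) plus a GP on
    the residual difference function, then maximize mean + beta * std."""
    mu_src, _ = gp_posterior(X_src, y_src, X_cand)
    if len(X_tgt):
        mu_src_at_tgt, _ = gp_posterior(X_src, y_src, X_tgt)
        residuals = y_tgt - mu_src_at_tgt          # observed values of delta
        mu_d, sd_d = gp_posterior(X_tgt, residuals, X_cand)
    else:
        # Cold start: no target data yet, fall back to the delta-GP prior.
        mu_d, sd_d = np.zeros(len(X_cand)), np.ones(len(X_cand))
    ucb = mu_src + mu_d + beta * sd_d
    return int(np.argmax(ucb))
```

Because only the (presumably simpler) residual $\delta$ is learned from scarce target data, the confidence width shrinks after far fewer target evaluations than when fitting the target function from scratch.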
Problem

Research questions and friction points this paper is trying to address.

Accelerating Bayesian optimization through knowledge transfer from source tasks
Providing theoretical guarantees for transfer learning in Bayesian optimization
Quantifying uncertainty in function differences between source and target tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeltaBO algorithm transfers knowledge from source tasks
Novel uncertainty quantification using difference function delta
Regret bound improves with source task evaluations
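The improvement claimed in the last bullet can be sanity-checked numerically. The values below are illustrative, not taken from the paper: when source data is plentiful ($N \gg T$) and the difference function is much simpler than the target ($\gamma_\delta \ll \gamma_f$), the transfer bound $\sqrt{T(T/N + \gamma_\delta)}$ is far below the non-transfer bound $\sqrt{T \gamma_f}$.

```python
import math

def transfer_bound(T, N, gamma_delta):
    """Order of the DeltaBO regret bound, sqrt(T * (T/N + gamma_delta))."""
    return math.sqrt(T * (T / N + gamma_delta))

def no_transfer_bound(T, gamma_f):
    """Order of the standard non-transfer BO bound, sqrt(T * gamma_f)."""
    return math.sqrt(T * gamma_f)

# Illustrative numbers: 100 target evaluations, 10,000 source evaluations,
# and a difference function with much smaller information gain.
T, N = 100, 10_000
gamma_f, gamma_delta = 50.0, 2.0
print(transfer_bound(T, N, gamma_delta))   # ~14.2
print(no_transfer_bound(T, gamma_f))       # ~70.7
```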
Haitao Lin
Department of Statistics, University of Chicago, Chicago, IL, USA
Boxin Zhao
University of Chicago
Transfer Learning · Federated Learning · Graphical Models · High-dimensional Statistics
Mladen Kolar
University of Southern California
Machine Learning · Statistics
Chong Liu
Department of Computer Science, University at Albany, State University of New York, Albany, NY, USA