🤖 AI Summary
This paper addresses sequential multi-task black-box optimization—arising in logistics scheduling and engineering design—where tasks share structural similarities but exhibit distinct objectives and constraints. Method: We propose an efficient framework that jointly optimizes per-task accuracy and cross-task knowledge transfer. Its core innovation is the first formulation of a transferable maximum entropy search (TMES) acquisition criterion, integrated with particle-based variational Bayesian inference to enable adaptive parameter distribution transfer across tasks. Furthermore, we unify multi-fidelity Bayesian optimization with Gaussian process surrogate modeling to jointly optimize immediate performance and long-term generalizability under evaluation cost constraints. Contribution/Results: We theoretically establish tighter expected regret bounds for TMES. Empirical evaluation on synthetic and real-world benchmarks demonstrates that our method achieves significant improvements over state-of-the-art baselines using only a few tasks, substantially enhancing overall optimization efficiency.
📝 Abstract
In many applications, ranging from logistics to engineering, a designer is faced with a sequence of optimization tasks for which the objectives are in the form of black-box functions that are costly to evaluate. Furthermore, higher-fidelity evaluations of the optimization objectives often entail a larger cost. Existing multi-fidelity black-box optimization strategies select candidate solutions and fidelity levels with the goal of maximizing the information about the optimal value or the optimal solution for the current task. Assuming that successive optimization tasks are related, this paper introduces a novel information-theoretic acquisition function that balances the need to acquire information about the current task with the goal of collecting information transferable to future tasks. The proposed method transfers across tasks distributions over parameters of a Gaussian process surrogate model by implementing particle-based variational Bayesian updates. Theoretical insights based on the analysis of the expected regret substantiate the benefits of acquiring transferable knowledge across tasks. Furthermore, experimental results across synthetic and real-world examples reveal that the proposed acquisition strategy that caters to future tasks can significantly improve the optimization efficiency as soon as a sufficient number of tasks is processed.