🤖 AI Summary
In large-scale multi-task Bayesian optimization (≈2000 tasks), existing approaches such as multi-task Gaussian processes and deep kernel transfer exhibit limited knowledge-transfer efficiency and yield only marginal performance gains. Method: This paper introduces, for the first time, large language models (LLMs) into this setting, proposing an LLM-driven iterative transfer framework. It fine-tunes an LLM on high-quality optimization trajectories from historical tasks to generate high-potential initializations for new tasks, and integrates Bayesian optimization feedback to continually update the LLM, establishing a closed-loop cycle: trajectory encoding → LLM-based inference → Bayesian optimization execution → data refilling. Results: Evaluated on database query optimization and real-world antimicrobial peptide design, the method achieves significant reductions in oracle calls using only few-shot samples, outperforming from-scratch Bayesian optimization in both solution quality and convergence speed, effectively unifying multi-task meta-learning with generative modeling.
📝 Abstract
In multi-task Bayesian optimization, the goal is to leverage experience from optimizing existing tasks to improve the efficiency of optimizing new ones. While approaches using multi-task Gaussian processes or deep kernel transfer exist, the performance improvement is marginal when scaling beyond a moderate number of tasks. We introduce a novel approach leveraging large language models (LLMs) to learn from, and improve upon, previous optimization trajectories, scaling to approximately 2000 distinct tasks. Specifically, we propose an iterative framework in which an LLM is fine-tuned on the high-quality solutions produced by BayesOpt to generate improved initializations that accelerate convergence for future optimization tasks, based on previous search trajectories. We evaluate our method on two distinct domains: database query optimization and antimicrobial peptide design. Results demonstrate that our approach creates a positive feedback loop, in which the LLM's generated initializations gradually improve, leading to better optimization performance. As this feedback loop continues, we find that the LLM is eventually able to generate solutions to new tasks in just a few shots that are better than the solutions produced "from scratch" by Bayesian optimization, while requiring significantly fewer oracle calls.
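The closed-loop cycle described above (trajectory encoding → LLM-based inference → Bayesian optimization execution → data refilling) can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `ToyLLM`, `bayes_opt`, and the 1-D objective are all hypothetical stand-ins (the "fine-tuning" step simply absorbs new high-quality trajectories, and BayesOpt is replaced by local random search).

```python
import random

random.seed(0)

def objective(x, task_shift):
    # Toy 1-D objective standing in for the expensive oracle.
    return -(x - task_shift) ** 2

class ToyLLM:
    """Stand-in for the fine-tuned LLM: proposes initializations
    informed by high-quality trajectories from past tasks."""
    def __init__(self):
        self.trajectory_pool = []  # (x, score) pairs from past tasks

    def fine_tune(self, trajectories):
        # "Fine-tuning" here just absorbs new high-quality solutions.
        self.trajectory_pool.extend(trajectories)

    def propose_init(self):
        if not self.trajectory_pool:
            return random.uniform(-10.0, 10.0)  # cold start
        # Perturb the best past solution to initialize the new task.
        best_x, _ = max(self.trajectory_pool, key=lambda t: t[1])
        return best_x + random.gauss(0.0, 0.5)

def bayes_opt(objective_fn, x0, n_calls=20):
    """Stand-in for BayesOpt: local random search from the init."""
    best_x, best_y = x0, objective_fn(x0)
    for _ in range(n_calls):
        x = best_x + random.gauss(0.0, 1.0)
        y = objective_fn(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

llm = ToyLLM()
scores = []
for task_shift in [1.0, 1.2, 0.9, 1.1]:  # a stream of related tasks
    x0 = llm.propose_init()                        # LLM-based inference
    x_best, y_best = bayes_opt(lambda x: objective(x, task_shift), x0)
    llm.fine_tune([(x_best, y_best)])              # data refilling
    scores.append(y_best)

print(scores)
```

Because later tasks start from initializations derived from earlier high-quality trajectories, their searches begin near the optimum, mirroring the positive feedback loop the abstract describes.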