LLaMEA-BO: A Large Language Model Evolutionary Algorithm for Automatically Generating Bayesian Optimization Algorithms

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Bayesian optimization (BO) algorithms typically require extensive expert knowledge and labor-intensive manual design, which hinders accessibility and scalability. Method: This paper introduces the first framework that uses large language models (LLMs) to autonomously generate complete, executable BO algorithms. It couples an LLM with an evolutionary algorithm, using prompt engineering (no fine-tuning) to guide the LLM through algorithm generation, evaluation, crossover, and mutation, producing end-to-end Python code. Contribution/Results: Evaluated on the COCO/BBOB benchmark in dimension 5, the generated algorithms outperform state-of-the-art baselines on 19 of the 24 BBOB functions. They also generalize across dimensions (scaling effectively to higher-dimensional problems) and transfer across tasks (validated on Bayesmark). All code is publicly released.

📝 Abstract
Bayesian optimization (BO) is a powerful class of algorithms for optimizing expensive black-box functions, but designing effective BO algorithms remains a manual, expertise-driven task. Recent advancements in Large Language Models (LLMs) have opened new avenues for automating scientific discovery, including the automatic design of optimization algorithms. While prior work has used LLMs within optimization loops or to generate non-BO algorithms, we tackle a new challenge: using LLMs to automatically generate full BO algorithm code. Our framework uses an evolution strategy to guide an LLM in generating Python code that preserves the key components of BO algorithms: an initial design, a surrogate model, and an acquisition function. The LLM is prompted to produce multiple candidate algorithms, which are evaluated on the established Black-Box Optimization Benchmarking (BBOB) test suite from the COmparing Continuous Optimizers (COCO) platform. Based on their performance, top candidates are selected, combined, and mutated via controlled prompt variations, enabling iterative refinement. Despite no additional fine-tuning, the LLM-generated algorithms outperform state-of-the-art BO baselines on 19 (out of 24) BBOB functions in dimension 5 and generalize well to higher dimensions and to different tasks (from the Bayesmark framework). This work demonstrates that LLMs can serve as algorithmic co-designers, offering a new paradigm for automating BO development and accelerating the discovery of novel algorithmic combinations. The source code is provided at https://github.com/Ewendawi/LLaMEA-BO.
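The three components the abstract says every generated algorithm must preserve (an initial design, a surrogate model, and an acquisition function) can be illustrated as a minimal BO loop. The sketch below is not the paper's generated code; it is a numpy-only illustration assuming a random initial design, a zero-mean RBF Gaussian-process surrogate, and expected improvement maximized over a random candidate pool.

```python
import math
import numpy as np

def initial_design(n, dim, rng):
    # Component 1: initial design (uniform random on a BBOB-style [-5, 5]^dim box).
    return rng.uniform(-5, 5, size=(n, dim))

def gp_posterior(X, y, Xq, length_scale=1.0, jitter=1e-6):
    # Component 2: GP surrogate with an RBF kernel; returns posterior mean/std at Xq.
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d2 / length_scale**2)
    L = np.linalg.cholesky(k(X, X) + jitter * np.eye(len(X)))
    Ks = k(Xq, X)
    mu = Ks @ np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # Component 3: EI acquisition for minimization.
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * cdf + sigma * pdf

def bayes_opt(f, dim=2, n_init=5, budget=30, seed=0):
    rng = np.random.default_rng(seed)
    X = initial_design(n_init, dim, rng)
    y = np.array([f(x) for x in X])
    for _ in range(budget - n_init):
        yn = (y - y.mean()) / (y.std() + 1e-9)   # standardize for the zero-mean GP
        Xq = rng.uniform(-5, 5, size=(512, dim)) # candidate pool for the acquisition
        mu, sd = gp_posterior(X, yn, Xq)
        x_next = Xq[int(np.argmax(expected_improvement(mu, sd, yn.min())))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[int(np.argmin(y))], float(y.min())
```

For example, `bayes_opt(lambda x: float(np.sum(x**2)), dim=2)` should drive the sphere function well below the typical value of a random initial design.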
Problem

Research questions and friction points this paper is trying to address.

Automating Bayesian Optimization algorithm design using LLMs
Generating full BO algorithm code via evolutionary LLM prompting
Improving optimization performance beyond manual BO designs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolution strategy guides LLM for BO code
LLM generates multiple candidate BO algorithms
Iterative refinement via performance-based selection
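The innovation bullets above describe an evolutionary outer loop: the LLM proposes candidate algorithms, each is scored on a benchmark, and the top candidates seed crossover and mutation prompts. The sketch below is hypothetical scaffolding, not the paper's implementation: `query_llm` and `evaluate` are stand-ins (a real system would call an LLM API and score the generated code on BBOB), and the prompt strings are invented placeholders.

```python
import random

MUTATE_PROMPT = "Refine this BO algorithm; change one component:\n{code}"
CROSSOVER_PROMPT = "Combine the strengths of these two BO algorithms:\n{a}\n{b}"

def query_llm(prompt):
    # Placeholder: a real system would call an LLM and extract Python code
    # from its response.
    return f"# candidate derived from prompt of length {len(prompt)}"

def evaluate(code):
    # Placeholder fitness: the paper scores candidates by running them on
    # the BBOB suite; here a deterministic stub keeps the loop runnable.
    return (hash(code) % 1000) / 1000.0

def evolve(pop_size=4, generations=3, seed=0):
    random.seed(seed)
    population = [query_llm(f"Write a BO algorithm, variant {i}")
                  for i in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: pop_size // 2]            # performance-based selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = query_llm(CROSSOVER_PROMPT.format(a=a, b=b))       # crossover
            children.append(query_llm(MUTATE_PROMPT.format(code=child)))  # mutation
        population = parents + children
    return max(population, key=evaluate)
```

Keeping the parents alongside the prompted offspring gives the elitist flavor of an evolution strategy: a good candidate algorithm is never lost to a bad LLM rewrite.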
Wenhu Li
Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands
Niki van Stein
Leiden University
Explainable AI · Automated Algorithm Discovery · Deep Learning · Bayesian Optimization
Thomas Bäck
Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands
Elena Raponi
Leiden University (LIACS)
Bayesian Optimization · Algorithm Selection and Configuration · Engineering Design