Evolutionary Computation and Large Language Models: A Survey of Methods, Synergies, and Applications

📅 2025-05-21
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper addresses the suboptimal synergy between large language models (LLMs) and evolutionary computation (EC). We propose the first bidirectional empowerment framework and co-evolutionary paradigm. Methodologically, we systematically integrate EC techniques—including genetic algorithms, evolution strategies, and Bayesian optimization—with LLM capabilities such as instruction tuning, chain-of-thought reasoning, and program synthesis. This enables EC to optimize LLM training, prompting, and architecture design, while LLMs enhance EC in algorithm design, hyperparameter optimization, and heuristic generation. Our contributions include: (i) formalizing cross-modal optimization pathways; (ii) surveying over 100 state-of-the-art works; (iii) empirically demonstrating that EC significantly improves LLM efficiency and robustness, whereas LLMs elevate EC’s automation level and interpretability; and (iv) revealing substantial synergistic gains on natural language understanding and complex search tasks. We further distill several key open challenges for future research.
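The EC-for-LLM direction summarized above can be illustrated with a minimal, self-contained sketch: a small genetic algorithm that evolves a prompt string under elitist truncation selection. Everything here is illustrative rather than taken from the paper — the keyword-based scorer, the phrase pool, and all parameters are stand-ins; in a real system the fitness of a prompt would be the score of the LLM's outputs on a validation set.

```python
import random

# Toy fitness: reward prompts that hit task-relevant keywords and stay short.
# In practice this would be an (expensive) LLM evaluation; the keyword set
# and verbosity penalty below are purely illustrative.
KEYWORDS = {"step", "reason", "concise", "verify"}

def fitness(prompt):
    words = prompt.split()
    hits = sum(1 for w in words if w in KEYWORDS)
    return hits - 0.05 * len(words)  # penalize verbosity

# Hypothetical mutation vocabulary (not from the paper).
PHRASES = ["think", "step", "by", "reason", "carefully", "verify",
           "your", "answer", "be", "concise", "then", "explain"]

def mutate(prompt):
    """Word-level mutation: insert, delete, or replace one token."""
    words = prompt.split()
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or not words:
        words.insert(random.randrange(len(words) + 1), random.choice(PHRASES))
    elif op == "drop":
        words.pop(random.randrange(len(words)))
    else:
        words[random.randrange(len(words))] = random.choice(PHRASES)
    return " ".join(words)

def evolve_prompt(seed="think step by step", generations=200, pop_size=10):
    population = [seed] + [mutate(seed) for _ in range(pop_size - 1)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # elitist truncation selection
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve_prompt()
```

Because the seed prompt is in the initial population and the top half survives every generation, the best fitness can only improve — a property worth preserving when the real fitness signal is a noisy LLM judgment.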

📝 Abstract
Integrating Large Language Models (LLMs) and Evolutionary Computation (EC) represents a promising avenue for advancing artificial intelligence by combining powerful natural language understanding with optimization and search capabilities. This manuscript explores the synergistic potential of LLMs and EC, reviewing their intersections, complementary strengths, emerging applications, and bidirectional contributions. We identify key opportunities where EC can enhance LLM training, fine-tuning, prompt engineering, and architecture search, while LLMs can, in turn, aid in automating the design, analysis, and interpretation of EC algorithms. The survey first examines how EC techniques enhance LLMs by optimizing key components such as prompt engineering, hyperparameter tuning, and architecture search, demonstrating how evolutionary methods automate and refine these processes. It then investigates how LLMs improve EC by automating metaheuristic design, tuning evolutionary algorithms, and generating adaptive heuristics, thereby increasing efficiency and scalability. Emerging co-evolutionary frameworks are discussed, showcasing applications across diverse fields while acknowledging challenges such as computational cost, interpretability, and algorithmic convergence. The survey concludes by identifying open research questions and advocating for hybrid approaches that combine the strengths of EC and LLMs.
Problem

Research questions and friction points this paper is trying to address.

Exploring synergies between Evolutionary Computation and Large Language Models
Enhancing LLM training and EC design through bidirectional integration
Addressing challenges in computational costs and algorithm convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

EC enhances LLMs via prompt and architecture optimization
LLMs automate EC metaheuristic design and tuning
Hybrid EC-LLM frameworks boost AI efficiency
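The second innovation — LLMs automating metaheuristic design and tuning — is typically realized by having an LLM emit candidate heuristics as code, which are then evaluated inside an evolutionary loop and the best performer retained. A hedged sketch of that evaluation loop, in which hand-written stub rules stand in for LLM-generated step-size update heuristics for a (1+1) evolution strategy on the sphere benchmark (all names and rules here are illustrative assumptions, not the survey's own framework):

```python
import random

def sphere(x):
    """Classic benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

def one_plus_one_es(step_rule, dim=5, steps=300):
    """(1+1)-ES on the sphere function; `step_rule` adapts sigma each step."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    sigma = 1.0
    for _ in range(steps):
        y = [v + sigma * random.gauss(0, 1) for v in x]
        fy = sphere(y)
        success = fy < fx
        if success:
            x, fx = y, fy
        sigma = step_rule(sigma, success)
    return fx

# Stand-ins for LLM-generated heuristics: in the surveyed line of work an
# LLM would propose such update rules as code snippets to be compiled and
# benchmarked; these three are hand-written placeholders.
CANDIDATE_RULES = {
    "1/5-success": lambda s, ok: s * (1.5 if ok else 0.82),
    "constant":    lambda s, ok: s,
    "decay":       lambda s, ok: s * 0.99,
}

def select_best_rule(trials=5):
    """Average each candidate over a few runs and keep the best (lowest)."""
    scores = {name: sum(one_plus_one_es(rule) for _ in range(trials)) / trials
              for name, rule in CANDIDATE_RULES.items()}
    return min(scores, key=scores.get)
```

The loop structure — generate candidate heuristics, benchmark them, keep the winner — is the same whether the candidates come from a human, a fixed library, or an LLM; swapping the stub dictionary for LLM-emitted code is the step the surveyed works automate.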
Dikshit Chauhan
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077
Bapi Dutta
National University of Singapore
Computational Intelligence, Operations Research
Indu Bala
School of Computer and Mathematical Sciences, University of Adelaide, Adelaide 5005, Australia
Niki van Stein
Leiden University
Explainable AI, Automated Algorithm Discovery, Deep Learning, Bayesian Optimization
Thomas Bäck
Leiden Institute of Advanced Computer Science, Leiden University, Leiden, Netherlands
Anupam Yadav
Department of Mathematics and Computing, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar - 144011, India