🤖 AI Summary
This survey examines the bidirectional synergy between large language models (LLMs) and evolutionary computation (EC). In one direction, EC techniques such as genetic algorithms and evolution strategies optimize LLM components, including prompt engineering, hyperparameter tuning, fine-tuning, and architecture search. In the other direction, LLMs enhance EC through automated metaheuristic design, algorithm configuration, and adaptive heuristic generation. The survey also reviews emerging co-evolutionary frameworks that couple the two paradigms, surveys applications across diverse domains, and distills open challenges, including computational cost, interpretability, and algorithmic convergence, as directions for future hybrid approaches.
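The first direction, EC optimizing LLM components, can be illustrated with a toy sketch of evolutionary prompt search. The fitness function below is a hypothetical stand-in (keyword overlap) for what would in practice be an LLM evaluation of downstream task quality; the vocabulary and operators are illustrative, not from the survey.

```python
import random

# Hypothetical stand-in for LLM-based evaluation: score a prompt by how many
# task-relevant keywords it contains. A real system would call an LLM and
# measure task accuracy instead.
KEYWORDS = {"step", "reason", "concise", "verify"}
VOCAB = ["think", "step", "by", "reason", "carefully", "concise",
         "verify", "answer", "explain", "briefly"]

def fitness(prompt):
    return len(set(prompt.lower().split()) & KEYWORDS)

def mutate(prompt, rate=0.3):
    # Replace each word with a random vocabulary word with probability `rate`.
    return " ".join(random.choice(VOCAB) if random.random() < rate else w
                    for w in prompt.split())

def crossover(a, b):
    # One-point crossover on word sequences.
    wa, wb = a.split(), b.split()
    cut = len(wa) // 2
    return " ".join(wa[:cut] + wb[cut:])

def evolve_prompts(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [" ".join(random.choices(VOCAB, k=6)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                # elitist: parents survive
    return max(pop, key=fitness)

best = evolve_prompts()
print(best, fitness(best))
```

The same loop applies unchanged when the fitness call is replaced by an expensive LLM evaluation, which is why selection pressure and small populations matter in the surveyed systems.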
📝 Abstract
Integrating Large Language Models (LLMs) and Evolutionary Computation (EC) represents a promising avenue for advancing artificial intelligence by combining powerful natural language understanding with optimization and search capabilities. This survey explores the synergistic potential of LLMs and EC, reviewing their intersections, complementary strengths, and emerging applications. It first examines how EC techniques enhance LLMs by automating and refining key components such as prompt engineering, hyperparameter tuning, fine-tuning, and architecture search. It then investigates how LLMs improve EC by automating metaheuristic design, tuning evolutionary algorithms, and generating adaptive heuristics, thereby increasing efficiency and scalability. Emerging co-evolutionary frameworks are discussed, showcasing applications across diverse fields while acknowledging challenges such as computational cost, interpretability, and algorithmic convergence. The survey concludes by identifying open research questions and advocating for hybrid approaches that combine the strengths of both paradigms.
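The second direction, LLMs improving EC, can be sketched as an automated heuristic-design loop: candidate operators are proposed, scored on a benchmark, and the best is kept. Here the LLM is replaced by a hypothetical stub returning hand-written mutation operators, and the benchmark is OneMax under a (1+1) EA; in the surveyed systems the operators would be program text synthesized by an LLM.

```python
import random

def flip_one(bits):
    # Flip a single random bit.
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def flip_each_small_p(bits, p=1 / 16):
    # Standard bit-flip mutation: flip each bit independently with prob. p.
    return [1 - b if random.random() < p else b for b in bits]

def greedy_flip_zero(bits):
    # Problem-specific heuristic for OneMax: set one random zero bit to one.
    zeros = [i for i, b in enumerate(bits) if b == 0]
    if not zeros:
        return bits[:]
    i = random.choice(zeros)
    return bits[:i] + [1] + bits[i + 1:]

def llm_propose_heuristics():
    """Hypothetical stand-in for an LLM call that synthesizes operators."""
    return [flip_one, flip_each_small_p, greedy_flip_zero]

def run_ea(mutate, n=32, steps=200, seed=0):
    """(1+1) EA on OneMax: accept the mutant if it is no worse."""
    random.seed(seed)
    bits = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        child = mutate(bits)
        if sum(child) >= sum(bits):
            bits = child
    return sum(bits)

# Score each proposed heuristic and keep the best, as an automated
# metaheuristic-design loop would.
scores = {h.__name__: run_ea(h) for h in llm_propose_heuristics()}
best_heuristic = max(scores, key=scores.get)
print(scores, best_heuristic)
```

The scoring harness is the interesting part: it gives the language model a closed feedback signal, so proposal and evaluation can iterate without human tuning.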