Can Large Language Models Be Trusted as Black-Box Evolutionary Optimizers for Combinatorial Problems?

📅 2025-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional individual-level LLM invocation for combinatorial optimization suffers from low computational efficiency and insufficient output fidelity, limiting its reliability as a black-box evolutionary optimizer. Method: We propose a population-level LLM optimization paradigm featuring multi-stage prompt engineering, an error-detection-and-correction feedback loop, and a quantitative fidelity metric. We systematically evaluate LLMs' performance as evolutionary operators (selection, mutation, and crossover) and integrate a robust error-correction mechanism to enhance operational stability. Contribution/Results: Experiments across multiple canonical combinatorial optimization benchmarks demonstrate significant improvements in both solution quality and computational efficiency. Our approach validates the feasibility of deploying LLMs as stable, high-fidelity evolutionary operators, establishing a novel, general-purpose LLM-driven optimization paradigm with a reproducible technical implementation.
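The population-level paradigm with an error-detection-and-correction loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a permutation encoding (e.g. TSP tours), and `llm_call` is a hypothetical stand-in for a real LLM invocation that receives the whole population in one prompt and returns candidate offspring.

```python
def validate(tour, n):
    """Fidelity check: a valid tour is a permutation of cities 0..n-1."""
    return sorted(tour) == list(range(n))

def repair(tour, n):
    """Error correction: keep the first occurrence of each valid city,
    then append any missing cities to restore a full permutation."""
    seen, fixed = set(), []
    for c in tour:
        if 0 <= c < n and c not in seen:
            seen.add(c)
            fixed.append(c)
    fixed.extend(c for c in range(n) if c not in seen)
    return fixed

def llm_evolve_population(population, n, llm_call):
    """Population-level invocation: one batched call transforms the whole
    population, followed by a detect-and-correct pass on each child."""
    children = llm_call(population)  # hypothetical batched LLM operator
    return [child if validate(child, n) else repair(child, n)
            for child in children]
```

The key contrast with individual-level invocation is that `llm_call` is issued once per generation rather than once per individual, while `validate`/`repair` absorb the LLM's output uncertainty (duplicated or dropped cities) so every returned child is a feasible solution.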

📝 Abstract
Evolutionary computation excels at complex optimization but demands deep domain knowledge, restricting its accessibility. Large Language Models (LLMs), with their extensive knowledge, offer a promising remedy that could democratize the optimization paradigm. Although LLMs possess significant capabilities, they may not be universally effective, particularly since evolutionary optimization encompasses multiple stages. It is therefore imperative to evaluate the suitability of LLMs as evolutionary optimizers (EVO). Thus, we establish a series of rigorous standards to thoroughly examine the fidelity of LLM-based EVO output at different stages of evolutionary optimization, then introduce a robust error-correction mechanism to mitigate output uncertainty. Furthermore, we explore a cost-efficient method that operates directly on entire populations, achieving excellent effectiveness in contrast to individual-level optimization. Through extensive experiments, we rigorously validate the performance of LLMs as operators for combinatorial problems. Our findings provide critical insights and valuable observations, advancing the understanding and application of LLM-based optimization.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Combinatorial Optimization
Evolutionary Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Evolutionary Optimization
Batch Processing Method