🤖 AI Summary
Emerging combinatorial optimization problems—such as unit-load pre-marshalling—often lack efficient, domain-specific heuristic algorithms. Method: The authors propose Contextual Evolution of Heuristics (CEoH), an extension of the Evolution of Heuristics (EoH) framework that uses large language models (LLMs) to automatically generate high-quality, interpretable heuristic rules. CEoH incorporates problem-specific descriptions into the prompts, strengthening in-context learning during evolutionary heuristic generation and improving the consistency of heuristics produced by smaller LLMs. Contribution/Results: Computational experiments show that CEoH enables smaller LLMs to generate high-quality heuristics more consistently, in some cases outperforming larger models, while larger models perform robustly with or without contextualized prompts. The generated heuristics also scale to diverse instance configurations, supporting the practicality of LLM-driven automated heuristic discovery for niche combinatorial optimization problems.
📝 Abstract
Combinatorial optimization problems often rely on heuristic algorithms to generate efficient solutions. However, the manual design of heuristics is resource-intensive and constrained by the designer's expertise. Recent advances in artificial intelligence, particularly large language models (LLMs), have demonstrated the potential to automate heuristic generation through evolutionary frameworks. However, recent works focus only on well-known combinatorial optimization problems, such as the traveling salesman problem and the online bin packing problem, when designing constructive heuristics. This study investigates whether LLMs can effectively generate heuristics for a niche, not yet broadly researched optimization problem, using the unit-load pre-marshalling problem as an example case. We propose the Contextual Evolution of Heuristics (CEoH) framework, an extension of the Evolution of Heuristics (EoH) framework, which incorporates problem-specific descriptions to enhance in-context learning during heuristic generation. Through computational experiments, we evaluate CEoH against EoH and compare the results. The results indicate that CEoH enables smaller LLMs to generate high-quality heuristics more consistently and even outperform larger models. Larger models demonstrate robust performance with or without contextualized prompts. The generated heuristics exhibit scalability to diverse instance configurations.
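The core loop the abstract describes—an LLM repeatedly proposing heuristic variants from a contextualized prompt, with evolutionary selection keeping the best performers—can be sketched as below. This is a hypothetical illustration, not the paper's implementation: the LLM call is mocked by a random parameter perturbation so the example runs standalone, and the fitness function is a toy stand-in for evaluating heuristics on pre-marshalling benchmark instances. The names `mock_llm_propose`, `ceoh_loop`, and `PROBLEM_CONTEXT` are invented for this sketch.

```python
import random

# Problem-specific description that CEoH-style prompting would inject
# into every generation request (abbreviated, illustrative only).
PROBLEM_CONTEXT = (
    "Unit-load pre-marshalling: reorder unit loads in a storage block "
    "so that later retrievals need no additional re-handling moves."
)

def mock_llm_propose(parent, context, rng):
    """Stand-in for an LLM call. A real CEoH step would send `context`
    plus the parent heuristic's code in the prompt and parse a new
    scoring function from the model's reply; here we just perturb a
    numeric parameter of the parent heuristic."""
    return {"weight": parent["weight"] + rng.uniform(-0.5, 0.5)}

def evaluate(heuristic):
    """Toy fitness: distance of the heuristic's weight from an 'ideal'
    value of 1.0 (lower is better), standing in for average solution
    quality over a set of benchmark instances."""
    return abs(heuristic["weight"] - 1.0)

def ceoh_loop(generations=30, pop_size=5, seed=0):
    """Minimal elitist evolutionary loop over heuristic candidates."""
    rng = random.Random(seed)
    population = [{"weight": rng.uniform(-2, 2)} for _ in range(pop_size)]
    for _ in range(generations):
        parent = min(population, key=evaluate)        # select current best
        child = mock_llm_propose(parent, PROBLEM_CONTEXT, rng)
        population.append(child)
        population.sort(key=evaluate)                 # rank by fitness
        population = population[:pop_size]            # truncate: keep best
    return min(population, key=evaluate)

best = ceoh_loop()
print(f"best heuristic weight: {best['weight']:.3f}")
```

Because the best candidate is always retained, fitness is monotonically non-increasing across generations; the contextualized prompt (`PROBLEM_CONTEXT`) is where CEoH differs from plain EoH, which would issue the same mutation request without the problem description.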