Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models

📅 2025-05-19
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address two key bottlenecks in leveraging large language models (LLMs) for combinatorial optimization problems (COPs)—unspecific search directions and high heuristic-evaluation overhead—this paper proposes the Hercules algorithm, built on two mechanisms: Core Abstraction Prompting (CAP) and few-shot Performance Prediction Prompting (PPP). CAP abstracts the core components of elite heuristics and injects them into prompts as prior knowledge, with a theoretical proof that this reduces unspecificity. PPP, a first-of-its-kind method for the Heuristic Generation (HG) task, has LLMs predict the fitness of newly derived heuristics from their semantic similarity to previously evaluated ones, supported by two tailored mechanisms that improve predictive accuracy and flag unreliable predictions. Experiments across four HG tasks, five COPs, and eight LLMs show state-of-the-art performance, and the PPP-equipped variant, Hercules-P, substantially reduces evaluation cost. Ablation studies confirm the effectiveness of each component.

📝 Abstract
Recent studies have exploited Large Language Models (LLMs) to autonomously generate heuristics for solving Combinatorial Optimization Problems (COPs), by prompting LLMs to first provide search directions and then derive heuristics accordingly. However, the absence of task-specific knowledge in prompts often leads LLMs to provide unspecific search directions, obstructing the derivation of well-performing heuristics. Moreover, evaluating the derived heuristics remains resource-intensive, especially for semantically equivalent ones, which incur avoidable resource expenditure. To enable LLMs to provide specific search directions, we propose the Hercules algorithm, which leverages our designed Core Abstraction Prompting (CAP) method to abstract the core components from elite heuristics and incorporate them as prior knowledge in prompts. We theoretically prove the effectiveness of CAP in reducing unspecificity and provide empirical results in this work. To reduce the computing resources required to evaluate the derived heuristics, we propose few-shot Performance Prediction Prompting (PPP), a first-of-its-kind method for the Heuristic Generation (HG) task. PPP leverages LLMs to predict the fitness values of newly derived heuristics by analyzing their semantic similarity to previously evaluated ones. We further develop two tailored mechanisms for PPP to enhance predictive accuracy and identify unreliable predictions, respectively. The use of PPP makes Hercules more resource-efficient, and we name this variant Hercules-P. Extensive experiments across four HG tasks, five COPs, and eight LLMs demonstrate that Hercules outperforms state-of-the-art LLM-based HG algorithms, while Hercules-P excels at minimizing the required computing resources. In addition, we illustrate the effectiveness of CAP, PPP, and the other proposed mechanisms through ablation studies.
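The PPP idea described above—predicting a new heuristic's fitness from its similarity to already-evaluated heuristics, with a fallback when no close match exists—can be sketched roughly as follows. The paper queries an LLM for semantic similarity; the string-based similarity below is a cheap illustrative stand-in, and all function names here are hypothetical, not the authors' interface.

```python
from difflib import SequenceMatcher

def similarity(code_a: str, code_b: str) -> float:
    # Stand-in for LLM-judged semantic similarity between two heuristics.
    return SequenceMatcher(None, code_a, code_b).ratio()

def predict_fitness(new_code, evaluated, reliability_threshold=0.6):
    """Predict fitness as a similarity-weighted average of known values.

    evaluated: list of (code, fitness) pairs already run on the COP.
    Returns (prediction, reliable): if even the most similar evaluated
    heuristic falls below the threshold, the prediction is flagged
    unreliable and the caller should fall back to a real evaluation
    (mirroring PPP's reliability-discrimination mechanism).
    """
    sims = [(similarity(new_code, code), fit) for code, fit in evaluated]
    best_sim = max(s for s, _ in sims)
    total = sum(s for s, _ in sims) or 1.0
    prediction = sum(s * f for s, f in sims) / total
    return prediction, best_sim >= reliability_threshold
```

A heuristic nearly identical to a previously evaluated one gets a confident prediction close to that heuristic's fitness, so the expensive evaluation can be skipped; a dissimilar one is flagged for true evaluation.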
Problem

Research questions and friction points this paper is trying to address.

Generating specific heuristics for combinatorial optimization using LLMs
Reducing resource-intensive heuristic evaluation in optimization tasks
Improving accuracy and efficiency of LLM-based heuristic generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Core Abstraction Prompting (CAP) for specific search directions
Few-shot Performance Prediction Prompting (PPP) for resource efficiency
Hercules algorithm combining CAP and PPP for optimization
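The overall Hercules loop—select elite heuristics, use CAP to abstract their core components, derive a new heuristic conditioned on that abstraction, then evaluate it—can be sketched as below, assuming lower fitness is better (minimization). All names, prompts, and the `llm`/`evaluate` callables are illustrative assumptions, not the paper's actual API.

```python
def evolve(llm, evaluate, population, generations=3, n_elites=2):
    """One possible shape of a Hercules-style generate/evaluate loop.

    llm: callable taking a prompt string and returning generated text.
    evaluate: callable scoring a heuristic's code on the target COP.
    population: list of {"code": str, "fitness": float} dicts.
    """
    for _ in range(generations):
        # Select the elite (lowest-fitness) heuristics of the population.
        elites = sorted(population, key=lambda h: h["fitness"])[:n_elites]
        # CAP: ask the LLM to abstract the elites' shared core components,
        # which then serve as prior knowledge in the derivation prompt.
        core = llm("Abstract the core idea shared by these heuristics:\n"
                   + "\n".join(h["code"] for h in elites))
        child_code = llm("Using this core idea, write a new heuristic:\n" + core)
        population.append({"code": child_code, "fitness": evaluate(child_code)})
    return min(population, key=lambda h: h["fitness"])
```

In Hercules-P, the `evaluate` call would first go through PPP and only run a true evaluation when the prediction is flagged unreliable.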
Xuan Wu
College of Computer Science and Technology, Jilin University, Changchun, Jilin, China
Di Wang
LILY Research Centre, Nanyang Technological University, Singapore
Chunguo Wu
Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China
Lijie Wen
School of Software, Tsinghua University, Beijing, China
Chunyan Miao
Nanyang Technological University
human agent interaction, human computation, cognitive agents, incentives, serious games
Yubin Xiao
Jilin University
Neural combinatorial optimization
You Zhou
College of Software, Jilin University, Changchun, Jilin, China