CALM: Co-evolution of Algorithms and Language Model for Automatic Heuristic Design

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional heuristic design for complex optimization problems relies heavily on expert knowledge and incurs high trial-and-error costs. Method: This paper proposes an automated heuristic design framework in which a large language model (LLM) and an evolutionary search co-evolve. A quantized 7B LLM (INT4) is jointly optimized with the evolutionary algorithm: reinforcement learning fine-tunes the model's parameters based on the quality of the heuristics it generates, while hybrid prompt engineering steers generation verbally, aligning model capability and heuristic quality in both directions. The framework supports local deployment on a single 24GB GPU. Contribution/Results: Experiments demonstrate state-of-the-art performance across multiple combinatorial optimization benchmarks, outperforming both prior automated methods and API-based LLM approaches that rely solely on prompt engineering. It reduces computational overhead by 57% and requires near-zero human intervention, establishing a scalable, low-cost paradigm for automated algorithm design.

📝 Abstract
Tackling complex optimization problems often relies on expert-designed heuristics, typically crafted through extensive trial and error. Recent advances demonstrate that large language models (LLMs), when integrated into well-designed evolutionary search frameworks, can autonomously discover high-performing heuristics at a fraction of the traditional cost. However, existing approaches predominantly rely on verbal guidance, i.e., manipulating the prompt generation process, to steer the evolution of heuristics, without adapting the underlying LLM. We propose a hybrid framework that combines verbal and numerical guidance, the latter achieved by fine-tuning the LLM via reinforcement learning based on the quality of generated heuristics. This joint optimization allows the LLM to co-evolve with the search process. Our method outperforms state-of-the-art (SOTA) baselines across various optimization tasks, running locally on a single 24GB GPU using a 7B model with INT4 quantization. It surpasses methods that rely solely on verbal guidance, even when those use significantly more powerful API-based models.
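The hybrid loop the abstract describes, verbal guidance shaping prompts while numerical guidance adapts the generator, can be sketched roughly as follows. Everything here is an illustrative assumption rather than the paper's implementation: the toy knapsack-style task, the Gaussian-perturbation stand-in for LLM proposals, and the `temperature` knob that mimics the model parameters RL fine-tuning would adjust.

```python
import random

def evaluate(heuristic, instances):
    """Score a candidate heuristic (here: a 2-weight greedy scoring rule)
    on toy knapsack-like instances; higher is better."""
    total = 0.0
    for values, weights, cap in instances:
        order = sorted(range(len(values)),
                       key=lambda i: heuristic[0] * values[i]
                                   - heuristic[1] * weights[i],
                       reverse=True)
        load = gain = 0
        for i in order:
            if load + weights[i] <= cap:
                load += weights[i]
                gain += values[i]
        total += gain
    return total

def llm_propose(prompt, parent, temperature):
    """Placeholder for LLM-driven mutation: perturb the parent heuristic.
    A real system would condition the LLM on `prompt`."""
    return tuple(p + random.gauss(0, temperature) for p in parent)

def co_evolve(instances, generations=30, pop_size=8, seed=0):
    random.seed(seed)
    population = [(random.random(), random.random()) for _ in range(pop_size)]
    temperature = 0.5
    best = max(population, key=lambda h: evaluate(h, instances))
    for _ in range(generations):
        # Verbal guidance: the prompt summarizes the current elite.
        prompt = f"best so far: {best}"
        children = [llm_propose(prompt, best, temperature)
                    for _ in range(pop_size)]
        population = sorted(population + children,
                            key=lambda h: evaluate(h, instances),
                            reverse=True)[:pop_size]
        # Numerical guidance: a reward signal from heuristic quality
        # adapts the generator, standing in for RL fine-tuning.
        new_best = population[0]
        if evaluate(new_best, instances) > evaluate(best, instances):
            best = new_best
        else:
            temperature *= 0.9  # exploit more once proposals stop helping
    return best, evaluate(best, instances)
```

The key structural point is that feedback flows in both directions: evaluation scores select heuristics (the evolutionary half) and also update the generator itself (the RL half), which prompt-only methods omit.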
Problem

Research questions and friction points this paper is trying to address.

Expert-designed heuristics require costly trial and error
Existing LLM-based evolutionary search relies on verbal (prompt) guidance alone, leaving the underlying LLM unadapted
Matching SOTA heuristic discovery under tight local compute budgets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework combining verbal (prompt-level) and numerical (weight-level) guidance
Fine-tunes the LLM via reinforcement learning on the quality of generated heuristics
Co-evolves the LLM with the heuristic search, running a 7B INT4 model on a single 24GB GPU
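The second bullet, fine-tuning the generator from heuristic quality, is in spirit a policy-gradient update. A minimal REINFORCE-style sketch under our own simplification (a softmax policy over a few discrete mutation operators, rather than a full LLM):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(logits, action, reward, baseline, lr=0.1):
    """Shift logits so actions with above-baseline reward become more
    likely: grad of log pi(action) is onehot(action) - pi."""
    probs = softmax(logits)
    adv = reward - baseline
    return [l + lr * adv * ((1.0 if i == action else 0.0) - p)
            for i, (l, p) in enumerate(zip(logits, probs))]
```

Repeatedly rewarding one action raises its probability, which is the mechanism by which heuristic quality would steer the model's generation distribution over time.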