Classical Planning with LLM-Generated Heuristics: Challenging the State of the Art with Python Code

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) generalize poorly in planning tasks and struggle to produce reliable plans, especially on out-of-distribution or scaled-up instances. To address this, the paper proposes a framework that uses LLMs to directly generate executable, evaluable Python heuristic functions, which are then plugged into a greedy best-first search planner. Unlike prior approaches, it requires no fine-tuning or complex prompt engineering: it samples candidate heuristics from the LLM and validates them automatically via execution-based evaluation, selecting the strongest one. This is the first work to use LLMs for end-to-end generation of domain-specific, executable, and empirically assessable heuristic code. Experiments across multiple classical planning domains show substantial improvements in success rates on unseen tasks. In several domains, the method expands fewer states than a highly optimized C++ baseline and matches the performance of state-of-the-art learning-based domain-specific planners, achieving both strong generalization and computational efficiency.

📝 Abstract
In recent years, large language models (LLMs) have shown remarkable capabilities in various artificial intelligence problems. However, they fail to plan reliably, even when prompted with a detailed definition of the planning task. Attempts to improve their planning capabilities, such as chain-of-thought prompting, fine-tuning, and explicit "reasoning", still yield incorrect plans and usually fail to generalize to larger tasks. In this paper, we show how to use LLMs to generate correct plans, even for out-of-distribution tasks of increasing size. For a given planning domain, we ask an LLM to generate several domain-dependent heuristic functions in the form of Python code, evaluate them on a set of training tasks within a greedy best-first search, and choose the strongest one. The resulting LLM-generated heuristics solve many more unseen test tasks than state-of-the-art domain-independent heuristics for classical planning. They are even competitive with the strongest learning algorithm for domain-dependent planning. These findings are especially remarkable given that our proof-of-concept implementation is based on an unoptimized Python planner and the baselines all build upon highly optimized C++ code. In some domains, the LLM-generated heuristics expand fewer states than the baselines, revealing that they are not only efficiently computable, but sometimes even more informative than the state-of-the-art heuristics. Overall, our results show that sampling a set of planning heuristic function programs can significantly improve the planning capabilities of LLMs.
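To make the search setup in the abstract concrete, here is a minimal sketch of greedy best-first search guided by a heuristic function, the evaluation harness into which the LLM-generated heuristics would be plugged. The `is_goal`, `successors`, and `heuristic` callables are hypothetical placeholders, not the paper's actual planner interface.

```python
import heapq
import itertools

def greedy_best_first_search(initial_state, is_goal, successors, heuristic):
    """Expand states purely in order of heuristic value (path cost is ignored).

    Returns (plan, expansions), where plan is a list of actions or None
    if no plan was found.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(heuristic(initial_state), next(counter), initial_state, [])]
    visited = {initial_state}
    expansions = 0
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        expansions += 1
        if is_goal(state):
            return plan, expansions
        for action, succ in successors(state):
            if succ not in visited:
                visited.add(succ)
                heapq.heappush(
                    frontier,
                    (heuristic(succ), next(counter), succ, plan + [action]),
                )
    return None, expansions
```

Because only the heuristic value orders the frontier, a more informative heuristic directly translates into fewer expansions, which is the quantity the paper compares against its C++ baselines.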
Problem

Research questions and friction points this paper is trying to address.

Improving LLM reliability in generating correct plans
Developing domain-dependent heuristics via Python code
Outperforming state-of-the-art domain-independent heuristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated domain-dependent heuristic functions
Greedy best-first search for heuristic evaluation
Python code implementation for planning heuristics
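The sample-and-select step listed above can be sketched as a short loop: draw several candidate heuristic programs from the LLM, discard any that fail to execute, score the rest by running them in greedy best-first search on the training tasks, and keep the strongest. The `sample_heuristic_from_llm` and `run_gbfs` helpers and the scoring rule (tasks solved, then fewer expansions) are assumptions for illustration, not the paper's released code.

```python
def select_best_heuristic(sample_heuristic_from_llm, run_gbfs, training_tasks,
                          num_candidates=10):
    """Pick the strongest of several LLM-sampled heuristic programs."""
    best, best_score = None, None
    for _ in range(num_candidates):
        source = sample_heuristic_from_llm()  # Python source defining heuristic()
        try:
            namespace = {}
            exec(source, namespace)           # execution-based validation
            heuristic = namespace["heuristic"]
        except Exception:
            continue                          # discard candidates that fail to run
        solved, expansions = 0, 0
        for task in training_tasks:
            plan, exp = run_gbfs(task, heuristic)
            expansions += exp
            if plan is not None:
                solved += 1
        # More solved tasks wins; ties broken by fewer total expansions.
        score = (solved, -expansions)
        if best_score is None or score > best_score:
            best, best_score = heuristic, score
    return best
```

Executing untrusted generated code this way would need sandboxing and time limits in practice; the sketch omits both for brevity.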
Augusto B. Corrêa
University of Oxford
André G. Pereira
Federal University of Rio Grande do Sul
Jendrik Seipp
Senior Associate Professor, Linköping University
Artificial Intelligence · Automated Planning · Machine Learning · Heuristic Search