Improved Generalized Planning with LLMs through Strategy Refinement and Reflection

📅 2025-08-19
🤖 AI Summary
To address the generalization failures of large language models (LLMs) in generating generalized plans for PDDL domains, which are often caused by erroneous initial strategies, this paper proposes a method that integrates pseudocode-based strategy modeling, automated debugging, and reflective selection among multiple program variants. The approach makes three key contributions: (1) explicit strategy representation as executable pseudocode, so that logical errors can be identified and fixed before program generation; (2) a reflective prompting step in which the LLM pinpoints the reason for an observed plan failure; and (3) systematic generation of program variants, validated on planning tasks to select the best generalized plan. Evaluated on 17 standard benchmark domains, the method substantially improves planning accuracy and robustness without degrading performance on any domain. Notably, in 12 domains the best generated program solves all automatically instantiated tasks within the domain, demonstrating full-domain generalization.

📝 Abstract
LLMs have recently been used to generate Python programs representing generalized plans in PDDL planning, i.e., plans that generalize across the tasks of a given PDDL domain. Previous work proposed a framework consisting of three steps: the LLM first generates a summary and then a strategy for the domain, both in natural language, and then implements that strategy as a Python program, which is debugged on example planning tasks. In that work, only one strategy is generated and passed directly to program generation. If the strategy is incorrect, its implementation will therefore result in an incorrect generalized plan. Here, we introduce an approach that generates the strategy in the form of pseudocode and enables automatic debugging of the pseudocode, hence allowing us to identify and fix errors prior to the generation of the generalized plan itself. Additionally, we extend the Python debugging phase with a reflection step prompting the LLM to pinpoint the reason for the observed plan failure. Finally, we take inspiration from LLM code generation to produce several program variants and pick the best one. Running experiments on 17 benchmark domains, we show that these extensions substantially improve (and never degrade) the quality of the generalized plans. In 12 of the domains, our best Python programs solve all tasks that can be generated with the respective instance generator.
Problem

Research questions and friction points this paper is trying to address.

Debugging incorrect pseudocode strategies before plan generation
Reflecting on plan failures to identify root causes
Generating multiple program variants to select the best
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates pseudocode strategy for debugging
Adds reflection step to pinpoint plan failures
Produces multiple program variants for selection
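The variant-selection idea in the last bullet can be sketched as follows. This is a minimal illustration under assumptions: `validate` and the toy `variants` are hypothetical stand-ins, not the paper's implementation, which validates LLM-generated Python programs against actual instantiated PDDL tasks.

```python
# Hypothetical sketch: given several candidate generalized-plan programs,
# validate each on a set of example tasks and keep the best performer.

def validate(program, tasks):
    """Count how many example tasks the candidate program solves."""
    return sum(1 for task in tasks if program(task))

def select_best(variants, tasks):
    """Pick the variant that solves the most example tasks."""
    return max(variants, key=lambda p: validate(p, tasks))

# Toy stand-ins: each "program" is a predicate saying whether it solves a task.
tasks = [1, 2, 3, 4]
variants = [
    lambda t: t < 2,       # solves 1 of 4 tasks
    lambda t: t % 2 == 0,  # solves 2 of 4 tasks
    lambda t: True,        # solves all 4 tasks
]
best = select_best(variants, tasks)
print(validate(best, tasks))  # → 4
```

In the paper's setting, a failed validation additionally triggers the reflection step, where the LLM is prompted to explain the plan failure before the program is revised.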
Katharina Stein
Saarland Informatics Campus, Saarland University, Saarbrücken, Germany
Nils Hodel
Saarland Informatics Campus, Saarland University, Saarbrücken, Germany
Daniel Fišer
Aalborg University
Artificial Intelligence, Automated Planning, Heuristic Search
Jörg Hoffmann
Saarland Informatics Campus, Saarland University, Saarbrücken, Germany; German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
Michael Katz
Principal Research Staff Member, IBM Research AI
Artificial Intelligence, Automated Planning, Combinatorial Search, Heuristic Search
Alexander Koller
Professor of Computational Linguistics, Saarland University, Saarland Informatics Campus
Computational Linguistics, Artificial Intelligence