The 2025 Planning Performance of Frontier Large Language Models

📅 2025-11-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the end-to-end PDDL planning capabilities of three frontier large language models (LLMs) as of 2025—DeepSeek-R1, Gemini 2.5 Pro, and GPT-5—benchmarked on a subset of domains from the most recent Learning Track of the International Planning Competition, with the classical planner LAMA as reference. Method: a direct prompting approach in which LLMs receive raw PDDL domain and problem descriptions as input and must generate complete, executable action sequences without intermediate scaffolding or external tools. Contribution/Results: On standard planning domains, GPT-5 solves a number of tasks competitive with LAMA, and when domains are obfuscated to test pure reasoning, all LLMs degrade less severely than previously reported for earlier models—substantially narrowing the performance gap between LLMs and symbolic planners. Overall, these frontier models exhibit a marked leap in planning competence over prior generations, providing evidence relevant to LLM-based autonomous agent decision-making.
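The direct prompting setup described in the summary can be sketched as follows. The prompt wording and the plan-extraction regex are illustrative assumptions, not the paper's exact protocol; only the overall shape (raw PDDL in, action sequence out, no scaffolding) comes from the study.

```python
import re

def build_prompt(domain_pddl: str, problem_pddl: str) -> str:
    """Assemble a direct prompt from raw PDDL, with no scaffolding or tools."""
    return (
        "Solve the following planning task. Output only the plan, one "
        "ground action per line, e.g. (move loc1 loc2).\n\n"
        f"Domain:\n{domain_pddl}\n\nProblem:\n{problem_pddl}\n"
    )

def extract_plan(response: str) -> list[str]:
    """Pull parenthesized ground actions out of the model's free-form reply."""
    return re.findall(r"\([a-z][\w-]*(?: [\w-]+)*\)", response, flags=re.IGNORECASE)

# Example: parsing a hypothetical model reply.
reply = "Here is the plan:\n(pick-up b1)\n(stack b1 b2)\nDone."
print(extract_plan(reply))  # → ['(pick-up b1)', '(stack b1 b2)']
```

An extracted plan like this would then be checked for executability against the problem, e.g. with a standard plan validator; the extraction step matters because models often wrap plans in explanatory prose.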

📝 Abstract
The capacity of Large Language Models (LLMs) for reasoning remains an active area of research, with the capabilities of frontier models continually advancing. We provide an updated evaluation of the end-to-end planning performance of three frontier LLMs as of 2025, where models are prompted to generate a plan from PDDL domain and task descriptions. We evaluate DeepSeek R1, Gemini 2.5 Pro, GPT-5 and as reference the planner LAMA on a subset of domains from the most recent Learning Track of the International Planning Competition. Our results show that on standard PDDL domains, the performance of GPT-5 in terms of solved tasks is competitive with LAMA. When the PDDL domains and tasks are obfuscated to test for pure reasoning, the performance of all LLMs degrades, though less severely than previously reported for other models. These results show substantial improvements over prior generations of LLMs, reducing the performance gap to planners on a challenging benchmark.
Problem

Research questions and friction points this paper is trying to address.

Evaluating end-to-end planning performance of frontier LLMs using PDDL
Testing LLM reasoning capabilities with obfuscated planning domains
Comparing GPT-5's planning performance against the classical planner LAMA
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct prompting: generating complete plans from raw PDDL without scaffolding or external tools
Obfuscating PDDL domains and tasks to isolate pure reasoning from memorized semantics
Unified benchmark comparing DeepSeek R1, Gemini 2.5 Pro, and GPT-5 with the planner LAMA
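The obfuscation idea—renaming PDDL symbols to meaningless tokens so that models cannot rely on natural-language cues—can be illustrated with a minimal sketch. The renaming scheme below (`o0`, `o1`, …) is a hypothetical illustration; the paper's actual obfuscation procedure is not specified in this summary.

```python
import re

def obfuscate(pddl: str, names: list[str]) -> str:
    """Consistently rename the given PDDL symbols to opaque tokens (o0, o1, ...),
    preserving the task's structure while removing semantic cues."""
    mapping = {name: f"o{i}" for i, name in enumerate(names)}
    # Whole-word matches only, so e.g. 'on' does not also hit 'ontable'.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, names)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], pddl)

snippet = "(:action pick-up :parameters (?x) :precondition (clear ?x))"
print(obfuscate(snippet, ["pick-up", "clear"]))
# → (:action o0 :parameters (?x) :precondition (o1 ?x))
```

Because the renaming is applied consistently across domain and problem files, the planning task stays formally identical; only the human-readable semantics disappear, which is what separates reasoning from memorization.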
Augusto B. Corrêa
University of Oxford, United Kingdom
André G. Pereira
Federal University of Rio Grande do Sul, Brazil
Jendrik Seipp
Senior Associate Professor, Linköping University
Artificial Intelligence · Automated Planning · Machine Learning · Heuristic Search