🤖 AI Summary
This study systematically evaluates the end-to-end PDDL planning capabilities of three frontier large language models (LLMs) as of 2025: DeepSeek-R1, Gemini 2.5 Pro, and GPT-5. The models are benchmarked on a subset of domains from the most recent Learning Track of the International Planning Competition, with the classical planner LAMA as a reference.
Method: We adopt a direct prompting approach: LLMs receive raw PDDL domain and problem descriptions as input and are tasked with generating complete, executable action sequences without intermediate scaffolding or external tools.
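To make the input format concrete, here is a minimal Blocksworld-style sketch of what a PDDL domain and task look like; this toy example is illustrative only and is not drawn from the paper's benchmark domains:

```pddl
;; Domain: defines predicates and one action with preconditions/effects
(define (domain blocksworld-mini)
  (:predicates (ontable ?x) (clear ?x) (handempty) (holding ?x))
  (:action pick-up
    :parameters (?x)
    :precondition (and (clear ?x) (ontable ?x) (handempty))
    :effect (and (not (ontable ?x)) (not (clear ?x))
                 (not (handempty)) (holding ?x))))

;; Task: initial state and goal for one concrete problem instance
(define (problem bw-1)
  (:domain blocksworld-mini)
  (:objects a)
  (:init (ontable a) (clear a) (handempty))
  (:goal (holding a)))
```

Given both descriptions as raw text, the LLM must output an executable action sequence, here simply `(pick-up a)`, which can then be checked step by step against the action's preconditions and effects.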
Contribution/Results: GPT-5 achieves a solved-task rate nearly on par with LAMA on standard planning tasks, and its performance degrades less under semantic obfuscation of the domains than previously reported for earlier models, substantially narrowing the gap between LLMs and symbolic planners. Overall, these frontier models exhibit a marked leap in planning competence over prior generations. To our knowledge, this is the first work to empirically validate, under a unified and rigorous benchmark, that contemporary LLMs approach the end-to-end PDDL planning capability of symbolic planners, providing critical evidence for LLM-based autonomous agent decision-making.
📝 Abstract
The capacity of Large Language Models (LLMs) for reasoning remains an active area of research, with the capabilities of frontier models continually advancing. We provide an updated evaluation of the end-to-end planning performance of three frontier LLMs as of 2025, where models are prompted to generate a plan from PDDL domain and task descriptions. We evaluate DeepSeek R1, Gemini 2.5 Pro, and GPT-5, with the planner LAMA as a reference, on a subset of domains from the most recent Learning Track of the International Planning Competition. Our results show that on standard PDDL domains, the performance of GPT-5 in terms of solved tasks is competitive with LAMA. When the PDDL domains and tasks are obfuscated to test for pure reasoning, the performance of all LLMs degrades, though less severely than previously reported for other models. These results show substantial improvements over prior generations of LLMs, reducing the performance gap to planners on a challenging benchmark.