How Far Are LLMs from Symbolic Planners? An NLP-Based Perspective

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically evaluates large language models (LLMs) on symbolic planning tasks, revealing pervasive issues—including action hallucination, logical inconsistencies, and low executability—that cause them to significantly underperform classical symbolic planners. Method: We propose the first NLP-informed framework for enhancing LLM planning reliability: a three-stage, NLP-driven repair pipeline comprising syntactic validation, semantic alignment, and action normalization, tightly coupled with a symbolic planner for plan completion and feasibility verification. Contribution/Results: Our approach establishes the first NLP-grounded evaluation paradigm for LLM planning capability; introduces a lightweight recovery mechanism co-designed from NLP and symbolic reasoning; and narrows the reliability gap between LLMs and classical planners. Experiments show that, on average, only the first 2.65 LLM-generated actions are executable, and that the pipeline raises the overall success rate from 21.9% to 27.5%, markedly improving planning quality and robustness.

📝 Abstract
The reasoning and planning abilities of Large Language Models (LLMs) have been a frequent topic of discussion in recent years. Their ability to take unstructured planning problems as input has made LLMs' integration into AI planning an area of interest. Nevertheless, LLMs are still not reliable as planners, with the generated plans often containing mistaken or hallucinated actions. Existing benchmarking and evaluation methods investigate planning with LLMs, focusing primarily on success rate as a quality indicator in various planning tasks, such as validating plans or planning in relaxed conditions. In this paper, we approach planning with LLMs as a natural language processing (NLP) task, given that LLMs are NLP models themselves. We propose a recovery pipeline consisting of an NLP-based evaluation of the generated plans, along with three stages to recover the plans through NLP manipulation of the LLM-generated plans, and eventually complete the plan using a symbolic planner. This pipeline provides a holistic analysis of LLM capabilities in the context of AI task planning, enabling a broader understanding of the quality of invalid plans. Our findings reveal no clear evidence of underlying reasoning during plan generation, and that a pipeline comprising an NLP-based analysis of the plans, followed by a recovery mechanism, still falls short of the quality and reliability of classical planners. On average, only the first 2.65 actions of the plan are executable, with the average length of symbolically generated plans being 8.4 actions. The pipeline still improves action quality and increases the overall success rate from 21.9% to 27.5%.
Problem

Research questions and friction points this paper is trying to address.

Evaluating the reliability of LLM planning from an NLP perspective
Recovering flawed LLM-generated plans through NLP-based manipulation
Comparing plan quality between LLMs and classical symbolic planners
Innovation

Methods, ideas, or system contributions that make the work stand out.

NLP-based evaluation of LLM-generated plans
Three-stage NLP recovery pipeline
Integration with symbolic planner
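The three recovery stages listed above can be sketched as a minimal pipeline. This is an illustrative assumption, not the authors' implementation: the toy action schemas, the regex-based syntax check, and the `difflib` fuzzy matching for hallucinated action names are all hypothetical stand-ins for the paper's syntactic validation, semantic alignment, and action normalization stages.

```python
# Hypothetical sketch of the three-stage NLP repair pipeline.
# ACTION_SCHEMAS and all stage implementations are illustrative
# assumptions; the paper's actual methods may differ.
import difflib
import re

# Assumed toy domain: action name -> expected argument count.
ACTION_SCHEMAS = {"pick-up": 1, "put-down": 1, "stack": 2, "unstack": 2}

def syntactic_validation(line):
    """Stage 1: accept only lines shaped like '(action arg ...)'."""
    m = re.match(r"\(?\s*([\w-]+)((?:\s+[\w-]+)*)\s*\)?$", line.strip())
    if not m:
        return None
    return m.group(1).lower(), m.group(2).split()

def semantic_alignment(name):
    """Stage 2: map a possibly hallucinated action name to the
    closest known schema, or None if nothing is close enough."""
    match = difflib.get_close_matches(name, ACTION_SCHEMAS, n=1, cutoff=0.6)
    return match[0] if match else None

def action_normalization(name, args):
    """Stage 3: enforce the schema's arity and emit a canonical step."""
    if len(args) != ACTION_SCHEMAS[name]:
        return None
    return f"({name} {' '.join(args)})"

def repair_plan(raw_plan):
    """Run the three stages, stopping at the first unrecoverable step;
    a symbolic planner would then complete this executable prefix."""
    repaired = []
    for line in raw_plan:
        parsed = syntactic_validation(line)
        if parsed is None:
            break
        name, args = parsed
        aligned = semantic_alignment(name)
        if aligned is None:
            break
        step = action_normalization(aligned, args)
        if step is None:
            break
        repaired.append(step)
    return repaired

print(repair_plan(["(pickup a)", "(stack a b)", "fly to the moon"]))
# → ['(pick-up a)', '(stack a b)']
```

The sketch mirrors the paper's finding that only a prefix of the generated plan is usually executable: repair stops at the first unrecoverable action, and the remaining steps are left for the symbolic planner to regenerate.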