Language Model as Planner and Formalizer under Constraints

📅 2025-10-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM planning research relies on simplistic environmental benchmarks, leading to overestimated capabilities and obscured safety risks. This work introduces the first fine-grained, multi-category natural-language-constrained benchmark to systematically evaluate LLMs’ planning and formalization abilities under complex semantic constraints. Experiments span four datasets, four state-of-the-art reasoning models, three formal languages, and five methodological paradigms, revealing an average performance drop of ~50% under natural-language constraints. Our contributions are threefold: (1) the first planning evaluation benchmark explicitly designed for real-world constraints; (2) empirical evidence demonstrating pronounced fragility of current LLM planners under semantic complexity and lexical diversity; and (3) critical risk insights and actionable directions for improving safety-critical planning systems.

📝 Abstract
LLMs have been widely used in planning, either as planners to generate action sequences end-to-end, or as formalizers to represent the planning domain and problem in a formal language that can derive plans deterministically. However, both lines of work rely on standard benchmarks that only include generic and simplistic environmental specifications, leading to potential overestimation of the planning ability of LLMs and safety concerns in downstream tasks. We bridge this gap by augmenting widely used planning benchmarks with manually annotated, fine-grained, and rich natural language constraints spanning four formally defined categories. Over 4 state-of-the-art reasoning LLMs, 3 formal languages, 5 methods, and 4 datasets, we show that the introduction of constraints not only consistently halves performance, but also significantly challenges robustness to problem complexity and lexical shift.

Problem

Research questions and friction points this paper is trying to address.

Standard planning benchmarks include only generic, simplistic environmental specifications
The planning ability of LLMs is likely overestimated as a result, raising safety concerns in downstream tasks
The robustness of LLM planners to problem complexity and lexical shift under rich constraints is untested

Innovation

Methods, ideas, or system contributions that make the work stand out.

Manually annotated, fine-grained natural language constraints, spanning four formally defined categories, added to widely used planning benchmarks
Systematic evaluation across 4 state-of-the-art reasoning LLMs, 3 formal languages, 5 methods, and 4 datasets
Empirical evidence that constraints consistently halve performance and degrade robustness to complexity and lexical shift
Cassie Huang
Drexel University

Stuti Mohan
Brown University

Ziyi Yang
Stefanie Tellex
Brown University

Li Zhang
Drexel University

Robotics · Natural Language · Artificial Intelligence