🤖 AI Summary
This work addresses the performance bottleneck of full grounding in classical planning, where the number of grounded actions and atoms grows exponentially with task size. To overcome this, the paper is the first to bring large language models (LLMs) into the partial grounding process. By exploiting the semantic and structural cues in PDDL domain and problem files, the approach heuristically identifies and prunes irrelevant objects, actions, and predicates, enabling efficient partial grounding. Unlike prior techniques based on dependency graphs or learned embeddings, the method achieves speedups of multiple orders of magnitude on seven challenging grounding benchmarks while maintaining comparable or even better plan quality in several domains.
📝 Abstract
Grounding is a critical step in classical planning, yet it often becomes a computational bottleneck due to the exponential growth in grounded actions and atoms as task size increases. Recent advances in partial grounding have addressed this challenge by incrementally grounding only the most promising operators, guided by predictive models. However, these approaches primarily rely on relational features or learned embeddings and do not leverage the textual and structural cues present in PDDL descriptions. We propose SPG-LLM, which uses LLMs to analyze the domain and problem files to heuristically identify potentially irrelevant objects, actions, and predicates prior to grounding, significantly reducing the size of the grounded task. Across seven hard-to-ground benchmarks, SPG-LLM achieves faster grounding, often by orders of magnitude, while delivering comparable or better plan costs in some domains.
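To make the core idea concrete, here is a toy sketch (not from the paper) of why pruning objects before grounding shrinks the grounded task: grounding an action schema enumerates all object tuples for its parameters, so removing objects cuts the grounding combinatorially. The LLM's relevance judgment is mocked here as a fixed set; the paper's actual prompts, model, and PDDL parsing are not shown.

```python
# Hypothetical sketch of LLM-guided partial grounding; `llm_relevant`
# stands in for an LLM's relevance judgment and is NOT a real API call.
from itertools import product

def prune_objects(objects, relevant):
    """Keep only objects judged relevant (here: a mocked LLM output)."""
    return [o for o in objects if o in relevant]

def ground_action(action_name, arity, objects):
    """Naively ground one action schema over all object tuples."""
    return [(action_name, combo) for combo in product(objects, repeat=arity)]

# Toy task: 6 objects, but suppose the LLM (e.g., after reading the
# :goal section of the problem file) marks only 2 of them as relevant.
objects = ["a", "b", "c", "d", "e", "f"]
llm_relevant = {"a", "b"}  # assumed LLM output

full = ground_action("move", 2, objects)                               # 6^2 = 36 groundings
partial = ground_action("move", 2, prune_objects(objects, llm_relevant))  # 2^2 = 4 groundings

print(len(full), len(partial))  # → 36 4
```

A binary action over 6 objects yields 36 ground instances, but only 4 after pruning; with higher-arity actions and thousands of objects, this gap is where the reported order-of-magnitude speedups come from.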