🤖 AI Summary
This work proposes a novel approach that leverages large language models (LLMs) to automatically generate qualitative numerical planning (QNP) abstractions for generalized planning. To address potential errors in LLM-generated abstractions, the authors design specialized prompting protocols that guide the model to extract relevant abstraction features from domain knowledge and training tasks, and to formally encode initial states, actions, and goals as QNP problems. An automated debugging mechanism is further introduced to detect and correct abstraction errors. This study represents the first application of LLMs to QNP abstraction generation. Experimental results demonstrate that, under the guidance of the debugging mechanism, certain LLMs can produce correct and effective QNP abstractions, substantially enhancing their utility in generalized planning.
📝 Abstract
Qualitative Numerical Planning (QNP) serves as an important abstraction model for generalized planning (GP), which aims to compute general plans that solve multiple instances at once. Recent work shows that large language models (LLMs) can function as generalized planners. This work investigates whether LLMs can serve as QNP abstraction generators for GP problems and how their abstractions can be repaired via automated debugging. We propose a prompting protocol: a GP domain and training tasks are given to LLMs, which are prompted to generate abstract features and then abstract the initial state, action set, and goal into a QNP problem. An automated debugging method is designed to detect abstraction errors and guide LLMs in fixing them. Experiments demonstrate that, when properly guided by automated debugging, some LLMs can generate useful QNP abstractions.