🤖 AI Summary
Manual MDP modeling in robotic probabilistic planning scales poorly and relies heavily on domain expertise. Method: This paper proposes an end-to-end framework integrating large language models (LLMs), logic programming, and formal verification to automatically synthesize verifiable MDPs from natural-language task descriptions. It uses LLMs to extract structured Prolog knowledge bases, combines reachability analysis with the Storm model checker to compute optimal policies, and employs formal verification to guarantee policy safety and correctness. Contribution/Results: The framework achieves a closed-loop, LLM-driven translation from unstructured text to formally verified MDPs, substantially lowering the modeling barrier. Evaluated across three human-robot interaction scenarios, it generated, verified, and executed correct policies, demonstrating its effectiveness and practicality.
📝 Abstract
We present a novel framework that integrates Large Language Models (LLMs) with automated planning and formal verification to streamline the creation and use of Markov Decision Processes (MDPs). Our system leverages LLMs to extract structured knowledge in the form of a Prolog knowledge base from natural language (NL) descriptions. It then automatically constructs an MDP through reachability analysis and synthesises optimal policies using the Storm model checker. The resulting policy is exported as a state-action table for execution. We validate the framework in three human-robot interaction scenarios, demonstrating its ability to produce executable policies with minimal manual effort. This work highlights the potential of combining language models with formal methods to enable more accessible and scalable probabilistic planning in robotics.
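To make the pipeline's output concrete, here is a minimal, self-contained sketch of the final step: solving a small MDP for maximum probability of reaching a goal state and exporting the result as a state-action policy table. The toy MDP below (states, action names like `ask` and `handover`, and all probabilities) is invented purely for illustration; the paper's framework builds the MDP from an LLM-extracted Prolog knowledge base and solves it with the Storm model checker, whereas this sketch uses hand-rolled value iteration.

```python
# Toy human-robot interaction MDP (hypothetical; for illustration only).
# transitions[state][action] = list of (probability, next_state) pairs.
transitions = {
    0: {"ask":      [(0.8, 1), (0.2, 0)],
        "wait":     [(1.0, 0)]},
    1: {"handover": [(0.9, 3), (0.1, 2)],
        "retry":    [(1.0, 0)]},
    2: {"recover":  [(0.5, 1), (0.5, 2)]},
    3: {},  # goal state (absorbing)
}
goal = {3}

def max_reach_policy(transitions, goal, iters=1000, tol=1e-9):
    """Value iteration for Pmax[F goal]; returns (state values, policy table)."""
    v = {s: (1.0 if s in goal else 0.0) for s in transitions}
    for _ in range(iters):
        delta = 0.0
        for s, acts in transitions.items():
            if s in goal or not acts:
                continue
            # Bellman backup: best expected goal-reaching probability over actions.
            best = max(sum(p * v[t] for p, t in succ) for succ in acts.values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    # Export the optimal policy as a state -> action table.
    policy = {s: max(acts, key=lambda a: sum(p * v[t] for p, t in acts[a]))
              for s, acts in transitions.items() if s not in goal and acts}
    return v, policy

values, policy = max_reach_policy(transitions, goal)
print(policy)   # e.g. {0: 'ask', 1: 'handover', 2: 'recover'}
```

The printed dictionary is the state-action table the framework hands to the robot's executor: at runtime, the controller looks up the current state and takes the listed action. A production pipeline would instead query Storm (e.g. via its Python bindings) with a PCTL property such as `Pmax=? [F "goal"]` and extract the induced scheduler.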