Verifying Memoryless Sequential Decision-making of Large Language Models

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the safety-verification challenge for large language models (LLMs) in memoryless sequential decision-making tasks. Methodologically, it embeds LLM policies into a Markov decision process (MDP) framework: states are encoded as natural-language prompts, the LLM's responses are parsed into actions, and the reachable state space is incrementally constructed and verified against PCTL safety properties using the Storm model checker. The tool integrates natively with Ollama and accepts tasks specified in PRISM. Experiments on standard grid-world benchmarks demonstrate that open-source LLM policies can be formally verified, although their performance remains below that of deep reinforcement learning baselines. This is the first work to systematically subject LLM-based decision policies to rigorous, automated formal verification, thereby substantially enhancing their trustworthiness and interpretability.

📝 Abstract
We introduce a tool for rigorous and automated verification of large language model (LLM)-based policies in memoryless sequential decision-making tasks. Given a Markov decision process (MDP) representing the sequential decision-making task, an LLM policy, and a safety requirement expressed as a PCTL formula, our approach incrementally constructs only the reachable portion of the MDP, guided by the LLM's chosen actions. Each state is encoded as a natural-language prompt, the LLM's response is parsed into an action, and the successor states reachable under the policy are expanded. The resulting formal model is checked with Storm to determine whether the policy satisfies the specified safety property. In experiments on standard grid-world benchmarks, we show that open-source LLMs accessed via Ollama can be verified when deterministically seeded, but generally underperform deep reinforcement learning baselines. Our tool natively integrates with Ollama and supports PRISM-specified tasks, enabling continuous benchmarking in user-specified sequential decision-making tasks and laying a practical foundation for formally verifying increasingly capable LLMs.
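The construction loop described in the abstract (state → prompt → LLM response → parsed action → successor expansion) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: `llm_policy` stands in for an Ollama call with a deterministic stub, and the toy "MDP" is a hypothetical five-cell corridor.

```python
from collections import deque

def state_to_prompt(s):
    """Encode a state as a natural-language prompt (the paper's idea)."""
    return f"You are at cell {s} of a corridor with cells 0..4. Reply 'left' or 'right'."

def llm_policy(prompt):
    """Stub for the Ollama-served LLM; a real call would query the model.
    Deterministic seeding (as in the paper) makes the policy memoryless
    and reproducible; here the stub simply always answers 'right'."""
    return "right"

def parse_action(text):
    """Parse the model's free-text reply into a discrete action."""
    return "right" if "right" in text.lower() else "left"

def successors(s, a):
    """Transition function of the toy corridor MDP (deterministic here)."""
    return [max(0, s - 1)] if a == "left" else [min(4, s + 1)]

def build_reachable_fragment(init=2):
    """Incrementally construct only the fragment of the MDP that the
    LLM policy can actually reach, as in the paper's approach."""
    frontier, seen, transitions = deque([init]), {init}, {}
    while frontier:
        s = frontier.popleft()
        action = parse_action(llm_policy(state_to_prompt(s)))
        transitions[s] = (action, successors(s, action))
        for t in transitions[s][1]:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return transitions  # this fragment would then be exported for Storm
```

With the always-"right" stub starting at cell 2, only cells 2, 3, and 4 are ever constructed; the unsafe cell 0 never enters the model, which is exactly the saving the reachable-fragment construction aims for.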
Problem

Research questions and friction points this paper is trying to address.

Verifying LLM policies in memoryless sequential decision-making tasks
Automated safety verification using PCTL formulas and MDPs
Benchmarking LLM performance against reinforcement learning baselines
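To make the verification inputs concrete, here is a hypothetical PRISM-style model and PCTL safety query of the kind the tool consumes. This is an illustrative sketch, not one of the paper's benchmarks: a one-dimensional corridor with a slip probability, an `"unsafe"` cell, and a `"goal"` cell.

```prism
// Illustrative PRISM sketch (not the paper's benchmark model)
mdp

module corridor
  x : [0..4] init 2;
  // each move succeeds with probability 0.9, otherwise the agent stays put
  [right] x < 4 -> 0.9 : (x'=x+1) + 0.1 : (x'=x);
  [left]  x > 0 -> 0.9 : (x'=x-1) + 0.1 : (x'=x);
endmodule

label "unsafe" = (x=0);
label "goal"   = (x=4);
```

A PCTL safety requirement over this model might read `P>=0.95 [ !"unsafe" U "goal" ]` (reach the goal while avoiding the unsafe cell, with probability at least 0.95); Storm checks such a property against the reachable fragment induced by the LLM policy.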
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated verification tool for LLM-based policies
Incrementally constructs reachable MDP states from LLM actions
Integrates with Storm model checker for safety validation