🤖 AI Summary
To address the lack of trust in Monte Carlo Tree Search (MCTS) for sequential planning—stemming from its poor interpretability—this paper introduces the first natural-language explanation framework integrating Computation Tree Logic (CTL) with large language models (LLMs). The method dynamically encodes MCTS decision paths into CTL formulas and leverages these formal constraints to guide LLMs in generating post-hoc explanations that respect environmental dynamics and stochastic control constraints, while supporting open-ended queries and joint reasoning with MDP domain knowledge. The key innovation is the deep joint modeling of CTL-based formal logic and LLMs, which ensures explanation verifiability, domain-knowledge alignment, and factual consistency. Quantitative evaluation demonstrates significant improvements in explanation accuracy, outperforming baselines in both logical fidelity and semantic consistency.
📝 Abstract
In response to the lack of trust in Artificial Intelligence (AI) for sequential planning, we design a Computation Tree Logic (CTL)-guided, large language model (LLM)-based natural language explanation framework for the Monte Carlo Tree Search (MCTS) algorithm. MCTS is often considered challenging to interpret due to the complexity of its search trees, but our framework is flexible enough to handle a wide range of free-form post-hoc queries and knowledge-based inquiries centered around MCTS and the Markov Decision Process (MDP) of the application domain. By transforming user queries into logic and variable statements, our framework ensures that the evidence obtained from the search tree remains factually consistent with the underlying environmental dynamics and any constraints in the actual stochastic control process. We evaluate the framework rigorously through quantitative assessments, where it demonstrates strong performance in terms of accuracy and factual consistency.
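To make the idea of "encoding MCTS decision paths into CTL formulas" concrete, here is a minimal illustrative sketch. It is not the paper's actual encoding: the predicate names (`took_*`, `at_*`) and the use of nested existential-next (`EX`) operators are assumptions chosen only to show the general shape of mapping a path in the search tree to a temporal-logic formula.

```python
# Illustrative sketch (hypothetical encoding, not the paper's method):
# map a toy MCTS decision path to a nested CTL "EX" formula, one
# existential-next step per (action, state) pair along the path.

def path_to_ctl(path):
    """Encode a sequence of (action, state) pairs as a nested EX formula.

    For path [("a1", "s1"), ("a2", "s2")] this yields:
        EX(took_a1 & at_s1 & EX(took_a2 & at_s2 & true))
    """
    formula = "true"
    # Build the formula inside-out, starting from the end of the path.
    for action, state in reversed(path):
        formula = f"EX(took_{action} & at_{state} & {formula})"
    return formula

print(path_to_ctl([("left", "s1"), ("up", "s2")]))
# → EX(took_left & at_s1 & EX(took_up & at_s2 & true))
```

A formula of this kind could then be handed to a model checker or embedded in an LLM prompt as a formal constraint on what the generated explanation may assert about the path.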