Combining LLMs with Logic-Based Framework to Explain MCTS

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of trust in Monte Carlo Tree Search (MCTS) for sequential planning, which stems from its poor interpretability, this paper introduces the first natural-language explanation framework integrating Computation Tree Logic (CTL) with large language models (LLMs). The method dynamically encodes MCTS decision paths into CTL formulas and uses these formal constraints to guide LLMs in generating post-hoc explanations that respect environmental dynamics and stochastic control constraints, while supporting open-ended queries and joint reasoning with Markov Decision Process (MDP) domain knowledge. The key innovation is the deep joint modeling of CTL-based formal logic and LLMs, which ensures explanation verifiability, domain-knowledge alignment, and factual consistency. Quantitative evaluation demonstrates significant improvements in explanation accuracy, outperforming baselines in both logical fidelity and semantic consistency.
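As an illustration of the path-encoding idea, the sketch below turns an MCTS decision path into a nested CTL-style formula string. The step representation, operator syntax, and function name are all hypothetical; the paper's actual encoding may differ.

```python
# Hypothetical sketch: encode an MCTS decision path as a nested CTL formula.
# A path is a sequence of (action, resulting-state) steps; each step becomes an
# EX ("exists next") operator wrapping the remainder of the path, so the whole
# formula asserts the path is realizable under the transition dynamics.

def path_to_ctl(path):
    """Encode [(action, state), ...] as nested EX formulas, innermost first."""
    formula = "true"
    for action, state in reversed(path):
        formula = f"EX[{action}]({state} & {formula})"
    return formula

path = [("move_right", "s1"), ("pick_up", "s2")]
print(path_to_ctl(path))
# EX[move_right](s1 & EX[pick_up](s2 & true))
```

A formula built this way can then be passed to an LLM prompt as a formal constraint on what the explanation may claim about reachable states.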

📝 Abstract
In response to the lack of trust in Artificial Intelligence (AI) for sequential planning, we design a Computation Tree Logic-guided, large language model (LLM)-based natural language explanation framework for the Monte Carlo Tree Search (MCTS) algorithm. MCTS is often considered challenging to interpret due to the complexity of its search trees, but our framework is flexible enough to handle a wide range of free-form post-hoc queries and knowledge-based inquiries centered on MCTS and the Markov Decision Process (MDP) of the application domain. By transforming user queries into logic and variable statements, our framework ensures that the evidence obtained from the search tree remains factually consistent with the underlying environmental dynamics and any constraints of the actual stochastic control process. We evaluate the framework rigorously through quantitative assessments, where it demonstrates strong performance in terms of accuracy and factual consistency.

Problem

Research questions and friction points this paper is trying to address.

Enhancing trust in AI for sequential planning via explainable MCTS
Providing natural language explanations for complex MCTS search trees
Ensuring factual consistency in explanations with environmental dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LLMs with a logic-based framework for explanations
Transforms user queries into logic statements for consistency
Handles free-form queries about MCTS and MDP
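The query-to-logic step above can be sketched as follows: a free-form "why A over B?" question becomes a variable statement evaluated directly against search-tree statistics, so any quoted evidence is grounded in the tree. The data layout, function name, and Q-value comparison are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: turn a "why action A over B at state s?" query into a
# logic statement checked against MCTS node statistics (visit counts and
# accumulated values), so the explanation's evidence is factually consistent
# with the search tree rather than generated freely by the LLM.

def query_to_evidence(tree_stats, state, chosen, alternative):
    """Return a logic statement and its truth value from node visit/value data."""
    def q(action):  # mean action value at this state
        node = tree_stats[(state, action)]
        return node["value"] / node["visits"]
    statement = f"Q({state},{chosen}) > Q({state},{alternative})"
    return statement, q(chosen) > q(alternative)

stats = {
    ("s0", "left"):  {"value": 12.0, "visits": 10},
    ("s0", "right"): {"value":  4.5, "visits":  9},
}
stmt, holds = query_to_evidence(stats, "s0", "left", "right")
# stmt == "Q(s0,left) > Q(s0,right)", holds == True
```

Only statements verified this way would be surfaced to the LLM, which then phrases them as a natural-language explanation.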