Textual Explanations and Their Evaluations for Reinforcement Learning Policy

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited interpretability of reinforcement learning (RL) policies and the lack of correctness guarantees and systematic evaluation in existing textual explanation methods. The authors propose an interpretable RL framework that uses large language models and state clustering to generate initial textual explanations, improves their semantic accuracy with expert knowledge and automated predicate generation, and, for the first time, translates these textual explanations into verifiable, transparent rules. The framework incorporates two optimization mechanisms to mitigate explanatory conflicts and introduces the first quantitative evaluation benchmark designed specifically for textual explanations in RL. Experiments across three open-source environments and a real-world telecommunications scenario show that the generated rules significantly outperform existing approaches in task performance while remaining reproducible and practically useful.

📝 Abstract
Understanding a Reinforcement Learning (RL) policy is crucial for ensuring that autonomous agents behave according to human expectations. This goal can be achieved using Explainable Reinforcement Learning (XRL) techniques. Although textual explanations are easily understood by humans, ensuring their correctness remains a challenge, and evaluations in the state of the art remain limited. We present a novel XRL framework for generating textual explanations, converting them into a set of transparent rules, improving their quality, and evaluating them. Expert knowledge can be incorporated into this framework, and an automatic predicate generator is also proposed to determine the semantic information of a state. Textual explanations are generated using a Large Language Model (LLM) and a clustering technique to identify frequent conditions. These conditions are then converted into rules to evaluate their properties, fidelity, and performance in the deployed environment. Two refinement techniques are proposed to improve the quality of explanations and reduce conflicting information. Experiments were conducted in three open-source environments to enable reproducibility, and in a telecom use case to evaluate the industrial applicability of the proposed XRL framework. This framework addresses the limitations of an existing method, Autonomous Policy Explanation, and the generated transparent rules can achieve satisfactory performance on certain tasks. This framework also enables a systematic and quantitative evaluation of textual explanations, providing valuable insights for the XRL field.
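The core pipeline in the abstract (semantic predicates over states, frequent conditions per action, conversion into transparent IF-THEN rules) can be sketched as follows. This is an illustrative approximation only: the predicate names, the CartPole-style state layout, and the simple frequency threshold are assumptions for the sketch, not the paper's actual predicate generator, clustering method, or rule format.

```python
from collections import defaultdict

# Hypothetical predicates giving semantic meaning to a raw state vector
# (in the framework these come from experts or an automatic predicate
# generator; here s is assumed CartPole-like: [pos, vel, angle, ang_vel]).
PREDICATES = {
    "pole_leaning_right": lambda s: s[2] > 0.0,
    "cart_moving_right":  lambda s: s[1] > 0.0,
    "pole_falling_fast":  lambda s: abs(s[3]) > 1.0,
}

def extract_rules(trajectories, min_support=0.8):
    """For each action, keep the predicates that hold in at least
    `min_support` of the states where the policy chose that action,
    and return them as a transparent rule per action."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for state, action in trajectories:
        totals[action] += 1
        for name, pred in PREDICATES.items():
            if pred(state):
                counts[action][name] += 1
    rules = {}
    for action, total in totals.items():
        rules[action] = [name for name, c in counts[action].items()
                         if c / total >= min_support]
    return rules

# Tiny synthetic trajectory: action 1 ("push right") when the pole leans right.
traj = [((0.0, 0.1, 0.05, 0.2), 1),
        ((0.0, -0.2, 0.07, 0.1), 1),
        ((0.0, 0.3, -0.04, -0.1), 0)]
rules = extract_rules(traj)
# rules maps each action to its frequent conditions,
# e.g. {1: ["pole_leaning_right"], 0: ["cart_moving_right"]}
```

Because the resulting rules are explicit predicate conjunctions, they can be executed directly in the environment and compared against the original policy, which is what makes the quantitative fidelity and performance evaluation described in the abstract possible.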
Problem

Research questions and friction points this paper is trying to address.

Explainable Reinforcement Learning
Textual Explanations
Policy Understanding
Explanation Evaluation
Autonomous Agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable Reinforcement Learning
Textual Explanations
Rule Extraction
Large Language Model
Policy Evaluation