Model-Agnostic Policy Explanations with Large Language Models

📅 2025-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the explainability challenge for black-box agents (e.g., robots) in human–agent collaborative settings, where internal model access is unavailable and trustworthy natural-language explanations of behavior are required. Method: a model-agnostic local surrogate framework that constructs lightweight, locally linear surrogate models from observed state–action sequences and uses them to guide prompt-driven large language models (LLMs) in generating semantic explanations; behavior-trajectory sampling and explanation-alignment steps are introduced to mitigate LLM hallucination. Contribution/Results: experiments show significant gains in explanation correctness and comprehensibility over baselines, validated by both LLM-based and human evaluations. A user study shows a 23.6% increase in human accuracy at predicting the agent's subsequent actions, supporting trustworthy human–agent collaboration.

📝 Abstract
Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based only on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior.
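The abstract's core idea (fit a locally interpretable surrogate to observed state–action pairs, then use it to ground an LLM prompt) can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn; the feature names, proximity weighting, and prompt format are invented for the example and are not the paper's actual implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observed log: states are feature vectors, actions are discrete.
feature_names = ["dist_to_goal", "obstacle_ahead"]
states = rng.normal(size=(200, 2))
actions = (states[:, 0] - 2.0 * states[:, 1] > 0).astype(int)  # 1 = advance, 0 = turn

# Locally weighted linear surrogate: weight observations by proximity to a
# query state, then fit an interpretable linear classifier on them.
query = np.array([0.5, -0.2])
weights = np.exp(-np.linalg.norm(states - query, axis=1) ** 2)
surrogate = LogisticRegression().fit(states, actions, sample_weight=weights)

# Turn the surrogate's coefficients into grounded facts for an LLM prompt,
# constraining the explanation to what the local model actually supports.
facts = [
    f"{name}: weight {coef:+.2f}"
    for name, coef in zip(feature_names, surrogate.coef_[0])
]
prompt = (
    "Explain the agent's likely next action using ONLY these local "
    "feature influences:\n" + "\n".join(facts)
)
print(prompt)
```

The key design choice, per the abstract, is that the LLM never sees the agent's internals; it is prompted only with facts derived from the surrogate, which is what limits hallucination.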
Problem

Research questions and friction points this paper is trying to address.

Explaining black-box agent behavior without model access
Generating natural language explanations from observed states
Improving human understanding of agent actions via explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic policy explanations via observations
Interpretable surrogate model guides LLM generation
Reduces hallucination in natural language explanations
Xi-Jia Zhang
Georgia Institute of Technology, Atlanta, GA, USA
Yue Guo
Carnegie Mellon University, Pittsburgh, PA, USA
Shufei Chen
Georgia Institute of Technology, Atlanta, GA, USA
Simon Stepputtis
Virginia Tech
Artificial Intelligence, Natural Language Processing, Robotics, Human-Robot Interaction
Matthew Gombolay
Georgia Institute of Technology, Atlanta, GA, USA
Katia Sycara
Professor, School of Computer Science, Carnegie Mellon University
Artificial Intelligence, Multi-Robot Systems, Human-Robot Interaction, Multi-Agent Systems, Semantic Web
Joseph Campbell
Purdue University
Machine Learning, Robotics, Explainable AI