🤖 AI Summary
This work addresses the explainability challenge for black-box agents (e.g., robots) in human–agent collaboration, where access to the agent's internal model is unavailable yet trustworthy natural-language explanations of its behavior are required.
Method: We propose the first model-agnostic local surrogate modeling framework for this setting: lightweight, locally linear surrogate models are constructed from observed state–action sequences, and prompt-driven large language models (LLMs) then generate semantic explanations from them. Behavior-trajectory sampling and explanation-alignment training are introduced to mitigate LLM hallucination.
Contribution/Results: Experiments show significant gains in explanation correctness and comprehensibility over baselines, validated by both LLM-based and human evaluations. In a user study, participants given our explanations predicted the agent's subsequent actions 23.6% more accurately. The framework establishes a novel paradigm for trustworthy, standards-compliant human–agent collaboration.
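The locally linear surrogate at the core of the method can be illustrated with a minimal sketch. Everything here is an assumption for exposition, not the paper's actual implementation: the function name, the Gaussian distance weighting, and the toy linear policy are all hypothetical. The idea is to fit a weighted least-squares linear model to observed state–action pairs near a query state; the resulting per-feature weights are the interpretable signal that could be handed to an LLM prompt.

```python
import numpy as np

def local_linear_surrogate(states, actions, query, bandwidth=1.0):
    """Fit a locally weighted linear surrogate of the agent's policy
    around `query` (illustrative sketch, not the paper's implementation)."""
    # Add a bias column so the surrogate can model an offset.
    X = np.hstack([states, np.ones((len(states), 1))])
    # Gaussian kernel: observations near the query state count more.
    w = np.exp(-np.sum((states - query) ** 2, axis=1) / (2 * bandwidth ** 2))
    sw = np.sqrt(w)
    # Weighted least squares via row/target scaling.
    theta, *_ = np.linalg.lstsq(X * sw[:, None], actions * sw, rcond=None)
    return theta  # per-feature weights; last entry is the bias

# Toy check: an agent whose (scalar) action is exactly 2*x0 - x1
# should be recovered by the local surrogate.
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 2))
A = 2 * S[:, 0] - S[:, 1]
theta = local_linear_surrogate(S, A, query=np.zeros(2))
```

The recovered weights (here roughly `[2, -1, 0]`) could then be verbalized in an LLM prompt, e.g. "near this state, feature x0 strongly increases the chosen action," which is one plausible way to ground the generated explanation and limit hallucination.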
📝 Abstract
Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based only on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior.