🤖 AI Summary
To address the reliance on manual annotation and domain-specific prior knowledge for semantic interpretability in reinforcement learning (RL), this paper proposes the first vision-language model (VLM)-driven, end-to-end interpretable RL framework. The method leverages a pre-trained VLM to autonomously discover semantic concepts in unknown environments, employs a lightweight convolutional network for efficient feature extraction, and maps semantic features to transparent, verifiable action policies via an Interpretable Control Tree (ICT). Crucially, the VLM and ICT are optimized jointly, eliminating the need for handcrafted concept definitions or labeled data. Evaluated across multiple unseen simulated environments, the approach improves average policy performance by 12.7%, enables natural-language verification of 92% of action decisions, and reduces inference latency by 47× compared to direct VLM prompting.
📝 Abstract
Semantic interpretability in reinforcement learning (RL) enables transparency, accountability, and safer deployment by making the agent's decisions understandable and verifiable. Achieving it, however, requires a feature space composed of human-understandable concepts, which have traditionally relied on human specification and fail to generalize to unseen environments. In this work, we introduce Semantically Interpretable Reinforcement Learning with Vision-Language Models Empowered Automation (SILVA), an automated framework that leverages pre-trained vision-language models (VLMs) for semantic feature extraction and interpretable tree-based models for policy optimization. SILVA first queries a VLM to identify relevant semantic features for an unseen environment, then extracts these features from the environment. Finally, it trains an Interpretable Control Tree via RL, mapping the extracted features to actions in a transparent and interpretable manner. To address the computational inefficiency of extracting features directly with VLMs, we develop a feature extraction pipeline that generates a dataset for training a lightweight convolutional network, which is subsequently used during RL. By leveraging VLMs to automate tree-based RL, SILVA removes the reliance on human annotation previously required by interpretable models while also overcoming the inability of VLMs alone to generate valid robot policies, enabling semantically interpretable reinforcement learning without a human in the loop.
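To make the three-stage pipeline concrete, here is a minimal, hypothetical sketch: query a VLM once for semantic feature names, extract those features with a lightweight stand-in for the distilled network, and map them to an action with a small, verbalizable control tree. All names (`query_vlm_for_features`, `lightweight_extractor`, `InterpretableControlTree`) and the CartPole-style features are illustrative assumptions, not the paper's actual API or models.

```python
def query_vlm_for_features(env_description):
    """Stage 1 stand-in: a real system would prompt a pre-trained VLM
    with environment frames; here we return fixed concept names."""
    return ["pole_angle", "cart_velocity"]

def lightweight_extractor(observation, feature_names):
    """Stage 2 stand-in for the distilled convolutional network: a real
    extractor maps pixels to concept values; here we read a dict."""
    return {name: observation[name] for name in feature_names}

class InterpretableControlTree:
    """Stage 3: a one-split control tree whose decisions can be verbalized."""
    def __init__(self, feature, threshold, left_action, right_action):
        self.feature = feature
        self.threshold = threshold
        self.left_action = left_action    # taken when feature <= threshold
        self.right_action = right_action  # taken when feature > threshold

    def act(self, features):
        if features[self.feature] <= self.threshold:
            return self.left_action
        return self.right_action

    def explain(self, features):
        """Natural-language justification for the chosen action."""
        action = self.act(features)
        op = "<=" if action == self.left_action else ">"
        return (f"chose '{action}' because {self.feature}="
                f"{features[self.feature]} {op} {self.threshold}")

# Wire the stages together for one decision step.
names = query_vlm_for_features("a cart balancing a pole")
obs = {"pole_angle": 0.12, "cart_velocity": -0.3}
feats = lightweight_extractor(obs, names)
tree = InterpretableControlTree("pole_angle", 0.0, "push_left", "push_right")
print(tree.act(feats))       # push_right
print(tree.explain(feats))
```

In SILVA the tree parameters are learned via RL rather than hand-set as above, and the extractor is a trained convolutional network; the sketch only shows why each decision remains inspectable end to end.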