🤖 AI Summary
This work addresses the growing complexity of reinforcement learning models, which often compromises interpretability, and the limitations of existing explanation methods, which rely on handcrafted prototypes and struggle to balance interpretability with policy performance. To overcome these challenges, the paper proposes a prototype analysis framework that requires no expert knowledge and, for the first time in reinforcement learning, enables automatic selection of reference prototypes. The approach constructs Prototype-Wrapper Networks (PW-Nets) via principal prototype analysis on manifolds, adaptively identifying optimal prototypes to explain policy decisions. Experimental results on standard Gym environments show that the method achieves performance comparable to the original black-box policies while substantially improving interpretability, eliminating the need for manual intervention or domain-specific expertise.
📝 Abstract
Recent years have witnessed the widespread adoption of reinforcement learning (RL), from solving real-time games to fine-tuning large language models with human preference data, significantly improving alignment with user expectations. However, as model complexity grows, the interpretability of these systems becomes increasingly challenging. While numerous explainability methods have been developed for computer vision and natural language processing to elucidate both local and global reasoning patterns, their application to RL remains limited. Direct extensions of these methods often struggle to maintain the delicate balance between interpretability and performance in RL settings. Prototype-Wrapper Networks (PW-Nets) have recently shown promise in bridging this gap by enhancing explainability in RL domains without sacrificing the efficiency of the original black-box models. However, these methods typically require manually defined reference prototypes, which often necessitates expert domain knowledge. In this work, we propose a method that removes this dependency by automatically selecting optimal prototypes from the available data. Preliminary experiments on standard Gym environments demonstrate that our approach matches the performance of existing PW-Nets while remaining competitive with the original black-box models.
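To make the prototype-wrapping idea concrete, here is a minimal sketch of the general PW-Net pattern: a wrapper that scores a policy's latent representation by its similarity to a small set of reference prototypes and derives actions from those similarities. This is an illustrative toy, not the paper's implementation; the prototypes and action weights are hypothetical placeholders (in the paper's method they would be selected automatically, and in prior PW-Nets they are handcrafted), and the distance-based similarity kernel is one common choice from the prototype-network literature.

```python
import numpy as np

class PWNetSketch:
    """Toy prototype wrapper (illustrative only, not the paper's code).

    Wraps a black-box policy's latent features: each decision is explained
    as "this state looks like prototype k", since actions are computed
    purely from prototype similarities.
    """

    def __init__(self, prototypes, action_weights):
        self.prototypes = np.asarray(prototypes, dtype=float)        # (P, D) latents
        self.action_weights = np.asarray(action_weights, dtype=float)  # (P, A) map

    def similarities(self, z):
        # Squared distances from the latent z to every prototype; a
        # log-ratio kernel turns small distance into high similarity.
        d2 = np.sum((self.prototypes - np.asarray(z, dtype=float)) ** 2, axis=1)
        return np.log((d2 + 1.0) / (d2 + 1e-4))

    def act(self, z):
        # Action scores are a linear function of prototype similarities,
        # so the chosen action is attributable to the nearest prototypes.
        s = self.similarities(z)
        return int(np.argmax(s @ self.action_weights))
```

For example, with two prototype latents `[0, 0]` and `[1, 1]` and an identity similarity-to-action map, a latent near the first prototype is routed to action 0 and one near the second to action 1, and the similarity vector itself serves as the explanation.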