🤖 AI Summary
This study addresses the limited interpretability of reinforcement learning (RL) policies by proposing an evolutionary optimization framework built around a surrogate fitness function. The method generates informative, diverse policy demonstrations via initial-state perturbations, jointly modeling local diversity, behavioral certainty, and global population diversity, and evaluates the results with a multidimensional suite comprising the optimality gap, fidelity interquartile mean (IQM), fitness component analysis, and trajectory visualization. It constitutes a first systematic attempt to quantify and enhance RL policy interpretability. Experiments demonstrate that the framework significantly outperforms random and ablated baselines in discrete gridworld environments. In continuous control tasks, it provides critical behavioral insights for early-stage policies while enabling high-fidelity refinement of mature policies. The approach thus bridges a key gap between policy performance and human-understandable behavioral rationale, advancing both interpretability assessment and optimization in deep RL.
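As a concrete illustration, here is a minimal sketch of how such a joint surrogate fitness might be computed. Everything here is an assumption for illustration only: the name `surrogate_fitness`, the `(state, action, entropy)` step format, and the weights `w_local`, `w_cert`, `w_global` are hypothetical and not the paper's exact formulation.

```python
import numpy as np

def surrogate_fitness(traj, population, w_local=1.0, w_cert=1.0, w_global=1.0):
    """Score one candidate trajectory against the whole population.

    `traj` is a list of (state, action, entropy) steps; `population` is the
    list of all candidate trajectories. Interfaces and weights are assumed,
    not taken from the paper.
    """
    # Local diversity: fraction of distinct states visited in the trajectory.
    states = [tuple(np.ravel(s)) for s, _, _ in traj]
    local_diversity = len(set(states)) / len(states)

    # Behavioral certainty: 1 minus the mean per-step policy entropy,
    # assuming entropies are normalized to [0, 1]. Low entropy means the
    # policy acts decisively, which makes demonstrations easier to read.
    certainty = 1.0 - float(np.mean([h for _, _, h in traj]))

    # Global population diversity: mean distance from this trajectory's
    # start state to the start states of the other candidates.
    start = np.ravel(traj[0][0])
    others = [np.ravel(t[0][0]) for t in population if t is not traj]
    global_div = (float(np.mean([np.linalg.norm(start - o) for o in others]))
                  if others else 0.0)

    return w_local * local_diversity + w_cert * certainty + w_global * global_div
```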
📝 Abstract
We employ an evolutionary optimization framework that perturbs initial states to generate informative and diverse policy demonstrations. A joint surrogate fitness function guides the optimization by combining local diversity, behavioral certainty, and global population diversity. To assess demonstration quality, we apply a set of evaluation metrics, including the reward-based optimality gap, fidelity interquartile means (IQMs), fitness composition analysis, and trajectory visualizations. Hyperparameter sensitivity is also examined to better understand the dynamics of trajectory optimization. Our findings demonstrate that optimizing trajectory selection via surrogate fitness metrics significantly improves the interpretability of RL policies in both discrete and continuous environments. In gridworld domains, evaluations reveal significantly higher demonstration fidelity than random and ablated baselines achieve. In continuous control, the proposed framework offers valuable insights, particularly for early-stage policies, while fidelity-based optimization proves more effective for mature policies. By refining and systematically analyzing surrogate fitness functions, this study advances the interpretability of RL models. The proposed improvements provide deeper insight into RL decision-making, benefiting applications in safety-critical and explainability-focused domains.
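To make the overall procedure concrete, the following is a minimal sketch of an evolutionary loop that perturbs initial states, rolls out a fixed policy, and selects demonstrations by surrogate fitness. The `env.reset_to`, `policy.act`, and `env.step` interfaces, along with all parameter names, are hypothetical placeholders rather than the paper's implementation; `fitness_fn` plays the role of the `surrogate_fitness` sketch above.

```python
import numpy as np

def evolve_demonstrations(env, policy, fitness_fn, base_state,
                          pop_size=16, generations=50, sigma=0.1, seed=0):
    """Evolve a population of start-state perturbations, keeping trajectories
    that score highest under the surrogate fitness. All interfaces here
    (env.reset_to, policy.act returning an action and its entropy, env.step
    returning the next state and a done flag) are assumed placeholders."""
    rng = np.random.default_rng(seed)
    base_state = np.ravel(np.asarray(base_state, dtype=float))
    # Initial population: Gaussian perturbations of the nominal start state.
    population = base_state + sigma * rng.standard_normal((pop_size, base_state.size))

    def rollout(start_state):
        # Roll out the fixed policy from a perturbed initial state.
        state, traj, done = env.reset_to(start_state), [], False
        while not done:
            action, entropy = policy.act(state)   # assumed interface
            traj.append((state, action, entropy))
            state, done = env.step(action)        # assumed interface
        return traj

    trajs = [rollout(s) for s in population]
    for _ in range(generations):
        scores = np.array([fitness_fn(t, trajs) for t in trajs])
        # (mu + lambda)-style selection: keep the top half of start states,
        # refill the population by Gaussian mutation of the survivors.
        elite = np.argsort(scores)[-(pop_size // 2):]
        parents = population[elite]
        children = parents + sigma * rng.standard_normal(parents.shape)
        population = np.concatenate([parents, children])
        trajs = [rollout(s) for s in population]
    return trajs
```

In a discrete gridworld, the Gaussian perturbation step would instead sample or swap candidate start cells, but the rollout, score, select, and mutate structure stays the same.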