BASIL: Best-Action Symbolic Interpretable Learning for Evolving Compact RL Policies

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of interpretability and verifiability of deep reinforcement learning (DRL) policies in safety-critical applications, this paper proposes an online evolutionary framework that jointly optimizes behavioral diversity and policy simplicity via quality-diversity (QD) optimization and complexity-aware adaptive constraints. The method represents policies in symbolic predicate logic, enabling symbolic abstraction of state variables and online synthesis of compact, human-readable rule-based controllers. It provides, for the first time, precise control over the number of logical rules and supports dynamic online evolution. Evaluated on the CartPole, MountainCar, and Acrobot benchmarks, the generated policies average fewer than 15 rules while matching the performance of DRL baselines, achieving a competitive balance between strong interpretability and task performance in safety-critical RL settings.
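The core representation described above, a policy as an ordered list of symbolic predicates over state variables, can be illustrated with a minimal sketch. All names here (`make_policy`, the rule thresholds, the state keys) are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of a rule-list policy: an ordered list of
# (predicate, action) pairs over named state variables; the first rule
# whose predicate holds decides the action, else a default fires.
from typing import Callable, Dict, List, Tuple

Rule = Tuple[Callable[[Dict[str, float]], bool], int]

def make_policy(rules: List[Rule], default_action: int) -> Callable[[Dict[str, float]], int]:
    """Return a controller that fires the first matching rule."""
    def policy(state: Dict[str, float]) -> int:
        for predicate, action in rules:
            if predicate(state):
                return action
        return default_action
    return policy

# Illustrative CartPole-like rules (thresholds invented for the example):
# push the cart in the direction the pole is leaning or falling.
cartpole_rules: List[Rule] = [
    (lambda s: s["pole_angle"] > 0.05, 1),    # leaning right -> push right
    (lambda s: s["pole_angle"] < -0.05, 0),   # leaning left  -> push left
    (lambda s: s["pole_velocity"] > 0.0, 1),  # falling right -> push right
]
policy = make_policy(cartpole_rules, default_action=0)

print(policy({"pole_angle": 0.1, "pole_velocity": 0.0}))   # -> 1
print(policy({"pole_angle": -0.1, "pole_velocity": 0.0}))  # -> 0
print(policy({"pole_angle": 0.0, "pole_velocity": 0.5}))   # -> 1
```

Every decision such a controller makes is traceable to one human-readable rule, which is what makes the representation verifiable.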

📝 Abstract
The quest for interpretable reinforcement learning is a grand challenge for the deployment of autonomous decision-making systems in safety-critical applications. Modern deep reinforcement learning approaches, while powerful, tend to produce opaque policies that compromise verification, reduce transparency, and impede human oversight. To address this, we introduce BASIL (Best-Action Symbolic Interpretable Learning), a systematic approach for generating symbolic, rule-based policies via online evolutionary search with quality-diversity (QD) optimization. BASIL represents policies as ordered lists of symbolic predicates over state variables, ensuring full interpretability and tractable policy complexity. By using a QD archive, the proposed method encourages behavioral and structural diversity among top-performing solutions, while a complexity-aware fitness encourages the synthesis of compact representations. The evolutionary system supports exact constraints on rule count and adapts them to balance transparency with expressiveness. Empirical comparisons on three benchmark tasks (CartPole-v1, MountainCar-v0, and Acrobot-v1) show that BASIL consistently synthesizes compact, interpretable controllers with performance comparable to deep reinforcement learning baselines. This article thus introduces a new interpretable policy synthesis method that combines symbolic expressiveness, evolutionary diversity, and online learning in a unifying framework.
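The QD archive mentioned in the abstract typically works like MAP-Elites: candidate solutions are binned by a behavior descriptor, and each bin keeps only its best-fitness occupant, preserving diverse high performers rather than a single champion. The sketch below is a generic MAP-Elites-style archive under assumed details (bin layout, descriptor range), not BASIL's actual archive:

```python
# Minimal MAP-Elites-style QD archive sketch (structure assumed; the
# paper's archive details are not reproduced here). Solutions are binned
# by a behavior descriptor; each cell keeps its best-fitness occupant.
from typing import Any, Dict, Tuple

class QDArchive:
    def __init__(self, bins_per_dim: int, low: float, high: float):
        self.bins_per_dim = bins_per_dim
        self.low, self.high = low, high
        # cell key -> (fitness, solution)
        self.cells: Dict[Tuple[int, ...], Tuple[float, Any]] = {}

    def _key(self, descriptor: Tuple[float, ...]) -> Tuple[int, ...]:
        """Discretize a descriptor into a grid-cell index per dimension."""
        span = self.high - self.low
        return tuple(
            min(self.bins_per_dim - 1,
                max(0, int((d - self.low) / span * self.bins_per_dim)))
            for d in descriptor
        )

    def add(self, solution: Any, descriptor: Tuple[float, ...], fitness: float) -> bool:
        """Insert if the target cell is empty or the newcomer is fitter."""
        key = self._key(descriptor)
        if key not in self.cells or fitness > self.cells[key][0]:
            self.cells[key] = (fitness, solution)
            return True
        return False

archive = QDArchive(bins_per_dim=10, low=0.0, high=1.0)
archive.add("policy_a", (0.25, 0.8), fitness=100.0)
archive.add("policy_b", (0.26, 0.81), fitness=120.0)  # same cell, fitter -> replaces
print(len(archive.cells))  # -> 1
```

Sampling parents from distinct cells is what drives behavioral and structural diversity during the evolutionary search.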
Problem

Research questions and friction points this paper is trying to address.

Develops interpretable reinforcement learning policies for safety-critical applications
Addresses opacity in deep RL policies via symbolic rule-based solutions
Balances policy complexity and expressiveness using evolutionary diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses symbolic predicates for interpretable policies
Employs quality-diversity evolutionary optimization
Balances complexity with compact rule-based representations
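The complexity/performance trade-off in the last bullet is commonly realized as a fitness term that penalizes policies exceeding a rule budget. The function below is a plausible sketch of that idea under assumed parameters (`max_rules`, `penalty`); the paper's exact formulation is not reproduced here:

```python
# Hypothetical complexity-aware fitness: episode return minus a linear
# penalty for every rule beyond a target budget (names and constants
# are illustrative, not taken from the paper).
def complexity_aware_fitness(mean_return: float, num_rules: int,
                             max_rules: int = 15, penalty: float = 10.0) -> float:
    """Penalize policies that exceed the rule budget; leave others untouched."""
    excess = max(0, num_rules - max_rules)
    return mean_return - penalty * excess

print(complexity_aware_fitness(500.0, 12))  # within budget  -> 500.0
print(complexity_aware_fitness(500.0, 18))  # 3 rules over   -> 470.0
```

A hard variant would simply reject candidates with `num_rules > max_rules`, which matches the summary's claim of exact control over rule count.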