Neural DNF-MT: A Neuro-symbolic Approach for Learning Interpretable and Editable Policies

📅 2025-01-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Deep reinforcement learning (DRL) policies are typically opaque and difficult to modify. To address this, we propose Neural DNF-MT, a novel end-to-end trainable framework that yields interpretable, logically derivable, and human-editable DRL policies. Our approach introduces three key contributions: (1) a differentiable Disjunctive Normal Form (DNF) architecture compatible with Actor-Critic training; (2) a faithful, direct translation from the neural policy to deterministic or probabilistic logic programs; and (3) support for human-driven policy editing via logical rules, coupled with a re-injection mechanism that backpropagates edits to update neural parameters. Evaluated on diverse partially observable tasks, Neural DNF-MT matches the performance of black-box DRL baselines while producing highly readable, formally verifiable logical policies. Crucially, edited policies demonstrate effective zero-shot transfer and adaptation, validating the framework's practical utility for transparent, controllable, and maintainable RL deployment.
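The second contribution above, a direct translation from neural weights to a logic program, can be illustrated with a hedged sketch: threshold a conjunction's learned literal weights into a readable rule. The function name, threshold value, and atom names below are illustrative assumptions, not taken from the paper.

```python
def weights_to_rule(weights, atoms, head, threshold=0.5):
    """Illustrative translation of one conjunction's weights into a rule.

    Strongly positive weights become positive literals, strongly
    negative weights become negated literals; weights near zero are
    dropped from the rule body. (Hypothetical sketch, not the paper's
    exact extraction procedure.)
    """
    body = []
    for w, atom in zip(weights, atoms):
        if w > threshold:
            body.append(atom)
        elif w < -threshold:
            body.append(f"not {atom}")
    return f"{head} :- {', '.join(body)}."
```

For example, weights `[0.9, -0.8, 0.1]` over atoms `["wall_ahead", "door_open", "key_held"]` with head `"turn_left"` yield the rule `turn_left :- wall_ahead, not door_open.`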

📝 Abstract
Although deep reinforcement learning has been shown to be effective, the model's black-box nature presents barriers to direct policy interpretation. To address this problem, we propose a neuro-symbolic approach called neural DNF-MT for end-to-end policy learning. The differentiable nature of the neural DNF-MT model enables the use of deep actor-critic algorithms for training. At the same time, its architecture is designed so that trained models can be directly translated into interpretable policies expressed as standard (bivalent or probabilistic) logic programs. Moreover, additional layers can be included to extract abstract features from complex observations, acting as a form of predicate invention. The logic representations are highly interpretable, and we show how the bivalent representations of deterministic policies can be edited and incorporated back into a neural model, facilitating manual intervention and adaptation of learned policies. We evaluate our approach on a range of tasks requiring learning deterministic or stochastic behaviours from various forms of observations. Our empirical results show that our neural DNF-MT model performs at the level of competing black-box methods whilst providing interpretable policies.
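The paper's semi-symbolic layers are not reproduced here; as a minimal sketch of what "differentiable DNF" means, the following uses a product t-norm for AND and a noisy-OR for OR, which are common soft-logic choices assumed for illustration rather than the model's actual parameterisation.

```python
import numpy as np

def soft_and(literals, mask):
    """Differentiable AND: product of the literal probabilities selected
    by the boolean mask (unselected literals contribute 1)."""
    return np.prod(np.where(mask, literals, 1.0), axis=-1)

def soft_or(clauses):
    """Differentiable OR (noisy-OR): 1 minus the product of the
    clause complements."""
    return 1.0 - np.prod(1.0 - clauses, axis=-1)

# Tiny two-rule DNF over three boolean-ish observations (names assumed)
obs = np.array([0.9, 0.1, 0.8])                # [wall_ahead, door_open, key_held]
rule_masks = np.array([[True, False, True],    # rule 1: wall_ahead AND key_held
                       [False, True, False]])  # rule 2: door_open
clause_vals = np.array([soft_and(obs, m) for m in rule_masks])
action_prob = soft_or(clause_vals)             # ≈ 0.748
```

Because every operation is smooth, an actor-critic gradient can flow through `action_prob` to the rule parameters, while thresholding the learned weights recovers a crisp DNF policy.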
Problem

Research questions and friction points this paper is trying to address.

Deep Reinforcement Learning
Transparency
Interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural DNF-MT
Interpretable Reinforcement Learning
Hybrid Neural-Symbolic Processing