Self-Explaining Reinforcement Learning for Mobile Network Resource Allocation

πŸ“… 2025-09-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the limited interpretability and low trustworthiness of deep reinforcement learning (DRL) models in mobile network resource allocation, this paper proposes a novel framework integrating self-explaining neural networks (SENNs) with DRL. The method constructs an interpretable policy network in a low-dimensional state space, enabling simultaneous generation of robust local and global explanations without requiring post-hoc interpretability modules. Its key innovation lies in embedding SENNs’ intrinsic interpretability directly into the DRL policy learning process, thereby achieving joint optimization of performance and transparency. Experimental results demonstrate that the proposed approach achieves state-of-the-art (SOTA) performance on resource allocation tasks while providing semantically clear and mathematically verifiable decision rationales. This significantly enhances the trustworthiness and deployability of AI systems in safety- and mission-critical domains.
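The core idea of pairing a SENN with the DRL policy can be sketched in a few lines: the policy output is written as f(x) = θ(x)ᵀh(x), where h(x) are interpretable concepts (in a low-dimensional state space these can simply be the state features) and θ(x) are state-dependent relevance scores produced by a learned network. The following is a minimal numpy sketch of this structure, not the paper's implementation; the state/action dimensions and the linear relevance network are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4    # hypothetical low-dimensional state, e.g. queue lengths, channel quality
N_ACTIONS = 3    # hypothetical resource-allocation choices

# Stand-in relevance network: maps a state to per-feature, per-action
# relevance scores theta(x). A single linear layer here; in practice this
# would be a trained network.
W = rng.normal(size=(STATE_DIM, STATE_DIM * N_ACTIONS))
b = np.zeros(STATE_DIM * N_ACTIONS)

def theta(x):
    """Relevance scores theta(x), shape (STATE_DIM, N_ACTIONS)."""
    return (x @ W + b).reshape(STATE_DIM, N_ACTIONS)

def policy_logits(x):
    """SENN-style self-explaining output f(x) = theta(x)^T h(x),
    with concepts h(x) taken as the raw state features (identity)."""
    return x @ theta(x)   # shape (N_ACTIONS,)

x = rng.normal(size=STATE_DIM)
logits = policy_logits(x)
action = int(np.argmax(logits))

# Local explanation: per-feature contribution to the chosen action.
# The contributions sum exactly to the action's logit, which is what makes
# the explanation mathematically faithful rather than post hoc.
contrib = x * theta(x)[:, action]
assert np.isclose(contrib.sum(), logits[action])
```

Because the explanation is the model's own forward pass, faithfulness holds by construction: no separate interpretability module has to approximate the policy after the fact.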


πŸ“ Abstract
Reinforcement Learning (RL) methods that incorporate deep neural networks (DNNs), though powerful, often lack transparency. Their black-box characteristic hinders interpretability and reduces trustworthiness, particularly in critical domains. To address this challenge in RL tasks, we propose a solution based on Self-Explaining Neural Networks (SENNs) along with explanation extraction methods to enhance interpretability while maintaining predictive accuracy. Our approach targets low-dimensional problems to generate robust local and global explanations of the model's behaviour. We evaluate the proposed method on the resource allocation problem in mobile networks, demonstrating that SENNs can constitute interpretable solutions with competitive performance. This work highlights the potential of SENNs to improve transparency and trust in AI-driven decision-making for low-dimensional tasks. Our approach delivers strong performance on par with existing state-of-the-art methods while providing robust explanations.
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability in reinforcement learning for transparency
Addressing black-box characteristics in deep neural network models
Providing robust explanations for mobile network resource allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Explaining Neural Networks for interpretability
Explanation extraction methods that preserve predictive accuracy
Generates robust local and global explanations
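The global side of the local/global distinction above can be illustrated by aggregating relevance scores over many states: averaging |θ(x)| across a batch ranks which state features drive the policy overall. A minimal numpy sketch under assumed dimensions (the linear relevance map is a stand-in, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(1)

STATE_DIM, N_ACTIONS = 4, 3

# Hypothetical stand-in relevance network: state -> theta(x)
W = rng.normal(size=(STATE_DIM, STATE_DIM * N_ACTIONS))

def theta(x):
    """Relevance scores theta(x), shape (STATE_DIM, N_ACTIONS)."""
    return (x @ W).reshape(STATE_DIM, N_ACTIONS)

# Global explanation: mean absolute relevance of each state feature,
# averaged over actions and over a batch of sampled states.
states = rng.normal(size=(1000, STATE_DIM))
global_importance = np.mean(
    [np.abs(theta(s)).mean(axis=1) for s in states], axis=0
)

# Rank features from most to least influential on the policy overall.
ranking = np.argsort(-global_importance)
```

A local explanation answers "why this action in this state"; the aggregate above answers "which features matter across the task", which is the kind of global rationale an operator of a mobile-network allocator would inspect.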
πŸ”Ž Similar Papers
No similar papers found.