Mastering Continual Reinforcement Learning through Fine-Grained Sparse Network Allocation and Dormant Neuron Exploration

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting, the fundamental trade-off between plasticity and stability in continual reinforcement learning, this paper proposes the SSDE framework. Methodologically, SSDE integrates structured sparse coding, task-adaptive parameter modulation, and a closed inference-retraining loop over the sparse policy network. Its key contributions are: (1) a novel fine-grained structured sparse parameter co-allocation mechanism that enables efficient network compression while facilitating cross-task knowledge sharing; and (2) a sensitivity-driven dormant-neuron reactivation strategy that dynamically balances parameter freezing and unfreezing, thereby enhancing exploratory capability and cross-task transferability. Evaluated on the CW10-v1 benchmark, SSDE achieves a 95% task success rate, substantially outperforming state-of-the-art methods while jointly delivering high plasticity and strong stability.

📝 Abstract
Continual Reinforcement Learning (CRL) is essential for developing agents that can learn, adapt, and accumulate knowledge over time. However, a fundamental challenge persists as agents must strike a delicate balance between plasticity, which enables rapid skill acquisition, and stability, which ensures long-term knowledge retention while preventing catastrophic forgetting. In this paper, we introduce SSDE, a novel structure-based approach that enhances plasticity through a fine-grained allocation strategy with Structured Sparsity and Dormant-guided Exploration. SSDE decomposes the parameter space into forward-transfer (frozen) parameters and task-specific (trainable) parameters. Crucially, these parameters are allocated by an efficient co-allocation scheme under sparse coding, ensuring sufficient trainable capacity for new tasks while promoting efficient forward transfer through frozen parameters. However, structure-based methods often suffer from rigidity due to the accumulation of non-trainable parameters, limiting exploration and adaptability. To address this, we further introduce a sensitivity-guided neuron reactivation mechanism that systematically identifies and resets dormant neurons, which exhibit minimal influence in the sparse policy network during inference. This approach effectively enhances exploration while preserving structural efficiency. Extensive experiments on the CW10-v1 Continual World benchmark demonstrate that SSDE achieves state-of-the-art performance, reaching a success rate of 95%, surpassing prior methods significantly in both plasticity and stability trade-offs (code is available at: https://github.com/chengqiArchy/SSDE).
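The dormant-neuron reactivation described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the dormancy score (mean absolute activation normalized by the layer average), the threshold `tau`, and the Gaussian re-initialization are assumptions borrowed from the standard dormant-neuron criterion in the literature, while SSDE's own mechanism is sensitivity-guided.

```python
import numpy as np

def dormant_scores(activations):
    """Per-neuron dormancy score: mean |activation| over a batch,
    normalized by the layer-wide average activation."""
    mean_act = np.abs(activations).mean(axis=0)      # shape: (n_neurons,)
    return mean_act / (mean_act.mean() + 1e-8)

def reactivate_dormant(weights, activations, tau=0.1, rng=None):
    """Reset the incoming weights of neurons whose dormancy score
    falls below tau, restoring their capacity to learn (plasticity)."""
    rng = np.random.default_rng() if rng is None else rng
    scores = dormant_scores(activations)
    dormant = scores < tau                           # boolean mask over neurons
    fan_in = weights.shape[0]
    weights = weights.copy()
    # Re-initialize only the dormant columns; active neurons are untouched.
    weights[:, dormant] = rng.normal(
        0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, int(dormant.sum()))
    )
    return weights, dormant
```

A neuron that barely fires relative to its layer is flagged and its incoming weights are re-drawn, which is the mechanism the abstract credits with counteracting the rigidity of accumulated frozen parameters.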
Problem

Research questions and friction points this paper is trying to address.

Balancing plasticity and stability in continual reinforcement learning.
Enhancing exploration and adaptability in sparse network structures.
Preventing catastrophic forgetting while acquiring new skills efficiently.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained sparse network allocation strategy
Dormant neuron reactivation mechanism
Structured sparsity for efficient forward transfer
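The co-allocation idea behind these contributions can be sketched with binary masks over a shared weight tensor: parameters claimed by earlier tasks stay frozen and are reused for forward transfer, while a fraction of the still-free parameters is allocated as trainable capacity for the new task. The function below is an illustrative sketch under those assumptions; `trainable_frac` and the uniform-random allocation are hypothetical simplifications of the paper's fine-grained scheme.

```python
import numpy as np

def allocate_task_masks(shape, prev_masks, trainable_frac=0.2, rng=None):
    """Split a shared weight tensor into a frozen (forward-transfer)
    mask reused from earlier tasks and a fresh trainable mask drawn
    from the remaining free parameters."""
    rng = np.random.default_rng() if rng is None else rng
    frozen = np.zeros(shape, dtype=bool)
    for m in prev_masks:                 # union of all earlier tasks' masks
        frozen |= m
    free_idx = np.flatnonzero(~frozen)   # parameters no task has claimed yet
    n_train = int(trainable_frac * free_idx.size)
    chosen = rng.choice(free_idx, size=n_train, replace=False)
    trainable = np.zeros(shape, dtype=bool)
    trainable.flat[chosen] = True
    return frozen, trainable
```

During training on the new task, gradients would be applied only where `trainable` is True; the frozen subnetwork still contributes to the forward pass, which is what makes cross-task knowledge sharing cheap.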
Chengqi Zheng
Nanyang Technological University
Reinforcement Learning, Agentic AI
Haiyan Yin
Unknown affiliation
Reinforcement Learning, Machine Learning
Jianda Chen
Nanyang Technological University
Terrence Ng
College of Computing and Data Science, Nanyang Technological University (NTU), Singapore
Y. Ong
CFAR and IHPC, Agency for Science, Technology and Research (A*STAR), Singapore; College of Computing and Data Science, Nanyang Technological University (NTU), Singapore
Ivor Tsang
CFAR and IHPC, Agency for Science, Technology and Research (A*STAR), Singapore; College of Computing and Data Science, Nanyang Technological University (NTU), Singapore