🤖 AI Summary
This work addresses the challenge of plasticity loss in deep reinforcement learning caused by non-stationarity, which undermines continual learning capabilities. For the first time, it reveals two underlying mechanisms from the perspective of Neural Tangent Kernel (NTK) theory: rank collapse of the NTK Gram matrix and gradient magnitude decay at a Θ(1/k) rate. To mitigate the gradient decay, the authors propose a lightweight, general-purpose sample-weight decay strategy that is orthogonal to existing methods. Integrated with experience replay in algorithms such as TD3, Double DQN, and SAC, and within the SimBa architecture, the approach consistently alleviates plasticity loss and yields stable performance gains across multiple benchmarks, including MuJoCo, ALE, and the DeepMind Control Suite. Notably, it achieves state-of-the-art results on the DMC Humanoid task.
📝 Abstract
Deep reinforcement learning (RL) suffers severely from plasticity loss due to its inherent non-stationarity, which impairs the ability to adapt to new data and learn continually. Unfortunately, our understanding of how plasticity loss arises, evolves, and can be resolved remains limited to empirical findings, leaving the theoretical end underexplored. To address this gap, we study the plasticity loss problem from the theoretical perspective of network optimization. By formally characterizing the two culprit factors in the online RL process, the non-stationarity of data distributions and the non-stationarity of targets induced by bootstrapping, our theory attributes the loss of plasticity to two mechanisms: the rank collapse of the Neural Tangent Kernel (NTK) Gram matrix and the $\Theta(\frac{1}{k})$ decay of gradient magnitude. The first mechanism echoes prior empirical findings from the theoretical perspective and sheds light on the effects of existing methods, e.g., network reset, neuron recycling, and noise injection. Against this backdrop, we focus primarily on the second mechanism and aim to alleviate plasticity loss by addressing the gradient attenuation issue, which is orthogonal to existing methods. We propose Sample Weight Decay -- a lightweight method to restore gradient magnitude -- as a general remedy to plasticity loss for deep RL methods based on experience replay. In experiments, we evaluate the efficacy of Sample Weight Decay upon TD3, Double DQN, and SAC with the SimBa architecture on MuJoCo, ALE, and DeepMind Control Suite tasks. The results demonstrate that Sample Weight Decay effectively alleviates plasticity loss and consistently improves learning performance across various configurations of deep RL algorithms, UTD ratios, network architectures, and environments, achieving state-of-the-art performance on the challenging DMC Humanoid tasks.
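The abstract does not spell out the weighting scheme, but the core idea of re-weighting replayed samples to counteract gradient attenuation can be illustrated with a minimal sketch. Everything below is hypothetical: the exponential-in-age weighting, the `decay` hyperparameter, and the function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sample_weights(ages, decay=0.99):
    """Hypothetical per-sample weights: samples that have already been
    replayed many times (large `ages`) receive smaller weight, so fresher
    samples dominate the update and the gradient magnitude, which the
    theory shows decays at a Theta(1/k) rate, is partially restored."""
    return decay ** np.asarray(ages, dtype=float)

def weighted_td_loss(td_errors, ages, decay=0.99):
    """Weighted mean-squared TD error over a replay minibatch, with the
    weights normalized so the overall loss scale is preserved."""
    w = sample_weights(ages, decay)
    w = w / w.sum()  # normalize to keep the effective learning rate stable
    return float(np.sum(w * np.square(np.asarray(td_errors, dtype=float))))
```

In an actual agent such as TD3 or SAC, these weights would multiply the per-sample critic loss before averaging over the minibatch, which is why the method composes naturally with any experience-replay-based algorithm.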