Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep reinforcement learning (DRL) models often suffer from training instability and performance degradation during scaling due to neural network pathologies—such as gradient interference and plasticity collapse—while existing architectural mitigations (e.g., LayerNorm, periodic reset) incur non-negligible computational overhead. Method: We propose a lightweight static sparsification strategy: a single, one-time random pruning step applied prior to training, introducing no additional parameters or runtime computation. Contribution/Results: We provide the first theoretical analysis and empirical validation demonstrating that such simple static sparsification can effectively replace complex architectural modifications. Across visual and streaming DRL settings, it significantly improves training stability, parameter efficiency, and robustness to interference. On multi-task benchmarks, sparsified models consistently outperform dense baselines, thereby alleviating a key bottleneck in DRL model scaling.

📝 Abstract
Effectively scaling up deep reinforcement learning models has proven notoriously difficult due to network pathologies during training, motivating various targeted interventions such as periodic reset and architectural advances such as layer normalization. Instead of pursuing more complex modifications, we show that introducing static network sparsity alone can unlock further scaling potential beyond their dense counterparts with state-of-the-art architectures. This is achieved through simple one-shot random pruning, where a predetermined percentage of network weights are randomly removed once before training. Our analysis reveals that, in contrast to naively scaling up dense DRL networks, such sparse networks achieve both higher parameter efficiency for network expressivity and stronger resistance to optimization challenges like plasticity loss and gradient interference. We further extend our evaluation to visual and streaming RL scenarios, demonstrating the consistent benefits of network sparsity.
Problem

Research questions and friction points this paper is trying to address.

Scaling deep reinforcement learning models effectively
Addressing network pathologies during training
Improving parameter efficiency and resistance to optimization pathologies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Static network sparsity enables scaling
One-shot random pruning before training
Sparse networks resist optimization challenges
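The core method described above, one-shot random pruning, amounts to sampling a fixed binary mask per layer before training and keeping it unchanged thereafter. A minimal sketch of that idea (not the authors' implementation; function and variable names are illustrative):

```python
import numpy as np

def one_shot_random_prune(weights, sparsity, seed=0):
    """Zero out a random, fixed fraction of each layer's weights once.

    The mask is sampled a single time before training; in a real
    training loop it would be reapplied after every gradient step so
    that pruned weights stay at zero.
    """
    rng = np.random.default_rng(seed)
    masks, pruned = [], []
    for w in weights:
        # Keep roughly (1 - sparsity) of the entries, chosen uniformly at random.
        mask = rng.random(w.shape) >= sparsity
        masks.append(mask)
        pruned.append(w * mask)
    return pruned, masks

# Example: prune 90% of a small two-layer MLP's weight matrices.
layers = [np.ones((256, 256)), np.ones((256, 64))]
sparse_layers, masks = one_shot_random_prune(layers, sparsity=0.9)
density = sum(m.sum() for m in masks) / sum(m.size for m in masks)
```

Because the mask is static, this introduces no extra parameters and no per-step pruning computation beyond an elementwise multiply, which is the lightweight property the paper contrasts with interventions like periodic resets.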