🤖 AI Summary
Static sparse training often suffers from structural rigidity due to early pruning, which impedes adaptation to the dynamic distribution shifts induced by policy evolution in deep reinforcement learning. To address this limitation, this work proposes a lightweight, one-shot topology-aware connection revival mechanism: after initial static pruning, a small budget is allocated per layer according to topological demands, a subset of pruned connections is restored uniformly at random, and the resulting sparse architecture is then fixed for the remainder of training. This approach achieves substantial performance and robustness gains without requiring dynamic rewiring. Experiments on continuous control benchmarks using SAC and TD3 demonstrate that the method improves final returns by up to 37.9% over static sparse baselines and surpasses existing dynamic sparse methods by a median margin of 13.5%.
📝 Abstract
Static sparse training is a promising route to efficient learning because it commits to a fixed mask pattern, yet the constrained structure reduces robustness. Early pruning decisions can lock the network into a brittle structure that is difficult to escape, especially in deep reinforcement learning (RL), where the evolving policy continually shifts the training distribution. We propose Topology-Aware Revival (TAR), a lightweight one-shot post-pruning procedure that improves static sparsity without dynamic rewiring. After static pruning, TAR performs a single revival step: it allocates a small reserve budget across layers according to topology needs, reactivates a few previously pruned connections within each layer uniformly at random, and then keeps the resulting connectivity fixed for the remainder of training. Across multiple continuous-control tasks with SAC and TD3, TAR improves final return over static sparse baselines by up to +37.9% and also outperforms dynamic sparse training baselines with a median gain of +13.5%.
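The one-shot revival step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper does not specify the exact topology-based allocation rule, so here the per-layer budget is simply taken proportional to each layer's number of pruned connections (an assumption), while the uniform-random reactivation and the subsequently frozen mask follow the description.

```python
import numpy as np

def topology_aware_revival(masks, revival_frac=0.02, rng=None):
    """One-shot revival of pruned connections (hedged sketch of TAR).

    masks: list of binary arrays (1 = active, 0 = pruned), one per layer.
    revival_frac: revival budget as a fraction of each layer's pruned
        connections -- a stand-in for the paper's topology-aware
        allocation, which is not detailed here (assumption).
    """
    rng = np.random.default_rng(rng)
    revived = []
    for m in masks:
        m = m.copy()
        pruned_idx = np.flatnonzero(m == 0)          # pruned positions
        budget = int(revival_frac * pruned_idx.size)  # per-layer budget
        if budget > 0:
            # Reactivate a uniform random subset of pruned connections.
            chosen = rng.choice(pruned_idx, size=budget, replace=False)
            m.flat[chosen] = 1
        revived.append(m)  # connectivity stays fixed for the rest of training
    return revived
```

After this single call, the returned masks would be held fixed, so training proceeds exactly as in static sparse training, just on a slightly denser, revived topology.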