Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In deep reinforcement learning, neural networks often suffer from plasticity loss, a diminishing ability to adapt to new tasks as training continues, yet the field lacks unified benchmarks and evaluation protocols for studying it. To address this gap, the authors propose Plasticine, the first open-source framework for benchmarking plasticity optimization in deep RL. Plasticine provides single-file PyTorch implementations of over 13 plasticity-loss mitigation methods, 10 quantitative plasticity metrics, and learning scenarios with increasing levels of non-stationarity, ranging from standard to open-ended environments. The framework enables researchers to systematically quantify plasticity loss, compare mitigation strategies, and analyze plasticity dynamics across contexts, establishing a reproducible methodology for evaluating plasticity in deep RL.

📝 Abstract
Developing lifelong learning agents is crucial for artificial general intelligence. However, deep reinforcement learning (RL) systems often suffer from plasticity loss, where neural networks gradually lose their ability to adapt during training. Despite its significance, this field lacks unified benchmarks and evaluation protocols. We introduce Plasticine, the first open-source framework for benchmarking plasticity optimization in deep RL. Plasticine provides single-file implementations of over 13 mitigation methods, 10 evaluation metrics, and learning scenarios with increasing non-stationarity levels from standard to open-ended environments. This framework enables researchers to systematically quantify plasticity loss, evaluate mitigation strategies, and analyze plasticity dynamics across different contexts. Our documentation, examples, and source code are available at https://github.com/RLE-Foundation/Plasticine.
Problem

Research questions and friction points this paper is trying to address.

Addressing plasticity loss in deep reinforcement learning systems
Lack of unified benchmarks for plasticity optimization in RL
Providing tools to evaluate and mitigate plasticity loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source framework for plasticity optimization
Single-file implementations of over 13 mitigation methods
10 evaluation metrics and learning scenarios of increasing non-stationarity
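As an illustration of the kind of quantitative plasticity metric such a framework tracks, the sketch below computes a dormant-unit fraction, the share of units in a layer whose activation has effectively collapsed, a commonly used indicator of plasticity loss. This is a generic illustration, not Plasticine's actual API; the function name and the threshold `tau` are illustrative choices.

```python
import numpy as np

def dormant_fraction(activations: np.ndarray, tau: float = 0.025) -> float:
    """Fraction of units whose normalized mean activation falls below tau.

    `activations` has shape (batch, units). A unit's score is its mean
    absolute activation divided by the layer-wide average, following the
    common "dormant neuron" formulation; `tau` is an illustrative threshold.
    """
    mean_act = np.abs(activations).mean(axis=0)   # per-unit mean |activation|
    scores = mean_act / (mean_act.mean() + 1e-8)  # normalize by layer average
    return float((scores <= tau).mean())          # share of near-dead units

# Example: a layer where half the units are completely silent.
acts = np.concatenate([np.zeros((64, 8)), np.ones((64, 8))], axis=1)
print(dormant_fraction(acts))  # 0.5
```

Monitoring such a metric online over the course of non-stationary training is what lets a benchmark compare how well different mitigation methods preserve adaptability.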
Mingqi Yuan
PhD candidate at HKPU
Machine Learning
Qi Wang
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
Guozheng Ma
Nanyang Technological University
Reinforcement Learning, Deep Learning
Bo Li
Department of Computing, The Hong Kong Polytechnic University, China
Xin Jin
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Yunbo Wang
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
Tao Yu
Xiaokang Yang
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
Wenjun Zeng
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Dacheng Tao
Nanyang Technological University
artificial intelligence, machine learning, computer vision, image processing, data mining