Preserving Plasticity in Continual Learning with Adaptive Linearity Injection

📅 2025-05-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In continual learning, deep neural networks suffer from loss of plasticity, a diminished capacity to adapt to non-stationary task sequences that severely hinders incremental learning. To address this, the paper proposes Adaptive Linearization (AdaLin), a neuron-level mechanism that equips each neuron with a learnable parameter and a gate that modulates how much linearity is injected into its activation function. AdaLin mitigates gradient blocking and plasticity decay at the activation-function level without requiring task boundaries, additional hyperparameters, explicit regularization, or network resets. It is compatible with mainstream activation functions, including ReLU, Tanh, and GeLU, and integrates as a drop-in replacement. Evaluations across diverse continual learning benchmarks, including Random Label and Permuted MNIST, Random Label and Shuffled CIFAR-10, Class-Split CIFAR-100, class-incremental CIFAR-100 with a ResNet-18 backbone, and off-policy reinforcement learning, show consistent improvements in continual learning performance and stability.

📝 Abstract
Loss of plasticity in deep neural networks is the gradual reduction in a model's capacity to learn incrementally and has been identified as a key obstacle to learning in non-stationary problem settings. Recent work has shown that deep linear networks tend to be resilient to loss of plasticity. Motivated by this observation, we propose Adaptive Linearization (AdaLin), a general approach that dynamically adapts each neuron's activation function to mitigate plasticity loss. Unlike prior methods that rely on regularization or periodic resets, AdaLin equips every neuron with a learnable parameter and a gating mechanism that injects linearity into the activation function based on its gradient flow. This adaptive modulation ensures sufficient gradient signal and sustains continual learning without introducing additional hyperparameters or requiring explicit task boundaries. When used with conventional activation functions like ReLU, Tanh, and GeLU, we demonstrate that AdaLin can significantly improve performance on standard benchmarks, including Random Label and Permuted MNIST, Random Label and Shuffled CIFAR-10, and Class-Split CIFAR-100. Furthermore, its efficacy is shown in more complex scenarios, such as class-incremental learning on CIFAR-100 with a ResNet-18 backbone, and in mitigating plasticity loss in off-policy reinforcement learning agents. We perform a systematic set of ablations showing that neuron-level adaptation is crucial for good performance, and analyze a number of network metrics that might be correlated with loss of plasticity.
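The abstract describes each neuron mixing a linear pathway into its activation via a learnable parameter and a gradient-based gate. The sketch below is an illustrative guess at that idea, not the paper's exact formulation: the function name `adalin_act`, the sigmoid squashing of the per-neuron parameter, and the specific gate form `1 - |act'(x)|` (open where the base activation passes little gradient) are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    return (x > 0.0).astype(float)

def adalin_act(x, alpha, act=relu, act_grad=relu_grad):
    """Hypothetical neuron-level linearity injection (not the paper's exact rule).

    Each neuron has a learnable alpha, squashed to [0, 1], scaling a linear
    pathway; the gate opens where the base activation's local derivative is
    small, i.e. where gradient flow through act() would otherwise be blocked.
    """
    gate = 1.0 - np.abs(act_grad(x))
    return act(x) + sigmoid(alpha) * gate * x
```

With ReLU, negative pre-activations (dead units) receive a scaled linear signal instead of a zero gradient, while positive pre-activations are left unchanged, which is the intuition behind injecting linearity only where gradient flow stalls.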
Problem

Research questions and friction points this paper is trying to address.

Mitigating plasticity loss in continual learning
Adapting neuron activation functions dynamically
Enhancing performance in non-stationary learning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Linearization (AdaLin) dynamically adjusts activation functions
Learnable parameters and gating mechanism inject linearity
Improves continual learning without extra hyperparameters