A Simple Baseline for Stable and Plastic Neural Networks

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Continual learning inherently faces a trade-off between stability (preventing catastrophic forgetting) and plasticity (acquiring new tasks). To address this, the paper proposes RDBP, a lightweight, parameter-free framework comprising two synergistic components: (i) ReLUDown, a modified activation function that mitigates neuron deactivation and enhances feature reuse; and (ii) Decreasing Backpropagation, which progressively freezes shallow-layer parameters as tasks progress, safeguarding previously learned knowledge. Neither mechanism adds parameters or computational overhead during training or inference. Evaluated on the Continual ImageNet benchmark, RDBP matches or exceeds state-of-the-art accuracy while reducing computational cost. The approach establishes an efficiency-aware baseline for scalable continual visual learning, balancing performance, memory preservation, and resource efficiency without architectural or optimization complexity.

📝 Abstract
Continual learning in computer vision requires that models adapt to a continuous stream of tasks without forgetting prior knowledge, yet existing approaches often tip the balance heavily toward either plasticity or stability. We introduce RDBP, a simple, low-overhead baseline that unites two complementary mechanisms: ReLUDown, a lightweight activation modification that preserves feature sensitivity while preventing neuron dormancy, and Decreasing Backpropagation, a biologically inspired gradient-scheduling scheme that progressively shields early layers from catastrophic updates. Evaluated on the Continual ImageNet benchmark, RDBP matches or exceeds the plasticity and stability of state-of-the-art methods while reducing computational cost. RDBP thus provides both a practical solution for real-world continual learning and a clear benchmark against which future continual learning strategies can be measured.
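The "neuron dormancy" that ReLUDown targets can be illustrated with a minimal sketch. Note the paper's actual activation formula is not given in this summary, so the `relu_down` below is a hypothetical leaky-slope stand-in used only to show the idea: keeping a nonzero response (and gradient) on the negative side so units can recover instead of going permanently silent.

```python
# Illustrative sketch only: `relu_down` is NOT the paper's formula, just a
# hypothetical stand-in showing how a nonzero negative-side response
# prevents units from becoming dormant (zero output, zero gradient).

def relu(x: float) -> float:
    return max(x, 0.0)

def relu_down(x: float, slope: float = 0.01) -> float:
    # Assumed form: a small negative-side slope keeps the gradient
    # nonzero, so a unit pushed into the negative regime can still adapt.
    return x if x > 0 else slope * x

def dormant_fraction(pre_activations, act, eps=1e-12):
    # Fraction of inputs for which the unit outputs (effectively) zero,
    # i.e. contributes no learning signal.
    return sum(1 for x in pre_activations if abs(act(x)) < eps) / len(pre_activations)

inputs = [-2.0, -0.5, -0.1, 0.3, 1.5]
print(dormant_fraction(inputs, relu))       # 0.6 -> three negative inputs give zero
print(dormant_fraction(inputs, relu_down))  # 0.0 -> negative side stays responsive
```

Any activation with this property would serve for the illustration; the paper's specific design additionally aims to preserve feature sensitivity, per the abstract.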
Problem

Research questions and friction points this paper is trying to address.

Balancing plasticity and stability in continual learning
Preventing neuron dormancy with lightweight activation modification
Reducing computational cost in continual learning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReLUDown activation prevents neuron dormancy
Decreasing Backpropagation shields early layers
RDBP balances plasticity and stability efficiently
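The Decreasing Backpropagation idea above can be sketched as a freeze schedule. The exact schedule is not specified in this summary, so the one-extra-frozen-layer-per-task rule below is an assumption chosen purely for illustration.

```python
# Hedged sketch of Decreasing Backpropagation: as tasks accumulate,
# gradients are withheld from progressively more shallow layers. The
# "freeze one additional early layer per completed task" schedule is an
# assumption, not the paper's published rule.

def trainable_layers(num_layers: int, task_index: int) -> list:
    # Layers 0..frozen-1 (the shallowest) receive no gradient updates;
    # at least one deep layer always remains trainable.
    frozen = min(task_index, num_layers - 1)
    return list(range(frozen, num_layers))

for task in range(4):
    print(task, trainable_layers(5, task))
# task 0 trains all five layers; by task 3 the three shallowest are frozen
```

In a real training loop this schedule would typically be applied by disabling gradient flow into the frozen layers (e.g. via `requires_grad = False` in PyTorch), which also skips their backward computation and is consistent with the summary's claim of reduced cost.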