Synaptic Pruning: A Biological Inspiration for Deep Learning Regularization

📅 2025-08-12
🤖 AI Summary
To address the lack of biological plausibility in conventional neural-network regularization, this paper proposes a biologically inspired, magnitude-based dynamic pruning method that replaces dropout, motivated by synaptic pruning in the brain. The method progressively enforces global sparsification during training via a cubic scheduling strategy that adaptively adjusts the pruning rate, with no additional fine-tuning phase required. Weight importance is assessed uniformly across layers from absolute weight magnitudes, and binary masks are updated accordingly to maintain stable gradient flow. Evaluated with RNNs, LSTMs, and PatchTST on time-series forecasting across four benchmark datasets, the approach consistently outperforms baselines: it reduces MAE by up to 20% in financial forecasting and by up to 52% for certain Transformer-based models. Friedman tests confirm statistical significance (p < 0.01), and the method ranks first in overall performance.
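The paper's exact schedule formula isn't reproduced on this page, but the description (cubic, progressively increasing global sparsity over training) matches the widely used cubic pruning schedule of Zhu & Gupta (2017). A minimal sketch under that assumption, with function and parameter names invented for illustration:

```python
def cubic_sparsity(step, start_step, end_step, s_init=0.0, s_final=0.8):
    """Cubic sparsity schedule (assumed form, following Zhu & Gupta 2017).

    Ramps global sparsity from s_init to s_final over the pruning window:
        s_t = s_final + (s_init - s_final) * (1 - p)^3,
    where p is the fraction of the window elapsed. The cubic term makes
    pruning aggressive early and gentle near the end of training.
    """
    if step <= start_step:
        return s_init
    if step >= end_step:
        return s_final
    progress = (step - start_step) / (end_step - start_step)
    return s_final + (s_init - s_final) * (1.0 - progress) ** 3
```

At each pruning interval, the current target sparsity would be read off this curve and used to pick how many weights to mask globally.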

📝 Abstract
Synaptic pruning in biological brains removes weak connections to improve efficiency. In contrast, dropout regularization in artificial neural networks randomly deactivates neurons without considering activity-dependent pruning. We propose a magnitude-based synaptic pruning method that better reflects biology by progressively removing low-importance connections during training. Integrated directly into the training loop as a dropout replacement, our approach computes weight importance from absolute magnitudes across layers and applies a cubic schedule to gradually increase global sparsity. At fixed intervals, pruning masks permanently remove low-importance weights while maintaining gradient flow for active ones, eliminating the need for separate pruning and fine-tuning phases. Experiments on multiple time series forecasting models, including RNN, LSTM, and the Patch Time Series Transformer (PatchTST), across four datasets show consistent gains. Our method ranked best overall, with statistically significant improvements confirmed by Friedman tests (p < 0.01). In financial forecasting, it reduced Mean Absolute Error by up to 20% over models with no dropout or standard dropout, and by up to 52% in select Transformer models. This dynamic pruning mechanism advances regularization by coupling weight elimination with progressive sparsification, offering easy integration into diverse architectures. Its strong performance, especially in financial time series forecasting, highlights its potential as a practical alternative to conventional dropout techniques.
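The mask update described in the abstract (uniform, layer-agnostic importance from absolute magnitudes, with permanent removal of pruned weights) could look roughly like this NumPy sketch; the function name and dict-based weight layout are assumptions, not the authors' code:

```python
import numpy as np

def update_masks(weights, masks, sparsity):
    """Update binary pruning masks by global magnitude ranking.

    weights:  dict name -> np.ndarray of current weights
    masks:    dict name -> np.ndarray of {0, 1} masks (same shapes)
    sparsity: target fraction of all weights to zero out globally

    Importance is |w|, pooled across layers, matching the uniform
    layer-agnostic criterion the abstract describes. Multiplying by
    the old mask makes removal permanent: once pruned, always pruned.
    """
    # Pool absolute magnitudes of every weight across all layers.
    mags = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    k = int(sparsity * mags.size)          # number of weights to prune
    if k == 0:
        return masks
    threshold = np.partition(mags, k - 1)[k - 1]   # k-th smallest magnitude
    new_masks = {}
    for name, w in weights.items():
        new_masks[name] = masks[name] * (np.abs(w) > threshold).astype(w.dtype)
    return new_masks
```

During training, the mask would be multiplied into each layer's weights on the forward pass, so gradients still flow through the surviving connections while pruned ones stay at zero.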
Problem

Research questions and friction points this paper is trying to address.

Proposing a biologically inspired synaptic pruning method for neural networks
Replacing dropout with dynamic weight elimination during training
Improving time series forecasting accuracy across diverse models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Magnitude-based synaptic pruning mimics biology
Cubic schedule gradually increases global sparsity
Eliminates separate pruning and fine-tuning phases
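The points above come together in a single training loop: sparsity is raised on a cubic ramp while optimization runs, so there is no separate prune-then-finetune stage. The toy setup below (a linear model fit by plain NumPy gradient descent on a synthetic sparse-ground-truth task, with a 75% target sparsity and all hyperparameters invented for illustration) is only a sketch of that integration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task with a sparse ground truth: only 4 of 16
# coefficients are nonzero, so magnitude pruning should recover them.
X = rng.normal(size=(256, 16))
true_w = np.zeros(16)
true_w[:4] = [2.0, -1.5, 1.0, 0.5]
y = X @ true_w

w = rng.normal(scale=0.1, size=16)   # model weights
mask = np.ones(16)                   # binary pruning mask (1 = active)
lr, total_steps, prune_every = 0.1, 300, 50

for step in range(1, total_steps + 1):
    # Standard gradient step on the masked weights.
    grad = X.T @ (X @ (w * mask) - y) / len(y)
    w = (w - lr * grad) * mask       # pruned weights stay at zero

    # Dynamic pruning inside the loop: raise global sparsity on a
    # cubic ramp and permanently remove the smallest-magnitude weights.
    if step % prune_every == 0:
        s = 0.75 * (1.0 - (1.0 - step / total_steps) ** 3)
        k = int(s * w.size)          # how many weights to prune in total
        if k:
            thresh = np.partition(np.abs(w), k - 1)[k - 1]
            mask = mask * (np.abs(w) > thresh)
            w = w * mask
```

On this toy problem the loop ends with exactly the four true coefficients active, illustrating why no separate fine-tuning phase is needed: the surviving weights keep receiving gradient updates after every pruning event.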