SAL: Selective Adaptive Learning for Backpropagation-Free Training with Sparsification

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard backpropagation rests on biologically implausible assumptions, such as weight symmetry, and suffers from gradient interference caused by dense activations. This work proposes Selective Adaptive Learning (SAL), a novel training framework that circumvents explicit weight symmetry by partitioning the parameter space into sample-dependent, mutually exclusive regions and activating only the relevant subsets during the forward and backward passes. SAL integrates adaptive sparsification, selective parameter updates, and a refined feedback alignment strategy to deliver stable and efficient learning. The method achieves convergence speed and accuracy comparable to backpropagation across ten standard benchmarks, scales to models with up to 128 layers and billions of parameters, and demonstrates strong numerical stability and scalability without relying on symmetric weight transport.

📝 Abstract
Standard deep learning relies on Backpropagation (BP), which is constrained by biologically implausible weight symmetry and suffers from significant gradient interference within dense representations. To mitigate these bottlenecks, we propose Selective Adaptive Learning (SAL), a training method that combines selective parameter activation with adaptive area partitioning. Specifically, SAL decomposes the parameter space into mutually exclusive, sample-dependent regions. This decoupling mitigates gradient interference across divergent semantic patterns and addresses explicit weight symmetry requirements through our refined feedback alignment. Empirically, SAL demonstrates competitive convergence rates, leading to improved classification performance across 10 standard benchmarks. Additionally, SAL achieves numerical consistency and competitive accuracy even in deep regimes (up to 128 layers) and large-scale models (up to 1B parameters). Our approach is loosely inspired by biological learning mechanisms, offering a plausible alternative that contributes to the study of scalable neural network training.
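The mechanism described in the abstract — sample-dependent selective activation, masked parameter updates, and feedback alignment in place of the transposed forward weights — can be illustrated with a toy NumPy sketch. Everything here is an assumption for illustration: the top-k mask stands in for SAL's mutually exclusive regions, and the network, data, and hyperparameters are invented; this is not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only, not the paper's benchmarks).
X = rng.normal(size=(256, 16))
y = np.tanh(X @ rng.normal(size=(16, 1)))

# Forward weights plus a FIXED random feedback matrix B: feedback
# alignment propagates the error through B instead of W2.T, so no
# symmetric weight transport is required.
W1 = rng.normal(scale=0.1, size=(16, 64))
W2 = rng.normal(scale=0.1, size=(64, 1))
B = rng.normal(scale=0.1, size=(1, 64))   # never updated

def topk_mask(h_pre, k):
    # Sample-dependent sparsification: keep only the k largest
    # pre-activations per sample -- a hypothetical stand-in for SAL's
    # mutually exclusive, sample-dependent regions.
    thresh = np.sort(h_pre, axis=1)[:, -k][:, None]
    return (h_pre >= thresh).astype(h_pre.dtype)

lr, losses = 0.05, []
for step in range(300):
    h_pre = X @ W1
    m = topk_mask(h_pre, k=16)
    h = np.tanh(h_pre) * m            # selective activation
    out = h @ W2
    err = out - y                     # d(0.5 * MSE) / d(out)
    losses.append(float(np.mean(err ** 2)))
    # Backward: random feedback B replaces W2.T; the same mask m
    # restricts the hidden-layer update to the active subset.
    dh = (err @ B) * (1.0 - np.tanh(h_pre) ** 2) * m
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ dh / len(X)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the mask zeroes both the activations and their updates, each sample only touches its own subset of hidden units, which is the intuition behind mitigating gradient interference across divergent patterns.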
Problem

Research questions and friction points this paper is trying to address.

Backpropagation
weight symmetry
gradient interference
dense representations
biological plausibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective Adaptive Learning
Backpropagation-Free Training
Gradient Interference Mitigation
Feedback Alignment
Sparsification
Fanping Liu
ROCK AI; Renmin University of China
Hua Yang
Redrock Biometrics
Biometrics, Motion Tracking, Computer Vision, Augmented Reality, Image Processing
Jiasi Zou
ROCK AI