Closed-Form Feedback-Free Learning with Forward Projection

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional neural network training relies on backpropagation, which is neither biologically plausible nor compatible with practical constraints such as feedback-free communication. Method: This paper proposes Forward Projection (FP), a novel learning paradigm that performs weight updates in a single forward pass—using inter-layer nonlinear mappings to generate target membrane potentials and solving for local weights in closed form—eliminating gradient computation and feedback signals entirely. Contribution/Results: FP yields inherently interpretable, class-predictive membrane potentials in hidden layers. Evaluated on four biomedical datasets, FP achieves superior generalization in few-shot settings compared to backpropagation; matches the accuracy of local-gradient methods in large-sample regimes while significantly accelerating training; and provides clinically meaningful interpretability by identifying diagnostically critical features. To our knowledge, FP is the first neural training framework that is fully feedforward, feedback-free, and analytically solvable.

📝 Abstract
State-of-the-art methods for backpropagation-free learning employ local error feedback to direct iterative optimisation via gradient descent. In this study, we examine the more restrictive setting where retrograde communication from neuronal outputs is unavailable for pre-synaptic weight optimisation. To address this challenge, we propose Forward Projection (FP). This novel randomised closed-form training method requires only a single forward pass over the entire dataset for model fitting, without retrograde communication. Target values for pre-activation membrane potentials are generated layer-wise via nonlinear projections of pre-synaptic inputs and the labels. Local loss functions are optimised over pre-synaptic inputs using closed-form regression, without feedback from neuronal outputs or downstream layers. Interpretability is a key advantage of FP training; membrane potentials of hidden neurons in FP-trained networks encode information which is interpretable layer-wise as label predictions. We demonstrate the effectiveness of FP across four biomedical datasets. In few-shot learning tasks, FP yielded more generalisable models than those optimised via backpropagation. In large-sample tasks, FP-based models achieve generalisation comparable to gradient descent-based local learning methods while requiring only a single forward propagation step, achieving a significant training speed-up. Interpretation functions defined on local neuronal activity in FP-based models successfully identified clinically salient features for diagnosis in two biomedical datasets. Forward Projection is a computationally efficient machine learning approach that yields interpretable neural network models without retrograde communication of neuronal activity during training.
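The abstract's recipe — layer-wise targets from a nonlinear projection of inputs and labels, then closed-form local regression — can be sketched in NumPy. This is an illustrative reading of the idea, not the paper's implementation: the target-generation function, layer widths, `tanh` activations, and ridge penalty are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_fp_layer(X, Y, width, reg=1e-3):
    """Fit one layer in a single closed-form step, with no feedback.

    Targets for the pre-activation membrane potentials are a fixed random
    nonlinear projection of the pre-synaptic inputs X and one-hot labels Y
    (a hypothetical choice of target generator); the layer weights are then
    solved locally by ridge regression against those targets.
    """
    P = rng.standard_normal((X.shape[1] + Y.shape[1], width))
    T = np.tanh(np.hstack([X, Y]) @ P)      # target membrane potentials
    A = X.T @ X + reg * np.eye(X.shape[1])  # ridge normal equations
    return np.linalg.solve(A, X.T @ T)      # closed-form local weights

# Toy two-class problem: two well-separated Gaussian blobs.
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 5)), rng.normal(2, 1, (n, 5))])
y = np.repeat([0, 1], n)
Y = np.eye(2)[y]

# Two FP layers fitted in one forward pass, then a ridge readout.
W1 = fit_fp_layer(X, Y, width=16)
H1 = np.tanh(X @ W1)
W2 = fit_fp_layer(H1, Y, width=16)
H2 = np.tanh(H1 @ W2)
Wo = np.linalg.solve(H2.T @ H2 + 1e-3 * np.eye(16), H2.T @ Y)
acc = (np.argmax(H2 @ Wo, axis=1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Labels enter only through the training-time target generator; at inference only the solved weights are used, so the network remains purely feedforward, consistent with the feedback-free claim.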
Problem

Research questions and friction points this paper is trying to address.

Neural Networks
Weight Adjustment
Medical Data Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forward Projection
Single Pass Learning
Medical Data Analysis
Robert O'Shea
Centre for Intelligent Information Processing Systems, Department of Engineering, King’s College London, Strand, London WC2R 2LS, UK
Bipin Rajendran
Professor of Intelligent Computing Systems at King's College London
Nanoscale logic and memory devices; neuromorphic computation