Dynamical Learning in Deep Asymmetric Recurrent Neural Networks

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
A fundamental gap exists between artificial intelligence and computational neuroscience regarding learning mechanisms: mainstream AI relies on gradient-based optimization, a biologically implausible process. Method: We propose a gradient-free, distributed learning paradigm based on a deep asymmetric recurrent neural network. Sparse excitatory connections generate high-dimensional, dense, and geometrically stable manifolds of internal representations; input-output mappings emerge naturally through the recurrent dynamics, and a geometrically grounded learning rule, which exploits the stability of attractor configurations, keeps the learned configurations stable even after supervision is withdrawn. Contribution/Results: The model achieves competitive performance on standard AI benchmarks while maintaining biological plausibility and computational scalability, offering a brain-inspired, gradient-free alternative to gradient-based learning and advancing the pursuit of neuroscientifically grounded, scalable machine learning.
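The "recursive dynamics" in the summary can be illustrated with a minimal sketch, not the authors' code: a network of ±1 units with asymmetric random couplings plus a sparse overlay of positive (excitatory) ones, iterated asynchronously until the state stops changing. The network size N, the excitatory density p_exc, and the Gaussian coupling statistics below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500        # number of units (assumed)
p_exc = 0.05   # density of the extra sparse excitatory couplings (assumed)

# Asymmetric random couplings: J[i, j] and J[j, i] are drawn independently.
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

# Sparse overlay of strictly positive (excitatory) couplings.
mask = rng.random((N, N)) < p_exc
E = mask * rng.uniform(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(E, 0.0)

W = J + E

def iterate_to_fixed_point(W, s, max_sweeps=100):
    """Asynchronous sign dynamics: update one unit at a time until a full
    sweep changes nothing, i.e. s is a fixed point (a stable configuration)."""
    for sweep in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1.0 if W[i] @ s >= 0 else -1.0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            return s, sweep
    return s, max_sweeps  # asymmetric dynamics are not guaranteed to converge

s0 = rng.choice([-1.0, 1.0], size=N)
s_star, sweeps = iterate_to_fixed_point(W, s0.copy())
print(f"settled after {sweeps} sweeps")
```

In this picture, the paper's claim is that the sparse excitatory overlay makes such fixed points exponentially numerous, dense, and easy for simple dynamics like this to reach.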

📝 Abstract
We show that asymmetric deep recurrent neural networks, enhanced with additional sparse excitatory couplings, give rise to an exponentially large, dense, accessible manifold of internal representations that can be found by different algorithms, including simple iterative dynamics. Building on the geometrical properties of the stable configurations, we propose a distributed learning scheme in which input-output associations emerge naturally from the recurrent dynamics, without any need for gradient evaluation. A critical feature enabling the learning process is the stability of the configurations reached at convergence, even after removal of the supervisory output signal. Extensive simulations demonstrate that this approach performs competitively on standard AI benchmarks. The model can be generalized in multiple directions, both computational and biological, potentially contributing to narrowing the gap between AI and computational neuroscience.
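The distributed learning scheme can be sketched in the same hedged spirit. The layer sizes, learning rate, and the Hebbian-style local update below are stand-in assumptions, not the paper's actual rule; what the sketch shows is the shape of the loop: clamp the target output, let the hidden units relax, update weights locally without computing any gradient, then remove the supervision and check that the configuration stays stable.

```python
import numpy as np

rng = np.random.default_rng(1)
N_in, N_hid, N_out = 100, 300, 10   # layer sizes (assumed)
lr = 0.05                            # learning rate (assumed)

# One recurrent hidden block; input x and target output y are clamped
# during supervision via the feed-in couplings W_ih and W_ho.
W_hh = rng.normal(0, 1 / np.sqrt(N_hid), (N_hid, N_hid))
np.fill_diagonal(W_hh, 0.0)
W_ih = rng.normal(0, 1 / np.sqrt(N_in),  (N_hid, N_in))
W_ho = rng.normal(0, 1 / np.sqrt(N_out), (N_hid, N_out))
W_oh = rng.normal(0, 1 / np.sqrt(N_hid), (N_out, N_hid))

def relax(x, y, h, steps=50):
    """Let the hidden units relax while input and target output stay clamped."""
    for _ in range(steps):
        h = np.sign(W_hh @ h + W_ih @ x + W_ho @ y)
        h[h == 0] = 1.0
    return h

x = rng.choice([-1.0, 1.0], N_in)
y = rng.choice([-1.0, 1.0], N_out)
h = rng.choice([-1.0, 1.0], N_hid)

for _ in range(200):  # repeated supervised presentations of one association
    h = relax(x, y, h)
    # Local, gradient-free update: align each unit's input field with its own
    # state at convergence (a Hebbian stand-in for the paper's geometric rule).
    W_hh += lr * np.outer(h, h) / N_hid
    np.fill_diagonal(W_hh, 0.0)
    W_oh += lr * np.outer(y, h) / N_hid

# Remove the supervisory signal: a successfully learned configuration should
# remain stable and reproduce the target output on its own.
y_free = np.sign(W_oh @ h)
y_free[y_free == 0] = 1.0
print("target recovered without supervision:", np.array_equal(y_free, y))
```

The final check mirrors the abstract's key feature: convergence persists after the supervisory output signal is withdrawn, so the association is read out from the free-running dynamics rather than from a clamped state.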
Problem

Research questions and friction points this paper is trying to address.

Developing asymmetric recurrent networks with sparse excitatory couplings
Creating stable internal representations without gradient evaluation
Enabling distributed learning through recurrent dynamics convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymmetric deep recurrent networks with sparse couplings
Distributed learning without gradient evaluation
Stable configurations after supervisory signal removal
Davide Badalotti
Department of Computing Sciences, Bocconi University, Milan, Italy
Carlo Baldassi
Bocconi University; ELLIS scholar
Optimization, Statistical Mechanics, Computational Biology, Complex Systems, Machine Learning
Marc Mézard
Department of Computing Sciences, Bocconi University, Milan, Italy
Mattia Scardecchia
Department of Computing Sciences, Bocconi University, Milan, Italy
Riccardo Zecchina
professor, theoretical physics, Bocconi University
statistical physics, optimisation and inference, machine learning, computational biology, computational neuroscience