Scaling Equilibrium Propagation to Deeper Neural Network Architectures

πŸ“… 2025-09-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Equilibrium Propagation (EP) suffers from poor scalability to deep networks and significantly lags behind backpropagation (BP) in performance. To address this, we propose Hopfield-ResNet: a novel architecture integrating Hopfield network dynamics, residual connections, and truncated ReLU activations to enhance training stability and convergence under EP. Furthermore, we design a biologically plausible yet computationally efficient EP variant based on local gradients. Experiments demonstrate that our approach nearly doubles the maximum trainable depth for EPβ€”scaling it to deeper architectures than previously feasible. On CIFAR-10, Hopfield-ResNet achieves 93.92% test accuracy, improving upon the prior state-of-the-art EP result by approximately 3.5 percentage points. This substantially narrows the performance gap with comparably sized BP-trained networks, marking a significant step toward scalable, biologically inspired deep learning.
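The summary's "EP variant based on local gradients" refers to equilibrium propagation's two-phase contrastive rule: the network relaxes once freely, once with the output nudged toward the target, and each weight is updated from locally available co-activations at the two equilibria. A minimal sketch of that update, using illustrative toy states and the truncated ReLU mentioned above (the paper's exact variant may differ):

```python
import numpy as np

def rho(s):
    # truncated (clipped) ReLU activation: identity on [0, 1], saturates outside
    return np.clip(s, 0.0, 1.0)

def ep_weight_update(s_free, s_nudged, beta, lr):
    """Local EP update: contrast neuron co-activations between the free
    equilibrium (nudging strength beta = 0) and the nudged equilibrium
    (beta > 0). Each weight change depends only on its two endpoint
    neurons, which is what makes the rule local."""
    return (lr / beta) * (np.outer(rho(s_nudged), rho(s_nudged))
                          - np.outer(rho(s_free), rho(s_free)))

# toy equilibrium states for a two-neuron layer (illustrative values only)
s_free = np.array([0.2, 0.8])
s_nudged = np.array([0.3, 0.9])
dW = ep_weight_update(s_free, s_nudged, beta=0.5, lr=0.1)
print(dW.shape)  # (2, 2)
```

Because the update is an outer-product difference, it is symmetric and needs no backward pass, which is what makes EP attractive for neuromorphic hardware.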

πŸ“ Abstract
Equilibrium propagation has been proposed as a biologically plausible alternative to the backpropagation algorithm. The local nature of gradient computations, combined with the use of convergent RNNs to reach equilibrium states, makes this approach well-suited for implementation on neuromorphic hardware. However, previous studies on equilibrium propagation have been restricted to networks containing only dense layers or relatively small architectures with a few convolutional layers followed by a final dense layer. These networks have a significant gap in accuracy compared to similarly sized feedforward networks trained with backpropagation. In this work, we introduce the Hopfield-Resnet architecture, which incorporates residual (or skip) connections in Hopfield networks with clipped $\mathrm{ReLU}$ as the activation function. The proposed architectural enhancements enable the training of networks with nearly twice the number of layers reported in prior works. For example, Hopfield-Resnet13 achieves 93.92% accuracy on CIFAR-10, which is $\approx$3.5% higher than the previous best result and comparable to that provided by Resnet13 trained using backpropagation.
Problem

Research questions and friction points this paper is trying to address.

Scaling Equilibrium Propagation to deeper neural networks
Narrowing the accuracy gap relative to backpropagation-trained networks
Enabling deeper architectures with residual connections in Hopfield networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates residual connections in Hopfield networks
Uses clipped ReLU as the activation function
Enables training networks with nearly twice as many layers as prior EP works
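The first two points above concern how the network settles to equilibrium: states are iterated under a Hopfield-style update where the previous state is fed back in directly (a residual/skip term) and the clipped ReLU keeps activations bounded. A hedged sketch of such a relaxation, with hypothetical dynamics chosen for illustration (the paper's actual energy function and update schedule may differ):

```python
import numpy as np

def clipped_relu(x):
    # bounded activation: keeps states in [0, 1], so relaxation cannot diverge
    return np.clip(x, 0.0, 1.0)

def relax(W, x, s0, steps=100, dt=0.5):
    """Damped fixed-point iteration toward an equilibrium state.
    The candidate pre-activation adds the current state back in
    (the '+ s' residual / skip term), mirroring how skip connections
    enter the recurrent transform. Illustrative dynamics only."""
    s = s0
    for _ in range(steps):
        target = clipped_relu(W @ s + x + s)  # '+ s' is the skip connection
        s = s + dt * (target - s)             # damped update for stability
    return s

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))  # small recurrent weights (toy example)
x = np.array([0.2, -0.1, 0.3, 0.0])    # external input
s = relax(W, x, np.zeros(4))
```

The bounded activation is doing real work here: without the clip, the residual term would let states grow without limit, which is one intuition for why a truncated ReLU helps EP converge in deeper stacks.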
πŸ”Ž Similar Papers
No similar papers found.
S
Sankar Vinayak. E. P
Dept. of Computer Science and Engineering, Indian Institute of Technology, Madras
Gopalakrishnan Srinivasan
Assistant Professor at IIT Madras
RISC-V SoC · AI Accelerator Architectures · Deep Learning · Spiking Neural Networks