AI Summary
Equilibrium Propagation (EP) scales poorly to deep networks and significantly lags behind backpropagation (BP) in performance. To address this, we propose Hopfield-ResNet: a novel architecture integrating Hopfield network dynamics, residual connections, and truncated ReLU activations to enhance training stability and convergence under EP. Furthermore, we design a biologically plausible yet computationally efficient EP variant based on local gradients. Experiments demonstrate that our approach nearly doubles the maximum trainable depth for EP, scaling it to deeper architectures than previously feasible. On CIFAR-10, Hopfield-ResNet achieves 93.92% test accuracy, improving upon the prior state-of-the-art EP result by approximately 3.5 percentage points. This substantially narrows the performance gap with comparably sized BP-trained networks, marking a significant step toward scalable, biologically inspired deep learning.
Abstract
Equilibrium propagation has been proposed as a biologically plausible alternative to the backpropagation algorithm. The local nature of its gradient computations, combined with the use of convergent RNNs to reach equilibrium states, makes this approach well-suited for implementation on neuromorphic hardware. However, previous studies on equilibrium propagation have been restricted to networks containing only dense layers or relatively small architectures with a few convolutional layers followed by a final dense layer. These networks exhibit a significant gap in accuracy compared to similarly sized feedforward networks trained with backpropagation. In this work, we introduce the Hopfield-Resnet architecture, which incorporates residual (or skip) connections in Hopfield networks with clipped $\mathrm{ReLU}$ as the activation function. The proposed architectural enhancements enable the training of networks with nearly twice the number of layers reported in prior works. For example, Hopfield-Resnet13 achieves 93.92% accuracy on CIFAR-10, which is $\approx$3.5% higher than the previous best result and comparable to that provided by Resnet13 trained using backpropagation.
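The two architectural ingredients named above can be sketched concretely. The following is a minimal illustration, not the paper's implementation: the clipped (truncated) ReLU bounds activations between 0 and an upper cap, and a residual connection adds a layer's input back to its transformed output. The cap value of 1.0 and the function names are assumptions for illustration only.

```python
import numpy as np

def clipped_relu(x, cap=1.0):
    # Truncated ReLU: min(max(x, 0), cap).
    # The cap of 1.0 is an illustrative choice, not necessarily the paper's.
    return np.minimum(np.maximum(x, 0.0), cap)

def residual_step(x, w, b):
    # Hypothetical residual (skip) update: the input x is added to the
    # pre-activation before the clipped nonlinearity is applied.
    return clipped_relu(x @ w + b + x)

# Example: the clipped activation saturates at both 0 and the cap.
out = clipped_relu(np.array([-2.0, 0.5, 3.0]))
print(out)  # [0.  0.5 1. ]
```

Bounding the activation on both sides is what keeps the recurrent Hopfield dynamics stable as depth grows, while the skip term preserves gradient flow, mirroring the role of residual connections in feedforward ResNets.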