Convergence of energy-based learning in linear resistive networks

📅 2025-03-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Energy-based learning algorithms, such as Contrastive Learning, lack a rigorous convergence theory when implemented on analog hardware (e.g., networks of tunable linear resistors), which hinders their reliable deployment in neuromorphic and compute-in-memory systems. Method: This work establishes an exact equivalence between Contrastive Learning dynamics on linear resistive networks and projected gradient descent on a convex function. Contribution/Results: The equivalence yields a convergence guarantee for the algorithm for any step size. This is a first step toward a rigorous convergence theory for energy-based learning on distributed analog hardware, strengthening the case for deploying such algorithms in brain-inspired computing architectures.

📝 Abstract
Energy-based learning algorithms are alternatives to backpropagation and are well-suited to distributed implementations in analog electronic devices. However, a rigorous theory of convergence is lacking. We make a first step in this direction by analysing a particular energy-based learning algorithm, Contrastive Learning, applied to a network of linear adjustable resistors. It is shown that, in this setup, Contrastive Learning is equivalent to projected gradient descent on a convex function, for any step size, giving a guarantee of convergence for the algorithm.
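The convergence mechanism the abstract describes can be illustrated with a generic numerical sketch. The code below runs projected gradient descent on a convex quadratic with the iterates projected onto the nonnegative orthant, a stand-in for the physical constraint that conductances remain nonnegative. This is not the paper's algorithm: the objective, matrix, vector, and step size are all invented for demonstration, and unlike the paper's any-step-size result, this generic sketch uses a conservatively small step.

```python
import numpy as np

def projected_gradient_descent(A, b, step, iters=500):
    """Minimize the convex quadratic f(g) = 0.5 g^T A g - b^T g
    subject to g >= 0, via projected gradient descent."""
    g = np.zeros(len(b))
    for _ in range(iters):
        grad = A @ g - b                       # gradient of the quadratic
        g = np.maximum(g - step * grad, 0.0)   # gradient step, then projection
    return g

# Hypothetical problem data (positive definite A makes f convex).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -0.5])

g_star = projected_gradient_descent(A, b, step=0.1)
```

For this example the iterates settle at the constrained minimizer (0.5, 0): the unconstrained optimum has a negative second component, so the projection pins that coordinate to zero, exactly the kind of fixed point a convergence guarantee certifies.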
Problem

Research questions and friction points this paper is trying to address.

Lack of a rigorous convergence theory for energy-based learning algorithms.
Analysis of Contrastive Learning in linear resistive networks.
Equivalence to projected gradient descent ensuring algorithm convergence.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Energy-based learning in linear resistive networks
Contrastive Learning as projected gradient descent
Convergence guarantee for any step size
Anne-Men Huijzer
Rijksuniversiteit Groningen, CogniGron
Systems and Control theory
Thomas Chaffey
University of Sydney
control theory, system theory, control
Bart Besselink
University of Groningen
Systems and control
Henk J. van Waarde
Bernoulli Institute for Mathematics, Computer Science, and Artificial Intelligence, University of Groningen, Groningen, The Netherlands