🤖 AI Summary
Energy-based learning algorithms, such as Contrastive Learning, lack a rigorous convergence theory when implemented on analog hardware (e.g., networks of tunable linear resistors), hindering their reliable deployment in neuromorphic and compute-in-memory systems.
Method: This work analyses Contrastive Learning applied to a network of linear adjustable resistors and establishes an exact equivalence between its learning dynamics and projected gradient descent on a convex function.
Contribution/Results: Because the equivalence holds for any step size, the iterative updates are guaranteed to converge. This fills a gap in the convergence analysis of energy-based learning on distributed analog hardware and provides a formally grounded analytical framework for such algorithms in physical implementations, strengthening the case for their deployment in brain-inspired computing architectures.
📝 Abstract
Energy-based learning algorithms are alternatives to backpropagation and are well-suited to distributed implementations in analog electronic devices. However, a rigorous theory of convergence is lacking. We make a first step in this direction by analysing a particular energy-based learning algorithm, Contrastive Learning, applied to a network of linear adjustable resistors. It is shown that, in this setup, Contrastive Learning is equivalent to projected gradient descent on a convex function, for any step size, giving a guarantee of convergence for the algorithm.
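The setup described in the abstract can be illustrated with a small numerical sketch. Everything below is an illustrative assumption rather than the paper's actual construction: the network size, the nudging parameter `beta`, the learning rate, and the conductance floor `g_min` (which plays the role of the projection onto feasible conductances) are all made up for the demo. The free and clamped phases are each a Laplacian solve, and the conductance update compares squared branch voltage drops between the two phases.

```python
import numpy as np

def solve_voltages(g, clamped):
    """Equilibrium voltages of a linear resistor network.

    g: symmetric conductance matrix (zero diagonal);
    clamped: dict {node: imposed voltage}. Free-node voltages
    satisfy Kirchhoff's current law, i.e. a Laplacian solve.
    """
    n = g.shape[0]
    L = np.diag(g.sum(axis=1)) - g          # weighted graph Laplacian
    v = np.zeros(n)
    for i, vi in clamped.items():
        v[i] = vi
    free = [i for i in range(n) if i not in clamped]
    cl = list(clamped)
    v[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, cl)] @ v[cl])
    return v

def contrastive_step(g, inputs, out, target, beta=0.5, lr=0.1, g_min=1e-3):
    """One Contrastive Learning update, projected onto g >= g_min."""
    v_free = solve_voltages(g, inputs)
    # Clamped phase: nudge the output node toward its target.
    clamped = dict(inputs)
    clamped[out] = v_free[out] + beta * (target - v_free[out])
    v_clamp = solve_voltages(g, clamped)
    # Contrastive rule: compare squared branch voltage drops of the two
    # phases, then project onto the feasible (positive) conductances.
    dv_f = v_free[:, None] - v_free[None, :]
    dv_c = v_clamp[:, None] - v_clamp[None, :]
    g = np.maximum(g - (lr / (2 * beta)) * (dv_c**2 - dv_f**2), g_min)
    np.fill_diagonal(g, 0.0)
    return g, v_free

# Tiny demo: 4 fully connected nodes; node 0 held at 1 V, node 1 grounded,
# node 3 is the output, trained toward 0.3 V.
g = np.ones((4, 4)) - np.eye(4)
inputs = {0: 1.0, 1: 0.0}
for _ in range(500):
    g, v = contrastive_step(g, inputs, out=3, target=0.3)
v_out = v[3]
print(f"output voltage after training: {v_out:.3f}")
```

The projection step (`np.maximum(..., g_min)`) is what keeps the conductances physically realizable, mirroring the "projected" part of the projected gradient descent characterization in the abstract; the theory's point is that this scheme converges for any step size, though this toy uses a modest one.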