🤖 AI Summary
Energy-consumption bottlenecks in AI training, termed the "update wall" and the "consolidation wall," hinder scalability, especially for brain-scale models. Method: We derive, for the first time, theoretical lower bounds on the energy dissipated by neuromorphic learning-in-memory (LIM) training under nonequilibrium conditions. By adaptively modulating the energy barriers of physical memories so that the dynamics of memory update and consolidation match the Lyapunov dynamics of gradient descent, LIM achieves energetically optimal coordination between the two. Extending Landauer's principle to system-level modeling of AI training, we combine Lyapunov stability analysis with a model-agnostic derivation of lower bounds. Contribution/Results: Our framework reveals a fundamental trade-off between energy consumption and training speed. For a brain-scale AI model (10¹⁵ parameters), the derived energy lower bound for LIM training is 10⁸–10⁹ J, six to seven orders of magnitude below projections based on state-of-the-art AI accelerator hardware.
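As a back-of-envelope sanity check on these figures (our own arithmetic, not taken from the paper): Landauer's principle fixes the minimum energy per irreversible bit operation at a given temperature, so the quoted 10⁸–10⁹ J budget can be translated into an implied count of bit-level operations at room temperature.

```latex
% Landauer floor per irreversible bit operation at T = 300 K
E_{\mathrm{bit}} = k_B T \ln 2
  \approx (1.38\times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
  \approx 2.9\times 10^{-21}\,\mathrm{J}

% Bit operations affordable within the quoted 10^8--10^9 J bound
N_{\mathrm{ops}} \approx \frac{10^{8}\ \text{to}\ 10^{9}\,\mathrm{J}}{2.9\times 10^{-21}\,\mathrm{J}}
  \approx 3.5\times 10^{28}\ \text{to}\ 3.5\times 10^{29}
```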
📝 Abstract
Learning-in-memory (LIM) is a recently proposed paradigm to overcome fundamental memory bottlenecks in training machine learning systems. While compute-in-memory (CIM) approaches can address the so-called memory-wall (i.e., the energy dissipated due to repeated memory-read accesses), they are agnostic to the energy dissipated by repeated memory writes at the precision required for training (the update-wall), and they do not account for the energy dissipated when transferring information between short-term and long-term memories (the consolidation-wall). The LIM paradigm proposes that these bottlenecks, too, can be overcome if the energy barrier of physical memories is adaptively modulated such that the dynamics of memory updates and consolidation match the Lyapunov dynamics of gradient-descent training of an AI model. In this paper, we derive new theoretical lower bounds on energy dissipation when training AI systems using different LIM approaches. The analysis presented here is model-agnostic and highlights the trade-off between energy efficiency and the speed of training. The resulting non-equilibrium energy-efficiency bounds are similar in flavor to Landauer's energy-dissipation bounds. We also extend these limits by taking into account the number of floating-point operations (FLOPs) used for training, the size of the AI model, and the precision of the training parameters. Our projections suggest that the energy-dissipation lower bound for training a brain-scale AI system (comprising $10^{15}$ parameters) using LIM is $10^8 \sim 10^9$ Joules, which is of the same order of magnitude as Landauer's adiabatic lower bound and $6$ to $7$ orders of magnitude lower than projections obtained using the energy-efficiency limits of state-of-the-art AI accelerator hardware.
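For readers who want to reproduce the scale of these numbers, below is a minimal sketch (our own illustration, not the paper's derivation) that converts between an energy budget and the number of irreversible bit operations it permits under Landauer's adiabatic limit. The function names and the 300 K operating temperature are our assumptions.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, by SI definition)

def landauer_floor(bit_ops: float, temp_k: float = 300.0) -> float:
    """Minimum energy (J) to perform `bit_ops` irreversible bit operations.

    Back-of-envelope helper based on Landauer's principle
    (E >= k_B * T * ln 2 per erased bit); not the paper's LIM-specific bound.
    """
    return bit_ops * K_B * temp_k * math.log(2)

def bit_ops_within(budget_j: float, temp_k: float = 300.0) -> float:
    """Irreversible bit operations affordable within an energy budget (J)."""
    return budget_j / (K_B * temp_k * math.log(2))

# The paper's quoted LIM lower bound for a 10^15-parameter (brain-scale) model:
for budget in (1e8, 1e9):
    print(f"{budget:.0e} J  ->  ~{bit_ops_within(budget):.1e} irreversible bit ops")
```

At 300 K this yields roughly $3.5\times 10^{28}$ to $3.5\times 10^{29}$ bit operations, matching the arithmetic check given after the summary above.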