π€ AI Summary
In analog in-memory computing (AIMC) hardware, device-level noise and asymmetric pulse responses of resistive elements induce implicit gradient penalties, compromising training stability and hindering convergence.
Method: This work establishes the first theoretical analysis framework for gradient-based training in AIMC, rigorously proving that the implicit penalty arises from nonlinear distortion of the device response function during weight updates. Building on this insight, we propose Tiki-Takaβa dual-array residual learning algorithm that jointly leverages analog-domain pulse-driven updates and two-level optimization to achieve precise, stable convergence to critical points under realistic device nonidealities.
Contribution/Results: Both theoretical analysis and circuit-level simulations demonstrate that Tiki-Taka effectively suppresses the implicit penalty effect, outperforming standard analog stochastic gradient descent (Analog SGD) in convergence behavior under typical asymmetric and noisy device responses. The method provides a provably sound and hardware-deployable paradigm for native training on AIMC accelerators.
π Abstract
As the economic and environmental costs of training and deploying large vision or language models increase dramatically, analog in-memory computing (AIMC) emerges as a promising energy-efficient solution. However, the training perspective, especially its training dynamic, is underexplored. In AIMC hardware, the trainable weights are represented by the conductance of resistive elements and updated using consecutive electrical pulses. Among all the physical properties of resistive elements, the response to the pulses directly affects the training dynamics. This paper first provides a theoretical foundation for gradient-based training on AIMC hardware and studies the impact of response functions. We demonstrate that noisy update and asymmetric response functions negatively impact Analog SGD by imposing an implicit penalty term on the objective. To overcome the issue, Tiki-Taka, a residual learning algorithm, converges exactly to a critical point by optimizing a main array and a residual array bilevelly. The conclusion is supported by simulations validating our theoretical insights.