Analog In-memory Training on General Non-ideal Resistive Elements: The Impact of Response Functions

πŸ“… 2025-02-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In analog in-memory computing (AIMC) hardware, device-level noise and asymmetric pulse responses of resistive elements induce an implicit gradient penalty that compromises training stability and hinders convergence. Method: This work establishes a theoretical framework for gradient-based training on AIMC hardware, showing that the implicit penalty arises from the distortion the device response function imposes on pulsed weight updates. Building on this insight, the paper analyzes Tiki-Taka, a residual learning algorithm that optimizes a main array and a residual array in a bilevel fashion to converge exactly to critical points under realistic device nonidealities. Contribution/Results: Both theoretical analysis and simulations demonstrate that Tiki-Taka suppresses the implicit penalty and outperforms standard analog stochastic gradient descent (Analog SGD) under typical asymmetric and noisy device responses, providing a provably sound and hardware-deployable paradigm for native training on AIMC accelerators.

πŸ“ Abstract
As the economic and environmental costs of training and deploying large vision or language models increase dramatically, analog in-memory computing (AIMC) emerges as a promising energy-efficient solution. However, the training perspective, especially its training dynamics, is underexplored. In AIMC hardware, the trainable weights are represented by the conductance of resistive elements and updated using consecutive electrical pulses. Among all the physical properties of resistive elements, the response to the pulses directly affects the training dynamics. This paper first provides a theoretical foundation for gradient-based training on AIMC hardware and studies the impact of response functions. We demonstrate that noisy updates and asymmetric response functions negatively impact Analog SGD by imposing an implicit penalty term on the objective. To overcome this issue, Tiki-Taka, a residual learning algorithm, converges exactly to a critical point by optimizing a main array and a residual array in a bilevel manner. The conclusion is supported by simulations validating our theoretical insights.
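The abstract's two key ideas, an implicit penalty from noisy, asymmetric pulse updates and Tiki-Taka's main/residual array scheme, can be sketched numerically. The toy model below is not the paper's code: the linear saturating device model, the quadratic objective, and all step sizes and transfer intervals are illustrative assumptions.

```python
import numpy as np

def grad(w, target=1.5):
    # Gradient of the toy objective f(w) = 0.5 * (w - target)^2,
    # whose true minimizer is w = target.
    return w - target

def analog_update(w, g, lr, tau=2.0):
    # Assumed linear asymmetric device response: the realized conductance
    # change shrinks as the weight approaches its saturation bound +/- tau,
    # and the shrinkage depends on the pulse direction.
    step = -lr * g
    return w + step * (1.0 - np.sign(step) * w / tau)

rng = np.random.default_rng(0)

# Analog SGD: zero-mean gradient noise combined with the asymmetric
# response acts like an implicit penalty pulling the weight toward zero,
# so the iterate settles short of the minimizer at 1.5.
w = 0.0
for _ in range(2000):
    g = grad(w) + 0.5 * rng.standard_normal()
    w = analog_update(w, g, lr=0.05)

# Tiki-Taka (simplified sketch): a residual array P absorbs the noisy
# pulsed updates, and its content is periodically transferred into the
# main array W, which keeps drifting until the gradient at W vanishes.
W, P = 0.0, 0.0
for t in range(2000):
    g = grad(W) + 0.5 * rng.standard_normal()
    P = analog_update(P, g, lr=0.05)
    if t % 10 == 0:
        W += 0.1 * P  # transfer step
```

On this toy problem, plain Analog SGD stalls noticeably below the minimizer while the main array W of the Tiki-Taka sketch lands close to it, mirroring the abstract's claim that residual learning converges exactly to a critical point.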
Problem

Research questions and friction points this paper is trying to address.

How device response functions shape AIMC training dynamics
Why noisy, asymmetric pulse updates destabilize Analog SGD
Whether a residual learning scheme can converge exactly despite device nonidealities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical foundation for gradient-based training on AIMC hardware
Characterization of the implicit penalty induced by response functions
Convergence analysis of the Tiki-Taka residual learning algorithm
Zhaoxian Wu
Cornell University; Cornell Tech
Optimization · Deep Learning · Analog In-memory Computing
Quan Xian
Rensselaer Polytechnic Institute, Troy, NY 12180, US
Tayfun Gokmen
Princeton University, IBM T. J. Watson Research Center
Omobayode Fagbohungbe
IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, US
Tianyi Chen
Rensselaer Polytechnic Institute, Troy, NY 12180, US