Efficient Deployment of Transformer Models in Analog In-Memory Computing Hardware

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Transformer deployment on analog in-memory computing (AIMC) hardware suffers from inflexibility: static analog weight mapping necessitates full-model retraining and costly hardware reprogramming for task adaptation. Method: We propose a digital-analog co-designed lightweight adaptation framework that eliminates analog-domain weight updates. Instead, it embeds digital low-rank adapters (LoRA-like) alongside the analog compute core, enabling on-chip task adaptation by fine-tuning only a small number of digital parameters. Contribution/Results: A single AIMC backbone supports multi-task sharing without retraining; on MobileBERT, our method matches or exceeds analog hardware-aware training (AHWA) in accuracy while reducing hardware adaptation latency by an order of magnitude. This significantly enhances the task flexibility and energy efficiency of AIMC platforms.

📝 Abstract
Analog in-memory computing (AIMC) has emerged as a promising solution to overcome the von Neumann bottleneck, accelerating neural network computations and improving computational efficiency. While AIMC has demonstrated success with architectures such as CNNs, MLPs, and RNNs, deploying transformer-based models using AIMC presents unique challenges. Transformers are expected to handle diverse downstream tasks and adapt to new user data or instructions after deployment, which requires more flexible approaches to suit AIMC constraints. In this paper, we propose a novel method for deploying pre-trained transformer models onto AIMC hardware. Unlike traditional approaches requiring hardware-aware training, our technique allows direct deployment without the need for retraining the original model. Instead, we utilize lightweight, low-rank adapters -- compact modules stored in digital cores -- to adapt the model to hardware constraints. We validate our approach on MobileBERT, demonstrating accuracy on par with, or even exceeding, a traditional hardware-aware training approach. Our method is particularly appealing in multi-task scenarios, as it enables a single analog model to be reused across multiple tasks. Moreover, it supports on-chip adaptation to new hardware constraints and tasks without updating analog weights, providing a flexible and versatile solution for real-world AI applications. Code is available.
Problem

Research questions and friction points this paper is trying to address.

Adapting transformers to analog in-memory computing hardware
Overcoming challenges of reprogramming analog devices efficiently
Enabling hardware and task adaptation without full retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank adapters for analog hardware adaptation
Fixed meta-weights with external lightweight modules
Hybrid architecture balancing analog and digital processing
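The hybrid scheme in the bullets above can be sketched in a few lines: the analog tile holds a frozen, noisily programmed weight matrix, while a small digital low-rank adapter (B @ A) is the only part that gets fine-tuned per task. This is a minimal illustrative sketch; the class name, the Gaussian programming-noise model, and the zero-initialized B are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


class AnalogLinearWithLoRA:
    """Sketch of a hybrid layer: fixed 'analog' weights plus a digital
    low-rank adapter. Only A and B would be trained for a new task."""

    def __init__(self, w, rank=4, noise_std=0.02):
        # Analog tile: programmed once with some write noise, never updated.
        # (Gaussian noise here is an illustrative assumption.)
        self.w_analog = w + noise_std * rng.standard_normal(w.shape)
        d_out, d_in = w.shape
        # Digital adapter, stored and updated in digital cores.
        self.A = 0.01 * rng.standard_normal((rank, d_in))
        self.B = np.zeros((d_out, rank))  # zero init: adapter starts as a no-op

    def forward(self, x):
        # Analog matrix-vector multiply (fixed, noisy) + digital correction.
        return self.w_analog @ x + self.B @ (self.A @ x)


layer = AnalogLinearWithLoRA(rng.standard_normal((8, 16)))
x = rng.standard_normal(16)
y = layer.forward(x)
```

With B zero-initialized, the adapter contributes nothing at first, so the layer initially behaves exactly like the noisy analog backbone; task adaptation then only touches the small digital matrices A and B, leaving the analog weights untouched.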