PRIMAL: Processing-In-Memory Based Low-Rank Adaptation for LLM Inference Accelerator

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high energy consumption and low throughput bottlenecks of large language models during low-rank adaptation (LoRA) inference. The authors propose a processing-in-memory (PIM)-based heterogeneous acceleration architecture that integrates customized processing elements via a 2D mesh interconnect. By co-designing optimized spatial mapping, dataflow scheduling, and a novel synergistic mechanism combining SRAM reprogramming with power gating (SRPG), the architecture enables pipelined LoRA updates and sublinear power scaling. Evaluated on the Llama-13B model with LoRA rank 8, the proposed design achieves 1.5× higher throughput and 25× better energy efficiency compared to an NVIDIA H100 GPU.

📝 Abstract
This paper presents PRIMAL, a processing-in-memory (PIM) based inference accelerator for large language models (LLMs) with low-rank adaptation (LoRA). PRIMAL integrates heterogeneous PIM processing elements (PEs) interconnected by a 2D-mesh inter-PE computational network (IPCN). A novel SRAM reprogramming and power gating (SRPG) scheme enables pipelined LoRA updates and sub-linear power scaling by overlapping reconfiguration with computation and gating idle resources. PRIMAL employs optimized spatial mapping and dataflow orchestration to minimize communication overhead, achieving $1.5\times$ higher throughput and $25\times$ higher energy efficiency than an NVIDIA H100 with LoRA rank 8 (Q,V) on Llama-13B.
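For context on the workload the abstract describes, here is a minimal sketch of a LoRA-adapted linear layer in NumPy. This illustrates only the mathematical structure of LoRA (base projection plus a rank-r update applied to projections such as Q and V); it does not model PRIMAL's PIM hardware, SRPG scheme, or dataflow. All names and values below (e.g. `d_in`, `alpha`) are illustrative assumptions, with the rank set to 8 to match the paper's evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16  # rank 8, as in the paper's Q/V setup (other values assumed)

W = rng.standard_normal((d_out, d_in))     # frozen base projection weight
A = rng.standard_normal((r, d_in)) * 0.01  # low-rank down-projection (trainable)
B = np.zeros((d_out, r))                   # low-rank up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank update: y = W x + (alpha / r) * B (A x).
    # The adapter adds only r * (d_in + d_out) weights per projection,
    # which is what makes swapping/updating adapters at inference cheap.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
# With B zero-initialized, the LoRA branch contributes nothing yet,
# so the adapted layer matches the frozen base layer exactly.
assert np.allclose(y, W @ x)
```

The small rank (8) relative to the hidden dimension is what the paper exploits: the adapter matrices are tiny enough that pipelined updates and power gating of idle resources become practical.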
Problem

Research questions and friction points this paper is trying to address.

Processing-in-Memory
Large Language Model
Low-Rank Adaptation
Inference Acceleration
Hardware Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Processing-in-Memory
Low-Rank Adaptation
PIM Accelerator
SRAM Reprogramming
Energy Efficiency
Yue Jiet Chong
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Yimin Wang
National University of Singapore | Fudan University | Jilin University
Circuits and Systems, In-Memory Computing, AI Accelerator, Hardware/Software Co-Design
Zhen Wu
Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Xuanyao Fong
National University of Singapore
Hardware-software co-design, emerging technologies, compact modeling, electronics simulations