🤖 AI Summary
This work addresses the energy-consumption and throughput bottlenecks of large language model inference under low-rank adaptation (LoRA). The authors propose a processing-in-memory (PIM)-based heterogeneous acceleration architecture that integrates customized processing elements via a 2D-mesh interconnect. By co-designing optimized spatial mapping, dataflow scheduling, and a synergistic SRAM reprogramming and power gating (SRPG) mechanism, the architecture enables pipelined LoRA updates and sub-linear power scaling. Evaluated on Llama-13B with LoRA rank 8, the proposed design achieves 1.5× higher throughput and 25× better energy efficiency than an NVIDIA H100 GPU.
📝 Abstract
This paper presents PRIMAL, a processing-in-memory (PIM)-based large language model (LLM) inference accelerator with low-rank adaptation (LoRA). PRIMAL integrates heterogeneous PIM processing elements (PEs) interconnected by a 2D-mesh inter-PE computational network (IPCN). A novel SRAM reprogramming and power gating (SRPG) scheme enables pipelined LoRA updates and sub-linear power scaling by overlapping reconfiguration with computation and gating idle resources. PRIMAL employs optimized spatial mapping and dataflow orchestration to minimize communication overhead, achieving $1.5\times$ higher throughput and $25\times$ better energy efficiency than an NVIDIA H100 with LoRA rank 8 (Q,V) on Llama-13B.
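For context, the LoRA update the accelerator pipelines has a simple mathematical form: a frozen base projection $W_0 x$ plus a rank-$r$ correction $\frac{\alpha}{r} B A x$ applied to the Q and V projections. The sketch below is an illustrative NumPy rendering of that computation (it is not PRIMAL's hardware dataflow); the function name, dimensions, and the $\alpha$ value are assumptions for the example.

```python
import numpy as np

def lora_linear(x, W0, A, B, alpha=16.0):
    """LoRA-adapted linear layer: y = x @ W0.T + (alpha/r) * x @ A.T @ B.T.

    W0: frozen base weight, shape (d_out, d_in)
    A:  low-rank down-projection, shape (r, d_in)
    B:  low-rank up-projection, shape (d_out, r)
    The rank-r product B @ A is the adapter; alpha/r scales its contribution.
    """
    r = A.shape[0]
    return x @ W0.T + (alpha / r) * (x @ A.T) @ B.T

# Toy dimensions for illustration (rank 8, as in the paper's evaluation).
rng = np.random.default_rng(0)
d, r = 64, 8
x = rng.standard_normal((1, d))
W0 = rng.standard_normal((d, d))
A = rng.standard_normal((r, d))
B = np.zeros((d, r))  # B is conventionally zero-initialized,
                      # so the adapter starts as a no-op update.
assert np.allclose(lora_linear(x, W0, A, B), x @ W0.T)
```

Note that the base term `x @ W0.T` and the low-rank term `(x @ A.T) @ B.T` are independent, which is what makes overlapping adapter updates with base-weight computation (as SRPG does) possible.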