AI Summary
To address memory bandwidth bottlenecks and high computational overhead in large language model (LLM) inference, this work proposes a hybrid-precision acceleration architecture leveraging NPU-PIM integration. Methodologically, it introduces: (1) a flexible mixed-precision quantization scheme that uses hybrid numerical formats to balance accuracy and compression efficiency; (2) lightweight low-precision PIM compute units co-designed with the NPU to significantly reduce DRAM area and power overhead; and (3) joint quantization of the KV cache, weights, and activations, coupled with operator fusion to minimize runtime dequantization overhead. Evaluated on mainstream LLMs, the architecture achieves average speedups of 4.9×, 2.0×, and 3.4× over HBM-PIM, Ecco, and Pimba, respectively, while maintaining state-of-the-art quantization accuracy and energy efficiency.
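As a rough, hypothetical illustration of the kind of mixed-precision scheme summarized above (the paper's exact numerical formats, group sizes, and outlier policy are not reproduced here; the INT4 choice, group size of 128, and outlier rule below are assumptions for the sketch), a per-group quantizer with an FP16 fallback for outlier-heavy groups might look like:

```python
import torch

def quantize_group(x: torch.Tensor, bits: int = 4):
    """Symmetric per-group quantization of a 1-D group to `bits`-wide integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale

def mixed_precision_quantize(tensor: torch.Tensor, group_size: int = 128,
                             outlier_threshold: float = 6.0):
    """Quantize most groups to low precision, keep outlier-heavy groups in FP16.

    Returns a list of (is_fp16, payload) per group -- a toy stand-in for the
    hybrid-format packing an NPU-PIM system would actually use.
    """
    packed = []
    for g in tensor.flatten().split(group_size):
        # Hypothetical outlier rule: a large dynamic range keeps the group in FP16.
        if g.abs().max() > outlier_threshold * g.abs().mean().clamp(min=1e-8):
            packed.append((True, g.to(torch.float16)))
        else:
            packed.append((False, quantize_group(g)))
    return packed
```

The intent of such a scheme is that the bulk of the data stays in a compact low-precision format while the few groups that would lose accuracy remain in FP16.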
Abstract
The substantial memory bandwidth and computational demands of large language models (LLMs) present critical challenges for efficient inference. To tackle this, the literature has explored heterogeneous systems that combine neural processing units (NPUs) with DRAM-based processing-in-memory (PIM) for LLM acceleration. However, existing high-precision (e.g., FP16) PIM compute units incur significant area and power overhead in DRAM technology, limiting the effective computation throughput. In this paper, we introduce P3-LLM, a novel NPU-PIM integrated accelerator for LLM inference using hybrid numerical formats. Our approach is threefold: First, we propose a flexible mixed-precision quantization scheme, which leverages hybrid numerical formats to quantize different LLM operands with high compression efficiency and minimal accuracy loss. Second, we architect an efficient PIM accelerator for P3-LLM, featuring enhanced compute units to support hybrid numerical formats. Our careful choice of numerical formats allows us to co-design low-precision PIM compute units that significantly boost the computation throughput under iso-area constraints. Third, we optimize the low-precision dataflow of different LLM modules by applying operator fusion to minimize the overhead of runtime dequantization. Evaluation on a diverse set of representative LLMs and tasks demonstrates that P3-LLM achieves state-of-the-art accuracy in terms of both KV-cache quantization and weight-activation quantization. Combining the proposed quantization scheme with PIM architecture co-design, P3-LLM yields an average of $4.9\times$, $2.0\times$, and $3.4\times$ speedups over the state-of-the-art LLM accelerators HBM-PIM, Ecco, and Pimba, respectively. Our quantization code is available at https://github.com/yc2367/P3-LLM.git.
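To make the operator-fusion point concrete, the toy sketch below (not the paper's kernels; the per-output-channel INT8 weight layout and the helper name are assumptions) folds the dequantization scales into the output of an integer-weight matmul rather than materializing an FP16 copy of the weights first:

```python
import torch

def fused_dequant_matmul(activations: torch.Tensor,
                         weight_q: torch.Tensor,
                         weight_scales: torch.Tensor) -> torch.Tensor:
    """Toy fused dequantize+matmul.

    activations:   [batch, in]  FP16/FP32 activations
    weight_q:      [out, in]    int8 quantized weights
    weight_scales: [out]        per-output-channel scales
    """
    # Integer-weight GEMM accumulated in FP32 (a real PIM compute unit would do
    # this in hardware without a separate dequantization pass over the weights).
    acc = activations.float() @ weight_q.float().t()
    # Apply the scales once on the small output tensor, instead of scaling the
    # full weight matrix back to FP16 in memory before the matmul.
    return (acc * weight_scales.float()).to(activations.dtype)
```

The design point being illustrated is simply that dequantization cost scales with the output rather than with the (much larger) weight tensor when it is fused into the matmul epilogue.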