AI Summary
To address the energy-efficiency and performance bottlenecks that the memory wall imposes on large language model (LLM) inference, this paper proposes CompAir, a hybrid in-memory computing (IMC) architecture that integrates DRAM-based Processing-in-Memory (PIM) and SRAM-based PIM and augments them with in-network computation (CompAir-NoC). CompAir employs multi-granularity data paths and a hierarchical instruction set to balance operator diversity, computational flexibility, and hardware efficiency. It leverages hybrid bonding to achieve high-bandwidth heterogeneous PIM integration and embeds programmable ALUs within the NoC to perform non-linear operations during data movement. Evaluation shows that CompAir achieves 1.83–7.98× and 1.95–6.28× speedups over pure-PIM baselines in the prefill and decoding phases, respectively. Against an A100 GPU coupled with HBM-based PIM, CompAir delivers 3.52× higher energy efficiency at comparable throughput.
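To make the division of labor concrete, here is a minimal Python sketch of how one decoder layer's operators might be dispatched across the three compute substrates described above. The unit names, operator list, and placement policy are illustrative assumptions, not CompAir's actual programming interface.

```python
# Hypothetical dispatch sketch for a CompAir-style hybrid PIM.
# Unit names and the placement rules below are assumptions for
# illustration only; the paper's real interface may differ.
from enum import Enum, auto

class Unit(Enum):
    DRAM_PIM = auto()   # high-capacity, bandwidth-bound vector ops
    SRAM_PIM = auto()   # low-latency, capacity-limited matrix ops
    NOC_ALU  = auto()   # non-linear ops fused into data movement

def place(op: str, batch: int) -> Unit:
    """Toy placement policy: non-linear ops ride the NoC ALUs; large-batch
    (prefill-like) projections go to SRAM-PIM; single-token (decode-like)
    GEMVs stay next to the weights in DRAM-PIM."""
    if op in {"softmax", "layernorm", "gelu"}:
        return Unit.NOC_ALU
    if op in {"qkv_proj", "attn_out", "ffn"} and batch > 1:
        return Unit.SRAM_PIM
    return Unit.DRAM_PIM

if __name__ == "__main__":
    layer_ops = ["qkv_proj", "softmax", "attn_out", "layernorm", "ffn", "gelu"]
    for phase, batch in [("prefill", 512), ("decode", 1)]:
        print(phase, {op: place(op, batch).name for op in layer_ops})
```

Under these assumptions, the same layer maps differently in the two phases: prefill's batched matmuls land on SRAM-PIM, while decode's GEMVs remain in DRAM-PIM, with the non-linear operators handled in transit either way.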
Abstract
The rapid advancement of Large Language Models (LLMs) has revolutionized various aspects of human life, yet their immense computational and energy demands pose significant challenges for efficient inference. The memory wall, the growing disparity between processor and memory speed, remains a critical bottleneck for LLM inference. Processing-in-Memory (PIM) architectures overcome this bottleneck by co-locating compute units with memory, leveraging 5-20$\times$ higher internal bandwidth and enabling greater energy efficiency than GPUs. However, existing PIMs struggle to balance flexibility, performance, and cost-efficiency under LLMs' dynamic memory-compute patterns and operator diversity. DRAM-PIM suffers from inter-bank communication overhead despite its vector parallelism, while SRAM-PIM offers sub-10ns latency for matrix operations but is constrained by limited capacity. This work introduces CompAir, a novel PIM architecture that integrates DRAM-PIM and SRAM-PIM through hybrid bonding, enabling efficient linear computation while unlocking multi-granularity data pathways. We further develop CompAir-NoC, an advanced network-on-chip with embedded arithmetic logic units that perform non-linear operations during data movement, simultaneously reducing communication overhead and area cost. Finally, we design a hierarchical Instruction Set Architecture that ensures both the flexibility and the programmability of the hybrid PIM. Experimental results demonstrate that CompAir achieves 1.83-7.98$\times$ prefill and 1.95-6.28$\times$ decoding speedup over the current state-of-the-art fully-PIM architecture. Compared to a hybrid A100 GPU and HBM-PIM system, CompAir reduces energy consumption by 3.52$\times$ at comparable throughput. This work represents the first systematic exploration of hybrid DRAM-PIM and SRAM-PIM architectures with in-network computation, offering a high-efficiency solution for LLM inference.
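As a rough illustration of what a hierarchical ISA for such a hybrid PIM could look like, the following Python sketch expands a coarse macro-instruction into per-unit micro-ops, with the non-linear step assigned to NoC ALUs along the reduction path. All field names, opcodes, and the expansion rule are hypothetical, offered only to show the two-level structure the abstract describes.

```python
# Hypothetical two-level (hierarchical) instruction sketch: a coarse
# macro-instruction is expanded into per-unit micro-ops. Field names
# and expansion rules are assumptions, not the paper's actual ISA.
from dataclasses import dataclass

@dataclass
class MacroInst:
    op: str      # e.g. "GEMV_THEN_SOFTMAX"
    rows: int    # output rows, sliced across DRAM-PIM banks
    banks: int   # number of participating banks

@dataclass
class MicroOp:
    unit: str    # "DRAM_PIM" or "NOC_ALU"
    op: str
    slot: int    # which bank / router slot executes it

def expand(m: MacroInst) -> list[MicroOp]:
    """Expand one macro-instruction: each bank computes a row slice of
    the GEMV, and the softmax is applied by NoC ALUs while the partial
    results are routed toward the reduction point (no extra round trip)."""
    rs = m.rows // m.banks  # rows per bank slice
    micro = [MicroOp("DRAM_PIM", f"gemv rows[{b * rs}:{(b + 1) * rs}]", b)
             for b in range(m.banks)]
    micro += [MicroOp("NOC_ALU", "softmax_on_route", b)
              for b in range(m.banks)]
    return micro

if __name__ == "__main__":
    for u in expand(MacroInst("GEMV_THEN_SOFTMAX", rows=4096, banks=4)):
        print(u)
```

The point of the hierarchy, in this reading, is that software targets the stable macro level while the expansion layer absorbs the heterogeneity of the DRAM-PIM, SRAM-PIM, and in-network units underneath.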