CompAir: Synergizing Complementary PIMs and In-Transit NoC Computation for Efficient LLM Acceleration

📅 2025-09-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the energy-efficiency and performance bottlenecks that the memory wall imposes on large language model (LLM) inference, this paper proposes CompAir, a hybrid in-memory computing (IMC) architecture that integrates DRAM-based and SRAM-based Processing-in-Memory (PIM) and augments them with in-transit computation in the network-on-chip (CompAir-NoC). CompAir employs multi-granularity data paths and a hierarchical instruction set to balance operator diversity, computational flexibility, and hardware efficiency. It leverages hybrid bonding to achieve high-bandwidth integration of the heterogeneous PIMs and embeds programmable ALUs within the NoC to perform non-linear operations during data migration. Evaluation shows that CompAir achieves 1.83–7.98× and 1.95–6.28× speedup over a state-of-the-art fully-PIM baseline in the prefill and decode phases, respectively. Against an A100 GPU coupled with HBM-based PIM, CompAir delivers 3.52× higher energy efficiency while maintaining comparable throughput.
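
The in-transit computation idea above can be made concrete with a small sketch. Everything here (`router_alu`, `send`, the hop names) is a hypothetical model written for illustration, not CompAir's actual router microarchitecture; it only shows how an embedded ALU can absorb a non-linear operation into a NoC hop so the destination PIM receives already-activated values.

```python
import math

# Hypothetical model: a NoC hop whose embedded ALU applies a non-linear
# op to the payload in transit, hiding the work under communication.

def router_alu(payload, op):
    """Apply a non-linear op inside the router instead of at an endpoint."""
    if op == "silu":   # SiLU activation, common in LLaMA-style FFNs
        return [x / (1.0 + math.exp(-x)) for x in payload]
    if op == "exp":    # the exponentiation step of softmax
        return [math.exp(x) for x in payload]
    return payload     # plain forwarding, no computation

def send(route, payload, ops_at_hops):
    """Forward a payload hop by hop; designated hops compute as it passes."""
    for hop in route:
        payload = router_alu(payload, ops_at_hops.get(hop, "none"))
    return payload

# DRAM-PIM bank -> router r0 -> router r1 -> SRAM-PIM tile; hop r1
# applies SiLU during migration, so no separate activation pass is needed.
print(send(["r0", "r1", "dst"], [0.5, -1.2, 2.0], {"r1": "silu"}))
```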

πŸ“ Abstract
The rapid advancement of Large Language Models (LLMs) has revolutionized various aspects of human life, yet their immense computational and energy demands pose significant challenges for efficient inference. The memory wall, the growing processor-memory speed disparity, remains a critical bottleneck for LLMs. Processing-in-Memory (PIM) architectures overcome this limitation by co-locating compute units with memory, leveraging 5-20× higher internal bandwidth and enabling greater energy efficiency than GPUs. However, existing PIMs struggle to balance flexibility, performance, and cost-efficiency for LLMs' dynamic memory-compute patterns and operator diversity. DRAM-PIM suffers from inter-bank communication overhead despite its vector parallelism. SRAM-PIM offers sub-10ns latency for matrix operations but is constrained by limited capacity. This work introduces CompAir, a novel PIM architecture that integrates DRAM-PIM and SRAM-PIM with hybrid bonding, enabling efficient linear computations while unlocking multi-granularity data pathways. We further develop CompAir-NoC, an advanced network-on-chip with an embedded arithmetic logic unit that performs non-linear operations during data movement, simultaneously reducing communication overhead and area cost. Finally, we develop a hierarchical Instruction Set Architecture that ensures both flexibility and programmability of the hybrid PIM. Experimental results demonstrate that CompAir achieves 1.83-7.98× prefill and 1.95-6.28× decode improvement over the current state-of-the-art fully-PIM architecture. Compared to the hybrid A100 and HBM-PIM system, CompAir achieves a 3.52× energy consumption reduction with comparable throughput. This work represents the first systematic exploration of hybrid DRAM-PIM and SRAM-PIM architectures with in-network computation capabilities, offering a high-efficiency solution for LLMs.
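
To make the DRAM-PIM versus SRAM-PIM trade-off described above concrete, here is a minimal placement sketch, assuming the division of labor the abstract implies: SRAM-PIM for latency-critical matrix tiles that fit in its limited capacity, DRAM-PIM for capacity- and bandwidth-bound vector work, and the NoC ALUs for non-linear operations. The capacity constant and the `place_operator` heuristic are assumptions for illustration, not the paper's actual mapping policy.

```python
# Assumed per-tile SRAM-PIM capacity, chosen only for the example.
SRAM_PIM_CAPACITY_BYTES = 4 * 1024 * 1024

def place_operator(op_type, working_set_bytes):
    """Pick an execution engine for one operator (illustrative heuristic)."""
    if op_type == "nonlinear":
        # Softmax/activations: compute in transit on the NoC-embedded ALUs.
        return "noc-alu"
    if op_type == "matmul" and working_set_bytes <= SRAM_PIM_CAPACITY_BYTES:
        # Dense, latency-critical tiles that fit on-chip (sub-10ns SRAM-PIM).
        return "sram-pim"
    # Everything else stays in high-capacity, vector-parallel DRAM-PIM.
    return "dram-pim"

print(place_operator("matmul", 2 * 1024 * 1024))    # -> sram-pim
print(place_operator("matmul", 64 * 1024 * 1024))   # -> dram-pim
print(place_operator("nonlinear", 0))               # -> noc-alu
```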
Problem

Research questions and friction points this paper is trying to address.

Overcoming the memory-wall bottleneck in LLM inference
Balancing flexibility and efficiency in PIM architectures
Integrating heterogeneous PIM types with in-network computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid DRAM-PIM and SRAM-PIM integration via hybrid bonding
NoC with embedded ALUs for in-transit computation
Hierarchical ISA for programmability (sketched below)
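
A hierarchical ISA of the kind listed above can be pictured as coarse macro-ops that get lowered into per-engine micro-ops. Below is a minimal sketch under that assumption; the mnemonics, the `lower` function, and the three-engine split are hypothetical, not CompAir's published instruction set.

```python
# Hypothetical two-level lowering: one coarse macro-op expands into
# micro-ops targeted at the three engines a hybrid PIM exposes.

def lower(macro_op):
    """Lower one macro instruction into (engine, micro-op) pairs."""
    if macro_op == "ATTENTION_BLOCK":
        return [
            ("dram-pim", "GEMV qkv_proj"),   # bandwidth-bound projections
            ("noc-alu",  "EXP scores"),      # softmax exp while data moves
            ("sram-pim", "GEMM attn_out"),   # dense low-latency matmul
        ]
    return [("dram-pim", macro_op)]          # default: run in DRAM-PIM

for engine, uop in lower("ATTENTION_BLOCK"):
    print(f"{engine:9s} {uop}")
```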
Hongyi Li
Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
Songchen Ma
The Hong Kong University of Science and Technology, Hong Kong, China
Huanyu Qu
University of Macau, Guangdong Institute of Intelligence Science and Technology, Macau, China
Weihao Zhang
Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
Jia Chen
The Hong Kong University of Science and Technology, Hong Kong, China
Junfeng Lin
Tsinghua University
Parallel Computing · Machine Learning System
Fengbin Tu
Assistant Professor at HKUST
AI Chip · Computing-in-Memory · Computer Architecture · Reconfigurable Computing
Rong Zhao
Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China