PIM Is All You Need: A CXL-Enabled GPU-Free System for Large Language Model Inference

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address memory-bandwidth bottlenecks, low GPU utilization, and prohibitive costs of deploying million-token-context LLM inference, this paper introduces CENT—the first GPU-free, CXL-native near-memory inference system. Methodologically, CENT (1) establishes a bank-level near-memory compute architecture built on CXL 3.0, enabling a memory-centric execution paradigm driven by KV cache access; (2) designs KV-intensive near-memory parallelism and CXL-native collective communication primitives; and (3) integrates distributed KV caching, CXL peer-to-peer and collective communication, and pipelined model sharding. Experiments demonstrate that, at equal power consumption, CENT achieves 2.3× higher throughput and 2.3× lower energy consumption than GPU-based baselines, while delivering 5.2× greater tokens-per-dollar—enabling efficient, high-concurrency inference for million-token contexts.

📝 Abstract
Large Language Model (LLM) inference generates one token at a time in an autoregressive manner, which exhibits notably lower operational intensity compared to earlier Machine Learning (ML) models such as encoder-only transformers and Convolutional Neural Networks. At the same time, LLMs possess large parameter sizes and use key-value caches to store context information. Modern LLMs support context windows with up to 1 million tokens to generate versatile text, audio, and video content. A large key-value cache unique to each prompt requires a large memory capacity, limiting the inference batch size. Both low operational intensity and limited batch size necessitate a high memory bandwidth. However, contemporary hardware systems for ML model deployment, such as GPUs and TPUs, are primarily optimized for compute throughput. This mismatch challenges the efficient deployment of advanced LLMs and makes users pay for expensive compute resources that are poorly utilized for the memory-bound LLM inference tasks. We propose CENT, a CXL-ENabled GPU-Free sysTem for LLM inference, which harnesses CXL memory expansion capabilities to accommodate substantial LLM sizes, and utilizes near-bank processing units to deliver high memory bandwidth, eliminating the need for expensive GPUs. CENT exploits a scalable CXL network to support peer-to-peer and collective communication primitives across CXL devices. We implement various parallelism strategies to distribute LLMs across these devices. Compared to GPU baselines with maximum supported batch sizes and similar average power, CENT achieves 2.3× higher throughput and consumes 2.3× less energy. CENT enhances the Total Cost of Ownership (TCO), generating 5.2× more tokens per dollar than GPUs.
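The abstract's claims about low operational intensity and KV-cache capacity pressure can be illustrated with a back-of-the-envelope sketch. The model shapes and fp16 assumptions below are illustrative, not taken from the paper:

```python
# Illustrative arithmetic (assumptions, not from the paper) showing why
# autoregressive decode is memory-bound and why million-token KV caches
# limit batch size.

def operational_intensity_gemv(bytes_per_param: float = 2.0) -> float:
    """FLOPs per byte for a matrix-vector product.

    Decoding one token turns each weight matrix multiply into a GEMV:
    an (n x n) matrix does ~2*n*n FLOPs while streaming ~n*n parameters
    from memory, so intensity is ~2 / bytes_per_param (fp16 assumed).
    """
    return 2.0 / bytes_per_param

def kv_cache_bytes(num_layers: int, hidden_dim: int, context_len: int,
                   bytes_per_elem: float = 2.0) -> float:
    """Approximate KV-cache size per prompt: two tensors (K and V) per
    layer, each context_len x hidden_dim elements."""
    return 2 * num_layers * hidden_dim * context_len * bytes_per_elem

if __name__ == "__main__":
    # ~1 FLOP/byte for fp16 GEMV -- orders of magnitude below the
    # compute-to-bandwidth ratio of modern GPUs, hence memory-bound.
    print(f"decode GEMV intensity: {operational_intensity_gemv():.1f} FLOPs/byte")

    # A 70B-class shape (80 layers, 8192 hidden dim, hypothetical) at a
    # 1M-token context needs terabytes of KV cache for a single prompt.
    gib = kv_cache_bytes(80, 8192, 1_000_000) / 2**30
    print(f"KV cache per prompt: {gib:.0f} GiB")
```

At these numbers a single million-token prompt already exceeds any GPU's HBM capacity, which is the capacity argument behind CXL memory expansion.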
Problem

Research questions and friction points this paper is trying to address.

Addresses inefficiency in large language model inference.
Reduces dependency on expensive GPU resources.
Enhances memory bandwidth for model deployment.
Innovation

Methods, ideas, or system contributions that make the work stand out.

CXL-enabled memory expansion
Near-bank processing units
Scalable CXL network communication
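To make the sharding idea concrete, here is a minimal sketch of pipelined model sharding: transformer layers are partitioned across devices so that prompts flow stage by stage and all devices stay busy once the pipeline fills. This is a generic illustration of the technique, not CENT's actual partitioning code; the function name and layer counts are assumptions:

```python
# Minimal sketch of pipeline-parallel layer sharding across memory
# devices (illustrative; not the paper's implementation).

def shard_layers(num_layers: int, num_devices: int) -> list[range]:
    """Assign each device a contiguous range of layers, spreading any
    remainder over the first few devices so shard sizes differ by at
    most one layer."""
    base, extra = divmod(num_layers, num_devices)
    shards, start = [], 0
    for d in range(num_devices):
        size = base + (1 if d < extra else 0)
        shards.append(range(start, start + size))
        start += size
    return shards

if __name__ == "__main__":
    # 32 layers over 5 devices -> shard sizes 7, 7, 6, 6, 6
    for d, shard in enumerate(shard_layers(32, 5)):
        print(f"device {d}: layers {shard.start}..{shard.stop - 1}")
```

Balanced shards matter for pipelining: throughput is set by the slowest stage, so keeping per-device layer counts within one of each other avoids pipeline bubbles.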
Yufeng Gu
University of Michigan, Ann Arbor, USA
Alireza Khadem
University of Michigan, Ann Arbor, USA
Sumanth Umesh
University of Michigan, Ann Arbor, USA
Ning Liang
University of Michigan, Ann Arbor, USA
Xavier Servot
ETH Zürich, Zürich, Switzerland
Onur Mutlu
ETH Zürich, Zürich, Switzerland
Ravi Iyer
Google
Reetuparna Das
University of Michigan
Computer Architecture