Efficient LLM Inference: Bandwidth, Compute, Synchronization, and Capacity are all you need

📅 2025-07-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study identifies the core performance bottlenecks in Transformer-based large language model (LLM) inference: memory bandwidth saturation, memory capacity constraints, and distributed synchronization overhead. Method: The authors propose a hardware-agnostic, general-purpose performance model that quantitatively characterizes how memory bandwidth, hundreds-of-GB memory capacity requirements, and microsecond-scale interconnect synchronization latency jointly constrain end-to-end throughput. The model covers multi-level memory hierarchies—including HBM3/HBM4, 3D-stacked DRAM, on-die SRAM, and wafer-scale integration—enabling cross-platform architectural evaluation. Contribution/Results: The analysis shows that current state-of-the-art systems can reach roughly 2,000 tokens/sec per user, while surpassing 10,000 tokens/sec per user requires smaller models, shorter context, or other algorithmic advances. The model establishes theoretically grounded performance bounds and provides empirically verifiable optimization pathways for LLM inference hardware design, system deployment, and hardware-software co-design.

📝 Abstract
This paper presents a limit study of transformer-based large language model (LLM) inference, focusing on the fundamental performance bottlenecks imposed by memory bandwidth, memory capacity, and synchronization overhead in distributed inference systems. We develop a hardware-agnostic performance model that abstracts away implementation details, enabling the analysis of a wide range of current and near-future hardware technologies. Our analysis spans from current HBM3 memory technology used in AI accelerators like GPUs and TPUs to systems based on advanced HBM4 and advanced 3D-stacked DRAM technology. It also covers SRAM-based designs and scaling techniques from distributed clusters with varying numbers of chips to wafer-scale integration. Our key findings for auto-regressive decoding are: i) serving LLMs requires hundreds of GB of memory per server for a single model instance; ii) high memory bandwidth is critical for high per-user throughput; iii) exposed synchronization latencies for collective communication must be around 1 µs, else they render the memory bandwidth ineffective; iv) DRAM-based designs have a fundamental advantage in system-level efficiency as measured in throughput per cost or per watt; and v) hardware designs can easily reach 2,000+ tokens/sec per user, but getting to 10,000+ tokens/sec will need smaller models, smaller context, or other forms of algorithmic advances. This study provides valuable insights into the fundamental performance limits of LLM inference, highlighting the potential benefits of future hardware advancements and guiding the optimization of LLM deployment strategies.
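The interplay of findings ii) and iii) can be illustrated with a minimal back-of-the-envelope model (this is a sketch, not the paper's actual model): in auto-regressive decoding, each generated token must stream the model weights and KV cache from memory, so per-user throughput is roughly the inverse of memory-transfer time plus exposed synchronization time per token. All concrete numbers below (70 GB of FP8 weights, 10 GB KV cache read, 26.4 TB/s aggregate bandwidth, 160 collectives per token) are hypothetical, chosen only to show why microsecond-scale synchronization latency starts to erode the benefit of high bandwidth.

```python
def decode_tokens_per_sec(
    mem_bw_GBps: float,          # aggregate memory bandwidth across all chips (GB/s)
    bytes_per_token_GB: float,   # data streamed per token: weights + KV cache read (GB)
    syncs_per_token: int = 0,    # exposed collectives per token (e.g. 2 all-reduces/layer)
    sync_latency_us: float = 0.0 # exposed latency per collective (microseconds)
) -> float:
    """Bandwidth-bound estimate of per-user decode throughput."""
    mem_time_s = bytes_per_token_GB / mem_bw_GBps
    sync_time_s = syncs_per_token * sync_latency_us * 1e-6
    return 1.0 / (mem_time_s + sync_time_s)

# Hypothetical deployment: ~70 GB of weights + ~10 GB KV cache read per token,
# 8 chips at 3.3 TB/s each (~26.4 TB/s aggregate), 80 layers x 2 all-reduces.
ideal = decode_tokens_per_sec(26400, 80)                  # no sync overhead
fast_sync = decode_tokens_per_sec(26400, 80, 160, 1.0)    # ~1 us per collective
slow_sync = decode_tokens_per_sec(26400, 80, 160, 10.0)   # ~10 us per collective
```

With these assumed numbers, 1 µs collectives cost only a few percent of throughput, while 10 µs collectives absorb a third of it; as bandwidth grows (shrinking the memory term), the synchronization term dominates even sooner, which is the intuition behind the paper's ~1 µs requirement.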
Problem

Research questions and friction points this paper is trying to address.

Identify the fundamental performance bottlenecks in LLM inference systems
Build a hardware-agnostic performance model spanning current and near-future memory technologies
Evaluate the system-level efficiency and scalability of DRAM-based designs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-agnostic performance model for LLM inference
Analysis spans HBM3 to HBM4 and 3D-stacked DRAM
Focus on memory bandwidth and synchronization overhead