Efficient Heterogeneous Large Language Model Decoding with Model-Attention Disaggregation

📅 2024-05-03
📈 Citations: 8
Influential: 2
🤖 AI Summary
Large language model (LLM) decoding makes inefficient use of expensive, computation-optimized accelerators, chiefly because the attention operator is memory-intensive and poorly matched to modern accelerator architectures, severely limiting throughput for long-context inference. Method: This paper proposes a model-attention disaggregation architecture, the first to physically separate the attention modules of Transformer models from the other computational components across devices. Memory-intensive attention computation is offloaded to cost-effective, memory-optimized hardware, while compute-heavy layers remain on high-end accelerators. The approach is realized in the Lamina system through a holistic heterogeneous system design, customized attention offloading, low-overhead cross-device communication, and RDMA-accelerated data distribution. Contribution/Results: Experiments demonstrate a 16.1%–90.1% throughput improvement over homogeneous deployments at comparable hardware cost, breaking the throughput bottleneck of monolithic execution and validating the feasibility and practicality of attention disaggregation in real-world GPU clusters.
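To make the split concrete, below is a minimal single-layer decode-step sketch in NumPy. It illustrates the idea only and is not Lamina's implementation: the names (AttentionWorker, decode_step, the weight keys) are hypothetical, the cross-device hop is a plain method call standing in for the RDMA transfer, and the feed-forward block is a simplified stand-in.

```python
# Hypothetical sketch of one decode step under model-attention disaggregation.
# Names and shapes are illustrative; this is not Lamina's API.
import numpy as np

HIDDEN = 4096                 # hidden size of a 7B-class model (assumed)
NUM_HEADS = 32
HEAD_DIM = HIDDEN // NUM_HEADS

class AttentionWorker:
    """Memory-optimized device: owns the KV cache and runs attention only."""
    def __init__(self):
        self.k_cache, self.v_cache = [], []   # grows by one entry per step

    def attend(self, q, k, v):
        # Append the new token's K/V, then attend over the whole cache.
        self.k_cache.append(k)
        self.v_cache.append(v)
        K = np.stack(self.k_cache)            # [seq, heads, head_dim]
        V = np.stack(self.v_cache)
        scores = np.einsum("hd,shd->hs", q, K) / np.sqrt(HEAD_DIM)
        probs = np.exp(scores - scores.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        return np.einsum("hs,shd->hd", probs, V)

def decode_step(x, weights, worker):
    """Compute-heavy projections and the MLP stay on the accelerator;
    only the attention call crosses to the memory-optimized worker."""
    q = (x @ weights["wq"]).reshape(NUM_HEADS, HEAD_DIM)
    k = (x @ weights["wk"]).reshape(NUM_HEADS, HEAD_DIM)
    v = (x @ weights["wv"]).reshape(NUM_HEADS, HEAD_DIM)
    attn = worker.attend(q, k, v)             # stands in for the RDMA hop
    h = attn.reshape(HIDDEN) @ weights["wo"]  # back on the accelerator
    return np.tanh(h @ weights["w1"]) @ weights["w2"]   # stand-in MLP
```

In a real deployment, one such worker pool would serve batched attention requests from many decoding streams, so the accelerator stays busy on projections and MLPs while the memory-optimized devices amortize the KV-cache reads.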

📝 Abstract
Transformer-based large language models (LLMs) exhibit impressive performance in generative tasks but also introduce significant challenges in real-world serving due to inefficient use of expensive, computation-optimized accelerators. Although disaggregated serving architectures have been proposed to split the different phases of LLM inference, the efficiency of the decoding phase remains low. This is caused by the varying resource demands of the different operators in transformer-based LLMs. Specifically, the attention operator is memory-intensive, exhibiting a memory access pattern that clashes with the strengths of modern accelerators, especially for long-context requests. To enhance the efficiency of LLM decoding, we introduce model-attention disaggregation. This approach leverages a collection of cheap, memory-optimized devices for the attention operator while still utilizing high-end accelerators for the other parts of the model. This heterogeneous setup ensures that each component is tailored to its specific workload, maximizing overall performance and cost efficiency. Our comprehensive analysis and experiments confirm the viability of splitting the attention computation over multiple devices. Moreover, the communication bandwidth required between heterogeneous devices proves to be manageable with prevalent networking technologies. To further validate our theory, we develop and deploy Lamina, an LLM inference system that incorporates model-attention disaggregation in a distributed heterogeneous cluster. Experimental results indicate that Lamina provides 16.1%–90.1% higher estimated throughput than existing solutions with similar costs.
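As a rough sanity check of the claim that cross-device bandwidth is manageable, the back-of-envelope below estimates per-token traffic if each layer ships the Q, K, and V activations out and receives the attention output back. All numbers (a 13B-class model with hidden size 5120, 40 layers, fp16 activations, 1000 tokens/s aggregate decode rate) are illustrative assumptions, not measurements from the paper.

```python
# Back-of-envelope check with assumed, illustrative numbers.
hidden, layers, dtype_bytes = 5120, 40, 2     # 13B-class model, fp16

per_layer = (3 + 1) * hidden * dtype_bytes    # Q,K,V out + attention output back
per_token = per_layer * layers                # bytes crossing the link per token
tokens_per_sec = 1000                         # assumed aggregate decode rate

gbps = per_token * tokens_per_sec * 8 / 1e9
print(f"{per_token / 2**20:.2f} MiB/token, {gbps:.1f} Gb/s")  # ~1.56 MiB, ~13.1 Gb/s
```

Even this pessimistic per-activation accounting lands around 13 Gb/s, comfortably below the 100–200 Gb/s offered by the RDMA NICs common in GPU clusters.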
Problem

Research questions and friction points this paper is trying to address.

Improve efficiency of LLM decoding with heterogeneous accelerators
Address memory-intensive attention operator in transformer-based LLMs
Optimize resource use in distributed LLM inference systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-attention disaggregation for efficient decoding
Heterogeneous setup with memory-optimized devices
Distributed system Lamina validates performance gains
👥 Authors
Shaoyuan Chen (Tsinghua University)
Wencong Xiao (ByteDance)
Yutong Lin (Tsinghua University)
Mingxing Zhang (Tsinghua University)
Yingdi Shan
Jinlei Jiang (Department of Computer Science and Technology, Tsinghua University)
Kang Chen
Yongwei Wu (Tsinghua University)