🤖 AI Summary
Large language model (LLM) decoding runs inefficiently on today's compute-optimized accelerators: the attention operator has a high memory footprint and a memory access pattern poorly matched to these architectures, which severely limits throughput for long-context inference.
Method: This paper proposes model-attention disaggregation, an architecture the authors present as the first to physically separate attention modules from the other computational components of a Transformer across devices. Memory-intensive attention computation is offloaded to cost-effective, memory-optimized hardware, while compute-heavy layers remain on high-end accelerators. The authors realize this design in Lamina, a holistic heterogeneous serving system built on customized attention offloading, low-overhead cross-device communication, and RDMA-accelerated data distribution.
Contribution/Results: Experiments demonstrate 16.1%–90.1% higher estimated throughput than homogeneous deployments at comparable hardware cost, breaking the throughput bottleneck of monolithic execution and validating the feasibility and practicality of attention disaggregation in real-world GPU clusters.
📝 Abstract
Transformer-based large language models (LLMs) exhibit impressive performance in generative tasks but also introduce significant challenges in real-world serving due to inefficient use of expensive, computation-optimized accelerators. Although disaggregated serving architectures have been proposed to split the different phases of LLM inference, the efficiency of the decoding phase remains low. This is caused by the varying resource demands of different operators in Transformer-based LLMs. Specifically, the attention operator is memory-intensive, exhibiting a memory access pattern that clashes with the strengths of modern accelerators, especially for long-context requests. To enhance the efficiency of LLM decoding, we introduce model-attention disaggregation. This approach leverages a collection of cheap, memory-optimized devices for the attention operator while still utilizing high-end accelerators for other parts of the model. This heterogeneous setup ensures that each component is tailored to its specific workload, maximizing overall performance and cost efficiency. Our comprehensive analysis and experiments confirm the viability of splitting the attention computation over multiple devices. Moreover, the communication bandwidth required between heterogeneous devices proves to be manageable with prevalent networking technologies. To further validate our theory, we develop and deploy Lamina, an LLM inference system that incorporates model-attention disaggregation in a distributed heterogeneous cluster. Experimental results indicate that Lamina can provide 16.1%–90.1% higher estimated throughput than existing solutions with similar costs.
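To make the split concrete, here is a minimal, illustrative NumPy sketch of one decode step under model-attention disaggregation. This is not the Lamina API: the function names (`accelerator_qkv`, `attention_device`, `decode_step`), the toy hidden size, and the single-head attention are all assumptions for illustration. The compute-bound projections stand in for work kept on the high-end accelerator, while the memory-bound attention over the KV cache stands in for the part offloaded to a memory-optimized device (in Lamina, that boundary crosses the network, e.g. via RDMA; here it is an ordinary function call).

```python
import numpy as np

D = 8  # toy hidden size

def accelerator_qkv(x, Wq, Wk, Wv):
    """Compute-bound projections: kept on the high-end accelerator."""
    return x @ Wq, x @ Wk, x @ Wv

def attention_device(q, k_cache, v_cache):
    """Memory-bound attention over the KV cache: offloaded to a
    memory-optimized device. In a real system this crosses devices."""
    scores = q @ k_cache.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v_cache

def decode_step(x, k_cache, v_cache, Wq, Wk, Wv, Wo):
    q, k, v = accelerator_qkv(x, Wq, Wk, Wv)        # accelerator side
    k_cache = np.vstack([k_cache, k])               # KV cache lives with
    v_cache = np.vstack([v_cache, v])               # the attention device
    attn = attention_device(q, k_cache, v_cache)    # offloaded attention
    return attn @ Wo, k_cache, v_cache              # back on accelerator

rng = np.random.default_rng(0)
Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) * 0.1 for _ in range(4))
k_cache = rng.standard_normal((4, D))  # cache from 4 earlier tokens
v_cache = rng.standard_normal((4, D))
x = rng.standard_normal((1, D))        # current token's hidden state

y, k_cache, v_cache = decode_step(x, k_cache, v_cache, Wq, Wk, Wv, Wo)
print(y.shape, k_cache.shape)  # (1, 8) (5, 8)
```

The design point the sketch highlights: per decoded token, only the small activations `q`, `k`, `v` (and the attention output) cross the device boundary, while the large KV cache stays resident on the memory-optimized side, which is why the required cross-device bandwidth stays manageable.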