The New LLM Bottleneck: A Systems Perspective on Latent Attention and Mixture-of-Experts

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional Transformer accelerators face a dichotomy: multi-head attention (MHA) is memory-bound while feed-forward layers are compute-bound, a split that has long motivated specialized attention hardware. This work observes that emerging architectures, namely multi-head latent attention (MLA) and mixture-of-experts (MoE), dissolve this single-bottleneck paradigm and instead call for holistic system balance. We show that MLA increases attention arithmetic intensity by over two orders of magnitude, moving it toward the compute-bound regime; concurrently, distributing MoE experts across accelerators and batching their inputs aligns expert arithmetic intensity with that of dense feed-forward layers. Experiments demonstrate that modern GPUs, high-bandwidth interconnects, and distributed scheduling suffice to serve such architectures, obviating dedicated attention accelerators. Our core contribution is the identification and empirical validation of a "balanced systems over specialized acceleration" paradigm, providing both theoretical grounding and practical guidelines for hardware-algorithm co-design in large language models.
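To make the arithmetic-intensity claim concrete, here is a minimal back-of-envelope sketch (not taken from the paper) comparing decode-time attention for MHA and for MLA with a shared compressed KV latent. The dimensions (128 heads, 128-dim heads, a 512-dim latent, fp16 cache) are assumed, DeepSeek-style values; projections, RoPE dimensions, and softmax cost are ignored.

```python
# Back-of-envelope arithmetic intensity of decode-time attention (fp16 KV cache).
# All dimensions are assumed, DeepSeek-style values, not taken from this paper.
BYTES = 2            # bytes per fp16 element
n_heads  = 128       # attention heads
d_head   = 128       # per-head dimension (MHA)
d_latent = 512       # compressed KV latent dimension (MLA)
seq_len  = 4096      # tokens already held in the cache

def mha_intensity(L):
    """Scores (q @ K^T) and values (p @ V) for one decoded token, all heads."""
    flops = 4 * n_heads * d_head * L                  # 2*d_head*L scores + 2*L*d_head values, per head
    bytes_moved = 2 * n_heads * d_head * L * BYTES    # each head reads its own K and V cache entries
    return flops / bytes_moved

def mla_intensity(L):
    """Absorbed-weight MLA: every head attends over one shared d_latent-dim cache."""
    flops = 4 * n_heads * d_latent * L                # same matmul shape, but in the latent space
    bytes_moved = d_latent * L * BYTES                # latent cache is read once and reused by all heads
    return flops / bytes_moved

print(f"MHA decode: {mha_intensity(seq_len):6.1f} FLOP/byte")
print(f"MLA decode: {mla_intensity(seq_len):6.1f} FLOP/byte")
print(f"ratio     : {mla_intensity(seq_len) / mha_intensity(seq_len):6.0f}x")
```

Under these assumptions MHA stays near 1 FLOP/byte no matter how requests are batched, because every sequence drags its own KV cache through memory, whereas MLA lands in the hundreds, comparable to the ridge point of a current GPU. That is the two-orders-of-magnitude shift the summary refers to.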

📝 Abstract
Computational workloads composing traditional Transformer models are starkly bifurcated. Multi-Head Attention (MHA) is memory-bound, with low arithmetic intensity, while feedforward layers are compute-bound. This dichotomy has long motivated research into specialized hardware to mitigate the MHA bottleneck. This paper argues that recent architectural shifts, namely Multi-head Latent Attention (MLA) and Mixture-of-Experts (MoE), challenge the premise of specialized attention hardware. We make two key observations. First, the arithmetic intensity of MLA is over two orders of magnitude greater than that of MHA, shifting it close to a compute-bound regime well-suited for modern accelerators like GPUs. Second, by distributing MoE experts across a pool of accelerators, their arithmetic intensity can be tuned through batching to match that of the dense layers, creating a more balanced computational profile. These findings reveal a diminishing need for specialized attention hardware. The central challenge for next-generation Transformers is no longer accelerating a single memory-bound layer. Instead, the focus must shift to designing balanced systems with sufficient compute, memory capacity, memory bandwidth, and high-bandwidth interconnects to manage the diverse demands of large-scale models.
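The MoE half of the argument lends itself to the same style of estimate. The sketch below uses illustrative shapes, not the paper's configuration, and counts only weight traffic from HBM (activations assumed to stay on-chip): under that simplification a weight-dominated FFN GEMM delivers roughly one FLOP per weight byte per token in its batch, so an expert that receives only batch * top_k / num_experts tokens sits far below a dense layer until the batch is aggregated across an expert-parallel pool.

```python
# Arithmetic intensity of an FFN matmul, counting only weight traffic from HBM.
# All sizes are illustrative assumptions, not the paper's configuration.
BYTES = 2                         # fp16 weights
d_model, d_ff = 4096, 14336       # FFN GEMM shape (assumed)
num_experts, top_k = 64, 2        # routed experts, experts activated per token

def ffn_intensity(tokens):
    flops = 2 * tokens * d_model * d_ff        # one GEMM over `tokens` rows
    weight_bytes = d_model * d_ff * BYTES      # weights streamed once per GEMM
    return flops / weight_bytes                # ≈ tokens, for fp16 weights

local_batch = 1024                               # tokens a dense layer sees on one device
per_expert = local_batch * top_k / num_experts   # tokens one expert receives from that batch

print(f"dense layer             : {ffn_intensity(local_batch):6.0f} FLOP/byte")
print(f"routed expert           : {ffn_intensity(per_expert):6.0f} FLOP/byte")

# Aggregating the batch across an expert-parallel pool by num_experts / top_k
# restores each expert's per-GEMM token count, and hence its intensity.
pooled_batch = local_batch * num_experts // top_k
per_expert_pooled = pooled_batch * top_k / num_experts
print(f"expert with pooled batch: {ffn_intensity(per_expert_pooled):6.0f} FLOP/byte")
```

This is the sense in which expert intensity is "tuned through batching": distributing the experts over a pool of accelerators makes the larger aggregated batch affordable, and each expert then processes as many tokens per weight read as a dense layer does.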
Problem

Research questions and friction points this paper is trying to address.

Addressing the memory-bound and compute-bound dichotomy in Transformer models
Challenging the need for specialized attention hardware with MLA and MoE
Designing balanced systems for diverse demands of large-scale models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-head Latent Attention increases arithmetic intensity
Mixture-of-Experts balances computational profile via batching
Modern accelerators replace specialized attention hardware (see the roofline sketch after this list)
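As a rough illustration of why these shifts favor balanced, off-the-shelf systems, the sketch below classifies each layer type against a GPU's roofline ridge point (peak FLOPS divided by memory bandwidth). The hardware numbers are approximate public figures for an H100-class device, and the layer intensities reuse the rough estimates from the earlier sketches; none of them come from the paper's measurements.

```python
# Roofline ridge point: a kernel with intensity below this is memory-bound,
# above it compute-bound. Hardware figures are approximate (H100-class, fp16).
peak_flops = 989e12          # ~989 TFLOPS dense fp16 tensor throughput
mem_bw     = 3.35e12         # ~3.35 TB/s HBM bandwidth
ridge = peak_flops / mem_bw  # ≈ 295 FLOP/byte

workloads = {
    "MHA decode (per-sequence KV cache)": 1,     # from the attention sketch above
    "MLA decode (shared latent cache)":   256,   # ~2 x n_heads
    "MoE expert, 32 tokens":              32,    # from the FFN sketch above
    "MoE expert, pooled batch":           1024,
}

for name, intensity in workloads.items():
    bound = "compute-bound" if intensity >= ridge else "memory-bound"
    print(f"{name:36s} {intensity:6.0f} FLOP/byte -> {bound} (ridge ≈ {ridge:.0f})")
```

Only classic MHA sits orders of magnitude below the ridge; MLA lands near it (the "close to compute-bound" regime in the abstract), and pooled MoE experts sit well above it, so the remaining gaps are addressed by batching, memory capacity, and interconnect bandwidth rather than by a bespoke attention accelerator.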
Authors
Sungmin Yun, Seoul National University (Computer Architecture, Computer Systems, Deep Learning)
Seonyong Park, Seoul National University, Seoul, South Korea
Hwayong Nam, Seoul National University (Computer Architecture, DRAM, Memory Systems)
Younjoo Lee, Seoul National University, Seoul, South Korea
Gunjun Lee, Seoul National University, Seoul, South Korea
Kwanhee Kyung, Seoul National University (Computer Architecture)
Sangpyo Kim, Seoul National University, Seoul, South Korea
Nam Sung Kim, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA
Jongmin Kim, Seoul National University, Seoul, South Korea
Hyungyo Kim, University of Illinois at Urbana-Champaign (Computer Architecture, Systems, Memory)
Juhwan Cho, Seoul National University, Seoul, South Korea
Seungmin Baek, Seoul National University, Seoul, South Korea
Jung Ho Ahn, Seoul National University (Computer Architecture)