Cost-Optimal Grouped-Query Attention for Long-Context LLMs

πŸ“… 2025-03-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the prohibitively high training and inference costs of large language models (LLMs) for long-context tasks. We propose the first scaling law that jointly incorporates context length and the number of query/key-value heads in grouped-query attention (GQA), enabling holistic optimization of both training and inference efficiency. Through systematic empirical analysis, computational and memory cost modeling, and evaluation on long-context benchmarks, we discover that β€œlarger models with fewer attention heads” consistently reduce both loss and FLOPs/KV-cache overhead for long sequences. Experiments demonstrate up to a 37% reduction in inference and training compute cost at equivalent performance. All code and data are publicly released, providing both theoretical foundations and practical design principles for efficient long-context LLMs.
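The KV-cache claim can be made concrete with a back-of-the-envelope memory model (an illustrative sketch under standard Transformer assumptions, not the paper's actual cost functions; `kv_cache_bytes` and its parameter names are hypothetical):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    """Per-sequence KV-cache size: one key and one value vector per layer,
    KV head, and token position (fp16 by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# The cache scales linearly in both context length and KV-head count, so
# reducing KV heads (as in GQA) shrinks it by the same factor at any length.
full = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, context_len=128_000)
gqa  = kv_cache_bytes(n_layers=32, n_kv_heads=8,  head_dim=128, context_len=128_000)
print(full / 2**30, gqa / 2**30)  # GiB: 62.5 vs. 15.625
```

This is why, at long context lengths, the KV cache rather than the parameters can dominate memory cost, and why trading query/KV heads for parameters can pay off.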

πŸ“ Abstract
Building effective and efficient Transformer-based large language models (LLMs) has recently become a research focus, requiring maximizing model language capabilities and minimizing training and deployment costs. Existing efforts have primarily described complex relationships among model performance, parameter size, and data size, as well as searched for the optimal compute allocation to train LLMs. However, they overlook the impacts of context length and attention head configuration (the number of query and key-value heads in grouped-query attention) on training and inference. In this paper, we systematically compare models with different parameter sizes, context lengths, and attention head configurations in terms of model performance, computational cost, and memory cost. Then, we extend the existing scaling methods, which are based solely on parameter size and training compute, to guide the construction of cost-optimal LLMs during both training and inference. Our quantitative scaling studies show that, when processing sufficiently long sequences, a larger model with fewer attention heads can achieve a lower loss while incurring lower computational and memory costs. Our findings provide valuable insights for developing practical LLMs, especially in long-context processing scenarios. We will publicly release our code and data.
Problem

Research questions and friction points this paper is trying to address.

Optimize Transformer-based LLMs for long-context processing efficiency.
Analyze impact of context length and attention head configuration.
Develop cost-effective scaling methods for LLM training and inference.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grouped-query attention optimizes long-context LLMs.
Larger models with fewer heads reduce computational costs.
Extended scaling methods guide cost-optimal LLM construction.
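The head-sharing idea behind grouped-query attention can be sketched in a few lines of NumPy (a minimal illustration of the mechanism, not the paper's implementation; shapes and names are assumed):

```python
import numpy as np

def gqa_attention(q, k, v):
    """Grouped-query attention: each group of query heads shares one KV head.

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d), n_kv_heads | n_q_heads.
    """
    n_q_heads, _, d = q.shape
    n_kv_heads = k.shape[0]
    group_size = n_q_heads // n_kv_heads
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group_size  # map query head to its shared KV head
        scores = q[h] @ k[kv].T / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[h] = w @ v[kv]
    return out
```

With `n_kv_heads == n_q_heads` this reduces to standard multi-head attention; shrinking `n_kv_heads` cuts KV-cache size proportionally while keeping query expressivity, which is the knob the paper's scaling law optimizes jointly with context length.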
πŸ”Ž Similar Papers
No similar papers found.
Yingfa Chen
PhD at Tsinghua University
machine learning · long-context modeling · language modeling
Yutong Wu
SIST, University of Science and Technology Beijing, Beijing, China
Xu Han
NLP Group, DCST, IAI, BNRIST, Tsinghua University, Beijing, China
Zhiyuan Liu
NLP Group, DCST, IAI, BNRIST, Tsinghua University, Beijing, China
Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing · Artificial Intelligence · Social Computing