🤖 AI Summary
This work addresses the prohibitively high training and inference costs of large language models (LLMs) for long-context tasks. We propose the first scaling law that jointly incorporates context length and the number of query/key-value heads in grouped-query attention (GQA), enabling holistic optimization of both training and inference efficiency. Through systematic empirical analysis, computational and memory cost modeling, and evaluation on long-context benchmarks, we discover that "larger models with fewer attention heads" consistently reduce both loss and FLOPs/KV-cache overhead for long sequences. Experiments demonstrate up to a 37% reduction in inference and training compute cost at equivalent performance. All code and data are publicly released, providing both theoretical foundations and practical design principles for efficient long-context LLMs.
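To make the summary's "scaling law" framing concrete, the sketch below fits a Chinchilla-style parametric loss law with SciPy. The paper extends laws of this family with context length and GQA head counts; the functional form used here, and all of the synthetic data, are assumptions for illustration only, not the paper's fitted law.

```python
# Hedged sketch: fitting a Chinchilla-style scaling law L(N, D).
# The paper's law additionally conditions on context length and GQA head
# counts; its exact form is not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def loss_law(X, E, A, alpha, B, beta):
    """L(N, D) = E + A / N^alpha + B / D^beta (Chinchilla-style form)."""
    N, D = X  # N: parameter count, D: training tokens
    return E + A / N**alpha + B / D**beta

# Synthetic observations generated from an assumed ground-truth law,
# plus small noise, so the fit is well-posed.
rng = np.random.default_rng(0)
N = np.geomspace(1e8, 1e10, 8)
D = np.geomspace(2e9, 2e11, 8)
L = loss_law((N, D), 1.7, 400.0, 0.34, 410.0, 0.28) + rng.normal(0, 0.01, 8)

popt, _ = curve_fit(loss_law, (N, D), L,
                    p0=[1.5, 300.0, 0.3, 300.0, 0.3], maxfev=20000)
E, A, alpha, B, beta = popt
print(f"fitted: E={E:.2f}, A={A:.0f}, alpha={alpha:.2f}, B={B:.0f}, beta={beta:.2f}")
```

Once such a law is fitted, it can be minimized subject to a compute or memory budget to pick a cost-optimal configuration, which is the kind of holistic optimization the summary describes.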
📄 Abstract
Building effective and efficient Transformer-based large language models (LLMs) has recently become a research focus: one must maximize the model's language capabilities while minimizing training and deployment costs. Existing efforts have primarily characterized the complex relationships among model performance, parameter size, and data size, and have searched for the compute allocation that is optimal for training LLMs. However, they overlook the impact of context length and attention head configuration (the number of query and key-value heads in grouped-query attention) on training and inference. In this paper, we systematically compare models with different parameter sizes, context lengths, and attention head configurations in terms of model performance, computational cost, and memory cost. We then extend existing scaling methods, which are based solely on parameter size and training compute, to guide the construction of cost-optimal LLMs across both training and inference. Our quantitative scaling studies show that, when processing sufficiently long sequences, a larger model with fewer attention heads can achieve lower loss while incurring lower computational and memory costs. Our findings provide valuable insights for developing practical LLMs, especially for long-context processing scenarios. We will publicly release our code and data.
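The abstract's cost argument rests on how KV-cache memory and attention FLOPs grow with context length and head counts. The sketch below uses standard back-of-the-envelope decode-time formulas, not the paper's exact cost model; every configuration value (layer counts, widths, head counts) is an assumed example chosen to illustrate the "larger model, fewer heads" regime.

```python
# Hedged sketch: per-token attention cost for a GQA Transformer at decode time.
# Standard accounting assumptions: fp16 KV cache, 2 FLOPs per multiply-add.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    # K and V caches per layer, each of shape [context_len, n_kv_heads * head_dim].
    return 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_elem

def attn_flops_per_token(n_layers, d_model, n_q_heads, n_kv_heads, head_dim, context_len):
    # Projections: Q and output are n_q_heads*head_dim wide; K and V are
    # n_kv_heads*head_dim wide (the GQA saving).
    proj = 2 * d_model * (2 * n_q_heads + 2 * n_kv_heads) * head_dim
    # Score computation (QK^T) and value aggregation (AV) over the context.
    attn = 4 * n_q_heads * head_dim * context_len
    return n_layers * (proj + attn)

# Two hypothetical configs at a 128K-token context: a smaller model with many
# heads vs. a larger model with fewer heads.
many_heads = dict(n_layers=32, d_model=4096, n_q_heads=32, n_kv_heads=8, head_dim=128)
few_heads  = dict(n_layers=48, d_model=6144, n_q_heads=16, n_kv_heads=2, head_dim=128)
for name, cfg in [("many-heads", many_heads), ("fewer-heads", few_heads)]:
    kv = kv_cache_bytes(cfg["n_layers"], cfg["n_kv_heads"], cfg["head_dim"], 131072)
    fl = attn_flops_per_token(context_len=131072, **cfg)
    print(f"{name}: KV cache = {kv / 2**30:.1f} GiB, attn FLOPs/token = {fl / 1e9:.1f} G")
```

Under these assumed numbers, the larger fewer-heads model holds roughly 6 GiB of KV cache versus 16 GiB for the many-heads model, and also spends fewer attention FLOPs per decoded token, which is the qualitative trade-off the abstract reports for sufficiently long sequences.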