🤖 AI Summary
To address inefficiencies in Q/K/V computation, slow long-context inference, and weak domain-specific modeling of large language models (LLMs) in systems research, this paper proposes DiffQKV, a novel attention mechanism that differentially scales the Query, Key, and Value components: expanding the Query head dimension to enhance representational capacity while compressing Key and Value to improve inference efficiency. The paper also introduces AIMicius, the first comprehensive benchmark for systems-domain LLM evaluation, and conducts 6-trillion-token domain-adapted pretraining, including 19.5 billion tokens of carefully collected system-domain data and 1 trillion tokens of synthesized and rewritten data. Experiments show that DiffQKV achieves up to a 33.36% speedup in long-context inference over grouped-query attention (GQA), and that the resulting model, Sigma, delivers an absolute improvement of up to 52.5% on systems tasks, surpassing GPT-4 across all evaluated metrics while remaining competitive with state-of-the-art models on general-purpose benchmarks. The core contributions are the differential QKV attention design and a scalable, data-rich paradigm for domain adaptation of LLMs.
📝 Abstract
We introduce Sigma, an efficient large language model specialized for the system domain, empowered by a novel architecture including DiffQKV attention and pre-trained on our meticulously collected system domain data. DiffQKV attention significantly enhances the inference efficiency of Sigma by differentially optimizing the Query (Q), Key (K), and Value (V) components of the attention mechanism, based on their varying impacts on model performance and efficiency. Specifically, we (1) conduct extensive experiments demonstrating the model's varying sensitivity to compression of the K and V components, leading to the development of differentially compressed KV, and (2) propose augmented Q, which expands the Q head dimension to enhance the model's representation capacity with minimal impact on inference speed. Rigorous theoretical and empirical analyses reveal that DiffQKV attention significantly improves efficiency, achieving up to a 33.36% gain in inference speed over conventional grouped-query attention (GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various sources, including 19.5B tokens of carefully collected system domain data and 1T tokens of synthesized and rewritten data. In general domains, Sigma achieves performance comparable to other state-of-the-art models. In the system domain, we introduce AIMicius, the first comprehensive benchmark, on which Sigma demonstrates remarkable performance across all tasks, significantly outperforming GPT-4 with an absolute improvement of up to 52.5%.
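To make the differentially compressed KV idea concrete, here is a minimal NumPy sketch of attention in which the K and V caches are grouped independently, so K can be compressed to fewer heads than V (the asymmetry the abstract describes). This is an illustrative assumption-laden toy, not Sigma's actual implementation: the augmented-Q dimension, KV-cache handling, and all other details of the real DiffQKV design are omitted, and the head counts below are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def diffqkv_attention(q, k, v):
    """Toy GQA-style attention with *independent* head counts for K and V.

    q: (n_q_heads, seq, d_head)
    k: (n_k_heads, seq, d_head) -- n_k_heads may be smaller than n_v_heads,
       reflecting the model's lower sensitivity to K compression
    v: (n_v_heads, seq, d_head)
    Each query head is mapped to a shared K head and a shared V head by
    contiguous grouping, as in grouped-query attention.
    """
    n_q, seq, d = q.shape
    n_k, n_v = k.shape[0], v.shape[0]
    assert n_q % n_k == 0 and n_q % n_v == 0, "head counts must divide n_q"
    out = np.empty_like(q)
    for h in range(n_q):
        kh = k[h // (n_q // n_k)]            # shared K head for this group
        vh = v[h // (n_q // n_v)]            # shared V head (separate grouping)
        scores = q[h] @ kh.T / np.sqrt(d)    # scaled dot-product scores
        out[h] = softmax(scores) @ vh        # attention-weighted values
    return out

# Hypothetical head counts: 8 Q heads, K compressed to 2 heads, V to 4.
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 6, 16))
k = rng.standard_normal((2, 6, 16))
v = rng.standard_normal((4, 6, 16))
out = diffqkv_attention(q, k, v)  # shape (8, 6, 16)
```

The efficiency argument is that the KV cache scales with the number of K and V heads kept per layer; compressing K more aggressively than V shrinks the cache (and memory traffic) beyond what symmetric GQA allows, which is where the long-context speedup comes from.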