Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models

📅 2025-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address inefficient Q/K/V computation, slow long-context inference, and weak domain-specific capability of large language models (LLMs) in the systems domain, this paper proposes DiffQKV, a novel attention mechanism that differentially scales the Query, Key, and Value components: expanding the Query head dimension to enhance representational capacity while compressing Key and Value to improve inference efficiency. The authors introduce AIMicius, the first comprehensive benchmark for systems-domain LLM evaluation, and pre-train on 6 trillion tokens, including 19.5 billion tokens of carefully collected system-domain data and 1 trillion tokens of synthesized and rewritten data. Experiments show that DiffQKV achieves up to a 33.36% speedup in long-context inference over the grouped-query attention (GQA) baseline and delivers an absolute improvement of up to 52.5% on systems tasks, significantly outperforming GPT-4 across all AIMicius tasks while retaining competitive general-purpose capability. The core contributions are the differential QKV attention design and a scalable, data-rich paradigm for domain adaptation of LLMs.

📝 Abstract
We introduce Sigma, an efficient large language model specialized for the system domain, empowered by a novel architecture including DiffQKV attention, and pre-trained on our meticulously collected system domain data. DiffQKV attention significantly enhances the inference efficiency of Sigma by optimizing the Query (Q), Key (K), and Value (V) components in the attention mechanism differentially, based on their varying impacts on model performance and efficiency indicators. Specifically, we (1) conduct extensive experiments that demonstrate the model's varying sensitivity to the compression of the K and V components, leading to the development of differentially compressed KV, and (2) propose augmented Q to expand the Q head dimension, which enhances the model's representation capacity with minimal impact on inference speed. Rigorous theoretical and empirical analyses reveal that DiffQKV attention significantly enhances efficiency, achieving up to a 33.36% improvement in inference speed over conventional grouped-query attention (GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various sources, including 19.5B tokens of system domain data that we carefully collect and 1T tokens of synthesized and rewritten data. In general domains, Sigma achieves performance comparable to other state-of-the-art models. In the system domain, we introduce the first comprehensive benchmark, AIMicius, where Sigma demonstrates remarkable performance across all tasks, significantly outperforming GPT-4 with an absolute improvement of up to 52.5%.
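To make the mechanism concrete, below is a minimal PyTorch sketch of the DiffQKV idea as the abstract describes it: more (and wider) Q heads than K/V heads, with Q down-projected to the K head dimension before the dot product, and K/V heads shared across Q-head groups as in GQA. This is not the authors' implementation; the class name, all dimensions, and the down-projection detail are illustrative assumptions.

```python
# A minimal sketch of differential Q/K/V scaling, assuming GQA-style
# head sharing and a learned down-projection for the augmented Q heads.
# Dimensions are illustrative, not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffQKVAttention(nn.Module):
    def __init__(self, d_model=1024, n_q_heads=16, n_kv_heads=4,
                 d_q=96, d_k=64, d_v=64):
        super().__init__()
        self.n_q, self.n_kv = n_q_heads, n_kv_heads
        # Augmented Q: head dim d_q exceeds the K head dim d_k, so Q is
        # projected down to d_k before the attention dot product.
        self.w_q = nn.Linear(d_model, n_q_heads * d_q, bias=False)
        self.q_down = nn.Linear(d_q, d_k, bias=False)
        # Differentially compressed KV: far fewer K/V heads than Q heads,
        # which shrinks the KV cache during decoding.
        self.w_k = nn.Linear(d_model, n_kv_heads * d_k, bias=False)
        self.w_v = nn.Linear(d_model, n_kv_heads * d_v, bias=False)
        self.w_o = nn.Linear(n_q_heads * d_v, d_model, bias=False)
        self.d_k, self.d_v = d_k, d_v

    def forward(self, x):
        b, t, _ = x.shape
        q = self.w_q(x).view(b, t, self.n_q, -1)
        q = self.q_down(q).transpose(1, 2)                 # (b, n_q, t, d_k)
        k = self.w_k(x).view(b, t, self.n_kv, self.d_k).transpose(1, 2)
        v = self.w_v(x).view(b, t, self.n_kv, self.d_v).transpose(1, 2)
        # Share each K/V head across a group of Q heads (GQA-style).
        g = self.n_q // self.n_kv
        k = k.repeat_interleave(g, dim=1)
        v = v.repeat_interleave(g, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.w_o(out.transpose(1, 2).reshape(b, t, -1))

# Usage: shapes round-trip through the block.
x = torch.randn(2, 128, 1024)
print(DiffQKVAttention()(x).shape)  # torch.Size([2, 128, 1024])
```

The efficiency argument follows from the shapes: only K and V (here 4 heads each instead of 16) are cached per generated token, so compressing them cuts KV-cache memory and bandwidth in long-context decoding, while the wider Q projection adds compute only at the current position.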
Problem

Research questions and friction points this paper is trying to address.

Large Model Optimization
System Domain Performance
Query Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sigma Model
DiffQKV Attention
System Domain Optimization
👥 Authors

Zhenghao Lin · MSRA · NLP
Zihao Tang · Microsoft SIGMA Team
Xiao Liu · Microsoft SIGMA Team
Yeyun Gong · Microsoft Research Asia · Natural Language Generation, Question Answering, Pre-training
Yi Cheng · Microsoft SIGMA Team
Qi Chen · Microsoft SIGMA Team
Hang Li · Microsoft SIGMA Team
Ying Xin · Microsoft SIGMA Team
Ziyue Yang · PhD of Chemical Engineering, University of Rochester · Biomolecules, Machine Learning
Kailai Yang · The University of Manchester · Natural Language Processing, Large Language Models
Yu Yan · Microsoft SIGMA Team
Xiao Liang · Microsoft SIGMA Team
Shuai Lu · Microsoft SIGMA Team
Yiming Huang · Microsoft SIGMA Team
Zheheng Luo · NaCTeM, University of Manchester · Natural Language Processing
L. Qu · Microsoft SIGMA Team
Xuan Feng · Microsoft SIGMA Team
Yaoxiang Wang · Xiamen University · Large Language Models
Yuqing Xia · Microsoft Research · Systems for Machine Learning, GPU
Feiyang Chen · Microsoft SIGMA Team
Yuting Jiang · Microsoft SIGMA Team
Yasen Hu · Microsoft SIGMA Team
Hao Ni · Microsoft SIGMA Team
Binyang Li · Microsoft SIGMA Team
Guoshuai Zhao · Xi'an Jiaotong University · Recommender System, Natural Language Generation, Data Mining, Machine Learning
Jui-Hao Chiang · Microsoft SIGMA Team
Zhongxin Guo · Microsoft SIGMA Team
Chen Lin · Microsoft SIGMA Team
Kun Kuang · Zhejiang University · Causal Inference, Data Mining, Machine Learning
Wenjie Li · Microsoft SIGMA Team
Yelong Shen · Microsoft · NLP, Machine Learning
Jian Jiao · Microsoft SIGMA Team
Peng Cheng · Microsoft SIGMA Team
Mao Yang · Microsoft SIGMA Team