Qwen2.5-1M Technical Report

📅 2025-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from weak long-context modeling, high inference costs, and degraded short-context performance when extended to longer sequences. Method: Qwen2.5-1M is an efficient LLM series supporting contexts of up to 1 million tokens. The approach introduces a training-free length extrapolation technique that extends the usable context by at least 4x, integrates sparse attention with chunked prefill optimization, and builds an optimized inference engine featuring GPU kernel optimization, pipeline parallelism, and scheduling optimization. Long-context capability is enhanced via synthetic long-sequence data, progressive pre-training, and multi-stage supervised fine-tuning. Results: Qwen2.5-14B-Instruct-1M significantly outperforms GPT-4o-mini on long-context benchmarks while supporting contexts eight times longer. In million-token scenarios, prefill throughput improves by 3-7x, and short-context accuracy is preserved. Code and models are publicly released.
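The chunked prefill idea mentioned above can be pictured with a minimal sketch: the prompt is fed to the model in fixed-size chunks so each attention pass handles a bounded number of query tokens while the key-value cache grows to cover the full context. This is a framework-agnostic illustration assuming a Hugging-Face-style decoder interface (`past_key_values`, `use_cache`); the function name and chunk size are illustrative, not the paper's implementation, which further combines chunked prefill with sparse attention kernels.

```python
import torch

@torch.no_grad()
def chunked_prefill(model, input_ids, chunk_size=32768):
    """Prefill a long prompt chunk by chunk (illustrative sketch).

    Each forward pass attends its chunk of queries against the KV cache
    built so far, so peak activation memory is bounded by chunk_size
    rather than by the full prompt length.
    """
    past_key_values = None
    logits = None
    for start in range(0, input_ids.size(1), chunk_size):
        chunk = input_ids[:, start:start + chunk_size]
        out = model(input_ids=chunk,
                    past_key_values=past_key_values,
                    use_cache=True)
        past_key_values = out.past_key_values  # cache now covers all tokens seen
        logits = out.logits
    # Logits of the last prompt token seed the first decoded token.
    return logits[:, -1, :], past_key_values
```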

📝 Abstract
We introduce Qwen2.5-1M, a series of models that extend the context length to 1 million tokens. Compared to the previous 128K version, the Qwen2.5-1M series has significantly enhanced long-context capabilities through long-context pre-training and post-training. Key techniques such as long data synthesis, progressive pre-training, and multi-stage supervised fine-tuning are employed to effectively enhance long-context performance while reducing training costs. To promote the use of long-context models among a broader user base, we present and open-source our inference framework. This framework includes a length extrapolation method that can expand model context lengths by at least four times without additional training. To reduce inference costs, we implement a sparse attention method along with chunked prefill optimization for deployment scenarios, and a sparsity refinement method to improve precision. Additionally, we detail our optimizations in the inference engine, including kernel optimization, pipeline parallelism, and scheduling optimization, which significantly enhance overall inference performance. By leveraging our inference framework, the Qwen2.5-1M models achieve a 3x to 7x prefill speedup in scenarios with 1 million tokens of context. This framework provides an efficient and powerful solution for developing applications that require long-context processing using open-source models. The Qwen2.5-1M series currently includes the open-source models Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, as well as the API-accessed model Qwen2.5-Turbo. Evaluations show that Qwen2.5-1M models are greatly improved on long-context tasks without compromising performance in short-context scenarios. Specifically, the Qwen2.5-14B-Instruct-1M model significantly outperforms GPT-4o-mini on long-context tasks and supports contexts eight times longer.
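Since the 7B and 14B checkpoints named in the abstract are open-sourced, a basic loading sketch is possible with standard Hugging Face transformers calls. The prompt and generation settings below are illustrative; serving contexts near 1 million tokens in practice relies on the authors' open-sourced inference framework rather than this plain generate path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct-1M"  # open-source checkpoint from the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the following document:\n..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Plain short-context generation; million-token serving uses the paper's
# inference framework (sparse attention, chunked prefill, engine optimizations).
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```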
Problem

Research questions and friction points this paper is trying to address.

Long-Context Processing
Language Model Performance
Cost and Speed Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Long-sequence Processing
Sparse Attention
Multi-stage Fine-tuning
👥 Authors
An Yang · Qwen Team, Peking University · Natural Language Processing (NLP)
Bowen Yu · Qwen Team, Alibaba Group · Post-training, Foundation Model
Chengyuan Li · Qwen Team, Alibaba Group
Dayiheng Liu · Qwen Team, Alibaba Group
Fei Huang · Qwen Team, Alibaba Group
Haoyan Huang · Qwen Team, Alibaba Group
Jiandong Jiang · Qwen Team, Alibaba Group
Jianhong Tu · Qwen Team, Alibaba Group
Jianwei Zhang · Qwen Team, Alibaba Group
Jingren Zhou · Alibaba Group, Microsoft · Cloud Computing, Large Scale Distributed Systems, Machine Learning, Query Processing
Junyang Lin · Qwen Team, Alibaba Group & Peking University · Natural Language Processing, Cross-Modal Representation Learning, Pretraining
Kai Dang · Qwen Team, Alibaba Group
Kexin Yang · Qwen Team, Alibaba Group
Le Yu · Qwen Team, Alibaba Group
Mei Li · Qwen Team, Alibaba Group
Minmin Sun · Qwen Team, Alibaba Group
Qin Zhu · Qwen Team, Alibaba Group
Rui Men · Qwen Team, Alibaba Group & Peking University · NLP
Tao He · Qwen Team, Alibaba Group
Weijia Xu · Qwen Team, Alibaba Group
Wenbiao Yin · Tongyi Lab, Alibaba Group · LLM, Agent, RAG
Wenyuan Yu · Alibaba Group · Graph computation, data management, distributed systems and parallel computation
Xiafei Qiu · Qwen Team, Alibaba Group
Xingzhang Ren · Qwen Team, Alibaba Group
Xinlong Yang · Peking University | Chongqing University · Multi-modal Learning, Large Language Model
Yong Li · Qwen Team, Alibaba Group
Zhiying Xu · Qwen Team, Alibaba Group
Zipeng Zhang · Qwen Team, Alibaba Group