Qwen2.5 Technical Report

📅 2024-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face inherent trade-offs among multitask capability, generalization, and deployment efficiency. Method: The Qwen2.5 series comprises dense open-weight models (0.5B to 72B parameters) and proprietary Mixture-of-Experts (MoE) variants, targeting general understanding, mathematical reasoning, and code generation, and serving as the base for multimodal models. Pre-training covers 18 trillion tokens; post-training applies supervised fine-tuning (SFT) on over one million high-quality samples and multi-stage reinforcement learning from human feedback (RLHF), improving long text generation, structured data analysis, and instruction following. Additional optimizations include MoE sparsity, quantization, and instruction alignment. Results: Qwen2.5-72B-Instruct outperforms most open-source and several closed-source models on major benchmarks, while the Turbo and Plus variants perform competitively against GPT-4o-mini and GPT-4o, respectively, at significantly lower inference cost. The series also underpins specialized models, including Qwen2.5-Math and Qwen2.5-Coder, demonstrating gains in both capability and cost-efficiency.
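The Turbo and Plus variants rely on MoE layers, in which a router activates only a few expert feed-forward networks per token. As a rough illustration, the sketch below implements generic top-k expert routing in PyTorch; the hidden sizes, expert count, and top-k value are placeholders and not the actual Qwen2.5-Turbo/Plus configuration.

```python
# Minimal sketch of generic top-k MoE routing (illustrative sizes, not Qwen2.5's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model=1024, d_ff=2816, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):
        # x: (num_tokens, d_model); route each token to its top-k experts.
        logits = self.router(x)                          # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # pick k experts per token
        weights = F.softmax(weights, dim=-1)             # normalize the k gate scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out
```

Only top_k of the n_experts MLPs run for each token, which is why an MoE model can keep inference cost far below that of a dense model with the same total parameter count.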

📝 Abstract
In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised fine-tuning with over 1 million samples, as well as multi-stage reinforcement learning. Post-training techniques enhance alignment with human preferences and notably improve long text generation, structured data analysis, and instruction following. To handle diverse and varied use cases effectively, we present the Qwen2.5 LLM series in a rich range of sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance against the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o, respectively. Additionally, as the foundation, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.
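Since the base and instruction-tuned checkpoints are released as open weights, a typical way to try them is a standard chat-style generation loop. The sketch below assumes the Hugging Face transformers library and an assumed repository id ("Qwen/Qwen2.5-7B-Instruct"); it is an illustrative usage pattern, not an excerpt from the report.

```python
# Hypothetical usage sketch with Hugging Face transformers; repo id and
# generation settings are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed open-weight instruct checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what post-training adds on top of pre-training."},
]
# Build the chat-formatted prompt, then generate a reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```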
Problem

Research questions and friction points this paper is trying to address.

Large Language Model
Multi-ability Enhancement
Cost-effectiveness Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Model
Multi-stage Learning
Performance Efficiency
👥 Authors
An Yang (Qwen)
Baosong Yang (Alibaba Inc.): Machine Learning; Large Language Model; Machine Translation
Beichen Zhang
Binyuan Hui (Qwen Team, Alibaba Group): Large Language Models; Code LLMs; Reasoning; Agent
Bo Zheng
Bowen Yu (Qwen Team, Alibaba Group): Post-training; Foundation Model
Chengyuan Li
Dayiheng Liu
Fei Huang
Haoran Wei
Huan Lin
Jian Yang
Jianhong Tu
Jianwei Zhang
Jianxin Yang
Jiaxin Yang
Jingren Zhou (Alibaba Group, Microsoft): Cloud Computing; Large Scale Distributed Systems; Machine Learning; Query Processing; Query
Junyang Lin (Qwen Team, Alibaba Group & Peking University): Natural Language Processing; Cross-Modal Representation Learning; Pretraining
Kai Dang
Keming Lu
Keqin Bao (University of Science and Technology of China): Large Language Models; Recommender Systems
Kexin Yang
Le Yu
Mei Li
Mingfeng Xue (unknown affiliation)
Pei Zhang
Qin Zhu
Rui Men (Qwen Team, Alibaba Group & Peking University): NLP
Runji Lin (Institute of Automation, Chinese Academy of Sciences): Reinforcement Learning; Multi-Agent System; Large Language Model
Tianhao Li
Tingyu Xia (Jilin University): Text Classification
Xingzhang Ren
Xuancheng Ren
Yang Fan (University of Science and Technology of China): Learning to Teach; Automated Machine Learning; Neural Architecture Search; Natural Language Processing; AI for Medicine
Yang Su (King's College London)
Yi-Chao Zhang
Yu Wan
Yuqi Liu
Zeyu Cui (Institute of Automation, Chinese Academy of Sciences): Code Generation; LLM; Recommendation System
Zhenru Zhang (Qwen Team, Alibaba Group): Large Language Model
Zihan Qiu (Qwen Team, Alibaba Group & IIIS, Tsinghua University): Mixture of Experts; Modular Deep Learning; Interpretability