DeepSeek: Paradigm Shifts and Technical Evolution in Large AI Models

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the core challenges of high computational cost and closed ecosystems that hinder large language model (LLM) development. Methodologically, it introduces an LLM paradigm characterized by low cost, high performance, and open-source accessibility. The paradigm combines Multi-head Latent Attention (MLA), Mixture-of-Experts (MoE), Multi-Token Prediction (MTP), and Group Relative Policy Optimization (GRPO), augmented by system-level engineering optimizations that deliver end-to-end improvements in training acceleration, inference efficiency, and scalable model design. Key contributions include the open release of the DeepSeek-V3 and R1 model families, which match state-of-the-art proprietary models across multiple benchmarks while substantially reducing training and inference costs. The study systematically analyzes how these architectures depart from mainstream designs, fostering algorithm-architecture-systems co-innovation, accelerating the open LLM ecosystem, and reshaping the global AI competitive landscape.
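The summary credits MLA with much of the inference-efficiency gain but does not spell out the mechanism: hidden states are projected down to a small latent vector, only that latent is cached, and keys and values are reconstructed from it at attention time. Below is a minimal PyTorch sketch of that low-rank KV-compression idea; the class name and dimensions are illustrative assumptions, and causal masking plus MLA's decoupled rotary embeddings are omitted, so this is not DeepSeek-V3's actual implementation.

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Sketch of MLA's core idea: cache a small latent instead of full K/V.

    The KV cache shrinks from n_heads * d_head * 2 values per token to
    d_latent values per token; keys/values are re-expanded on the fly.
    Dimensions are illustrative, not DeepSeek-V3's configuration.
    """
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_down = nn.Linear(d_model, d_latent, bias=False)  # compress to latent
        self.w_up_k = nn.Linear(d_latent, d_model, bias=False)  # latent -> keys
        self.w_up_v = nn.Linear(d_latent, d_model, bias=False)  # latent -> values
        self.w_o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, latent_cache=None):
        b, t, _ = x.shape
        latent = self.w_down(x)                  # (b, t, d_latent): all that is cached
        if latent_cache is not None:             # prepend latents from earlier steps
            latent = torch.cat([latent_cache, latent], dim=1)
        q = self.w_q(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.w_up_k(latent).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.w_up_v(latent).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.w_o(out), latent             # hand latent back as the KV cache
```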

📝 Abstract
DeepSeek, a Chinese Artificial Intelligence (AI) startup, has released its V3 and R1 series models, which attracted global attention due to their low cost, high performance, and open-source advantages. This paper begins by reviewing the evolution of large AI models, focusing on paradigm shifts, the mainstream Large Language Model (LLM) paradigm, and the DeepSeek paradigm. Subsequently, the paper highlights novel algorithms introduced by DeepSeek, including Multi-head Latent Attention (MLA), Mixture-of-Experts (MoE), Multi-Token Prediction (MTP), and Group Relative Policy Optimization (GRPO). The paper then explores DeepSeek's engineering breakthroughs in LLM scaling, training, inference, and system-level optimization architecture. Moreover, the impact of the DeepSeek models on the competitive AI landscape is analyzed through comparisons with mainstream LLMs across various fields. Finally, the paper reflects on the insights gained from DeepSeek's innovations and discusses future trends in the technical and engineering development of large AI models, particularly in data, training, and reasoning.
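Of the algorithms listed above, GRPO is the most compact to illustrate: rather than training a separate value critic, it samples a group of responses per prompt and scores each response against the group's own mean and standard deviation. The sketch below shows only that advantage computation, assuming scalar per-response rewards; the clipped policy-gradient loss and KL penalty that complete the objective are omitted.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each response's reward by the
    statistics of its own sampling group, so no learned critic is needed.

    rewards: (n_prompts, group_size) scalar reward per sampled response.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy usage: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0],
                        [0.2, 0.9, 0.4, 0.1]])
print(grpo_advantages(rewards))
```

Dropping the critic network is what removes a full model's worth of memory and forward passes relative to PPO-style preference optimization.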
Problem

Research questions and friction points this paper is trying to address.

Analyzing DeepSeek's novel AI algorithms and paradigm shifts
Exploring engineering breakthroughs in LLM scaling and optimization
Assessing DeepSeek's impact on competitive AI landscape
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel MLA and MoE algorithms enhance performance (see the MoE routing sketch after this list)
MTP and GRPO optimize training efficiency
System-level architecture boosts scaling and inference
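As a concrete sketch of the routing idea behind the first bullet: a small router scores every expert for each token and only the top-k experts run, so per-token compute stays roughly constant while total parameter count grows with the number of experts. The layer below is an illustrative top-k MoE in PyTorch under assumed sizes; DeepSeekMoE's shared experts, fine-grained expert segmentation, and load-balancing strategy are deliberately left out, and the per-expert loop is written for clarity rather than speed.

```python
import torch
import torch.nn as nn

class TopKMoELayer(nn.Module):
    """Illustrative top-k mixture-of-experts layer (not DeepSeekMoE itself):
    a router picks k experts per token; only those experts compute."""
    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)])

    def forward(self, x):                              # x: (tokens, d_model)
        gates = torch.softmax(self.router(x), dim=-1)  # score every expert
        topv, topi = gates.topk(self.k, dim=-1)        # keep k experts per token
        topv = topv / topv.sum(dim=-1, keepdim=True)   # renormalize kept gates
        out = torch.zeros_like(x)
        for slot in range(self.k):                     # dense dispatch, for clarity
            for e in range(len(self.experts)):
                mask = topi[:, slot] == e              # tokens routed to expert e
                if mask.any():
                    out[mask] += topv[mask, slot, None] * self.experts[e](x[mask])
        return out
```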
Luolin Xiong
Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, and the Engineering Research Center of Process System Engineering, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
Haofen Wang
Tongji University
Knowledge Graph · Natural Language Processing · Retrieval Augmented Generation
Xi Chen
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai 200433, China
Lu Sheng
School of Software, Beihang University
Embodied AI · 3D Vision · Machine Learning
Yun Xiong
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai 200433, China
Jingping Liu
East China University of Science and Technology (ECUST)
Large Language Model · Knowledge Graph
Yanghua Xiao
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai 200433, China
Huajun Chen
AZFT Joint Lab for Knowledge Engine Hangzhou Innovation Center, College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China
Qing-Long Han
School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne VIC 3122, Australia
Yang Tang
Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, and the Engineering Research Center of Process System Engineering, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China