Model Merging in Pre-training of Large Language Models

📅 2025-05-17
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
This work presents the first systematic exploration of model merging techniques embedded within large language model (LLM) pre-training. Addressing the lack of theoretical grounding and empirical validation for checkpoint fusion during pre-training, the authors propose a pre-training-oriented weight interpolation and gradient-aware fusion algorithm that supports both dense and Mixture-of-Experts (MoE) architectures at parameter scales from millions to over 100 billion, and enables cross-stage checkpoint fusion. They further establish a predictive model linking fusion outcomes to learning-rate annealing behavior. Key contributions include: (1) demonstrating seamless integration of model fusion into the full pre-training pipeline; (2) deriving principled guidelines for fusion strategy and hyperparameter design; (3) achieving an average +1.8% improvement in downstream task performance across multi-scale models, with up to a 37% reduction in training cost; and (4) open-sourcing a reproducible, practical guide for pre-training-aware model fusion.
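
At its core, checkpoint fusion of this kind reduces to a weighted average of parameters taken from checkpoints of the same training run. The paper's precise algorithm is not reproduced here; the following is a minimal PyTorch sketch of uniform weight interpolation over saved checkpoints, where the file paths, the flat state-dict layout, and the uniform weighting are illustrative assumptions rather than the paper's settings.

import torch

def merge_checkpoints(ckpt_paths, weights=None):
    # Weighted average of parameters from checkpoints of one pre-training run.
    # ckpt_paths: checkpoints saved with torch.save(model.state_dict(), path);
    #             all must share the same architecture and parameter names.
    # weights:    optional merge coefficients summing to 1; defaults to uniform.
    if weights is None:
        weights = [1.0 / len(ckpt_paths)] * len(ckpt_paths)
    assert abs(sum(weights) - 1.0) < 1e-6, "merge weights should sum to 1"

    merged = None
    for path, w in zip(ckpt_paths, weights):
        state = torch.load(path, map_location="cpu")
        if merged is None:
            merged = {k: w * v.float() for k, v in state.items()}
        else:
            for k, v in state.items():
                merged[k] += w * v.float()
    return merged

# Hypothetical usage: merge the last three checkpoints of a constant-LR stage.
# merged_state = merge_checkpoints(
#     ["ckpt_step_08000.pt", "ckpt_step_09000.pt", "ckpt_step_10000.pt"])
# model.load_state_dict(merged_state)

Non-uniform weights turn the same routine into a general weighted interpolation; gradient-aware variants would additionally condition the coefficients on optimizer state, which is beyond this sketch.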

📝 Abstract
Model merging has emerged as a promising technique for enhancing large language models, though its application in large-scale pre-training remains relatively unexplored. In this paper, we present a comprehensive investigation of model merging techniques during the pre-training process. Through extensive experiments with both dense and Mixture-of-Experts (MoE) architectures ranging from millions to over 100 billion parameters, we demonstrate that merging checkpoints trained with constant learning rates not only achieves significant performance improvements but also enables accurate prediction of annealing behavior. These improvements lead to both more efficient model development and significantly lower training costs. Our detailed ablation studies on merging strategies and hyperparameters provide new insights into the underlying mechanisms while uncovering novel applications. Through comprehensive experimental analysis, we offer the open-source community practical pre-training guidelines for effective model merging.
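
Merging checkpoints saved at regular intervals from a constant-learning-rate run can also be done online, without keeping every checkpoint on disk, by maintaining a running average of the weights. The sketch below uses an exponential moving average (EMA) updated every fixed number of optimizer steps; treating EMA as the merging rule, and the decay and update interval as its hyperparameters, is an illustrative assumption rather than the scheme ablated in the paper.

import copy
import torch

class WeightEMA:
    # Running exponential moving average of model parameters. Updating it every
    # `interval` optimizer steps approximates merging periodic checkpoints from
    # a constant-learning-rate run without storing each one on disk.
    def __init__(self, model, decay=0.95, interval=1000):
        self.decay = decay
        self.interval = interval
        self.shadow = {k: v.detach().clone().float()
                       for k, v in model.state_dict().items()}

    def maybe_update(self, model, step):
        # Call once per optimizer step; the average only absorbs the current
        # weights at the chosen interval.
        if step == 0 or step % self.interval != 0:
            return
        with torch.no_grad():
            for k, v in model.state_dict().items():
                self.shadow[k].mul_(self.decay).add_(v.float(), alpha=1.0 - self.decay)

    def merged_state_dict(self):
        # Snapshot of the merged weights, e.g. for loading into a copy of the
        # model and evaluating it.
        return copy.deepcopy(self.shadow)

The merged parameters can then be evaluated in place of an explicitly annealed model, which is the kind of comparison the abstract's claim about predicting annealing behavior suggests.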
Problem

Research questions and friction points this paper is trying to address.

Explores model merging in large-scale pre-training of LLMs
Investigates performance gains from merging constant-LR checkpoints
Provides guidelines for efficient model merging strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Merging checkpoints with constant learning rates
Applying merging to dense and MoE architectures
Providing practical pre-training guidelines
Authors

Yunshui Li (Seed Team | Prev. Qwen, SIAT) · Natural Language Processing; Multimodal (Vision-and-Language) Representation Learning
Yiyuan Ma (ByteDance Seed)
Shen Yan (ByteDance Seed)
Chaoyi Zhang (ByteDance Seed)
Jing Liu (ByteDance Seed)
Jianqiao Lu (Researcher at Seed Foundation Model team) · Large language model; Online Matching Theory
Ziwen Xu (ByteDance Seed)
Mengzhao Chen (ByteDance Seed)
Minrui Wang (ByteDance Seed)
Shiyi Zhan (ByteDance Seed)
Jin Ma (ByteDance Seed)
Xunhao Lai (Peking University) · Machine Learning; Natural Language Processing; Large language model
Yao Luo (ByteDance Seed)
Xingyan Bin (ByteDance Seed)
Hongbin Ren (ByteDance Seed)
Mingji Han (Tencent) · database; networking; system
Wenhao Hao (ByteDance Seed)
Bairen Yi (Unknown affiliation) · Computer Systems
LingJun Liu (ByteDance Seed)
Bole Ma (ByteDance Seed)
Xiaoying Jia (ByteDance Seed)
Zhou Xun (ByteDance Seed)
Liang Xiang (ByteDance Seed)
Yonghui Wu (ByteDance Seed)