Checkpoint Merging via Bayesian Optimization in LLM Pretraining

📅 2024-03-28
🏛️ arXiv.org
📈 Citations: 17
Influential: 1
🤖 AI Summary
To address the high computational and environmental costs of large language model (LLM) pretraining, this paper proposes a Bayesian optimization–based checkpoint merging method that automatically identifies optimal fusion weights across multiple checkpoints sharing the same training trajectory. The approach is, per the authors, the first to apply Bayesian optimization to LLM checkpoint merging, treating the pretraining loss as a black-box objective function, which enables cross-task and cross-domain generalization without domain-specific fine-tuning. Experiments show that the method significantly outperforms single-checkpoint baselines on multi-task evaluation benchmarks, achieving gains comparable to extended training while incurring only minimal additional overhead. It also exhibits low dependence on the held-out dataset and robust generalization across diverse downstream tasks and domains.
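As a rough illustration of the core idea, the sketch below merges two checkpoints (represented as dictionaries of NumPy arrays) by a convex weight λ and searches for the λ that minimizes a held-out loss treated as a black box. The checkpoint contents and the toy loss are invented for the example, and plain random search stands in for the paper's Bayesian optimization so the snippet stays self-contained:

```python
import numpy as np

def merge_checkpoints(ckpt_a, ckpt_b, lam):
    """Convex combination of two checkpoints sharing one architecture."""
    return {k: lam * ckpt_a[k] + (1.0 - lam) * ckpt_b[k] for k in ckpt_a}

def held_out_loss(ckpt, x, y):
    """Toy black-box objective: held-out squared error of a linear model."""
    pred = x @ ckpt["w"] + ckpt["b"]
    return float(np.mean((pred - y) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 4))
true_w = np.ones(4)
y = x @ true_w

# Two hypothetical checkpoints from the same training trajectory,
# each offset from the optimum in opposite directions.
ckpt_a = {"w": true_w + 0.5, "b": np.zeros(1)}
ckpt_b = {"w": true_w - 0.5, "b": np.zeros(1)}

# Search over the merging weight; a real implementation would propose
# candidates with a Gaussian-process surrogate instead of sampling uniformly.
best_lam, best_loss = None, float("inf")
for lam in rng.uniform(0.0, 1.0, size=64):
    loss = held_out_loss(merge_checkpoints(ckpt_a, ckpt_b, lam), x, y)
    if loss < best_loss:
        best_lam, best_loss = lam, loss

print(best_lam, best_loss)
```

Because the two checkpoints here deviate symmetrically from the optimum, the merged model at the best λ beats either endpoint, which mirrors the paper's observation that a well-weighted merge can recover gains akin to extended training.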

📝 Abstract
The rapid proliferation of large language models (LLMs) such as GPT-4 and Gemini underscores the intense demand for resources during their training processes, posing significant challenges due to substantial computational and environmental costs. To alleviate this issue, we propose checkpoint merging in LLM pretraining. This method utilizes LLM checkpoints with shared training trajectories, and is rooted in an extensive search-space exploration for the best merging weight via Bayesian optimization. Through various experiments, we demonstrate that: (1) Our proposed methodology exhibits the capacity to augment pretraining, presenting an opportunity akin to obtaining substantial benefits at minimal cost; (2) Our proposed methodology, despite requiring a given held-out dataset, still demonstrates robust generalization capabilities across diverse domains, a pivotal aspect in pretraining.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational and environmental costs in LLM pretraining
Optimizing checkpoint merging via Bayesian exploration
Enhancing generalization across domains with minimal resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Checkpoint merging in LLM pretraining
Bayesian optimization for weight search
Robust generalization across diverse domains
👥 Authors
Deyuan Liu — Harbin Institute of Technology
Zecheng Wang — Harbin Institute of Technology
Bingning Wang — Baichuan Inc.
Weipeng Chen — Baichuan Inc.
Chunshan Li — Harbin Institute of Technology
Zhiying Tu — Harbin Institute of Technology
Dianhui Chu — Harbin Institute of Technology
Bo Li — Peking University
Dianbo Sui — Harbin Institute of Technology