OptiMer: Optimal Distribution Vector Merging Is Better than Data Mixing for Continual Pre-Training

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost and inefficiency of tuning data mixture ratios in continual pretraining, where suboptimal choices often lead to significant computational waste. The authors propose OptiMer, a method that decouples mixture ratio selection from training by first training separate models on individual datasets and extracting their distribution vectors. This enables formulating the optimal mixing strategy as a post-training Bayesian optimization problem. OptiMer can generate customized models on demand without retraining and incorporates parameter shift modeling to enhance optimization accuracy. Experiments on Gemma-3 27B across multilingual and cross-domain settings demonstrate that OptiMer consistently outperforms baseline data mixing and model averaging approaches while reducing search costs by 15–35×; the resulting mixture weights also effectively guide subsequent full retraining.
📝 Abstract
Continual pre-training is widely used to adapt LLMs to target languages and domains, yet the mixture ratio of training data remains a sensitive hyperparameter that is expensive to tune: it must be fixed before training begins, and a suboptimal choice can waste weeks of compute. In this work, we propose OptiMer, which decouples ratio selection from training: we train one CPT model per dataset, extract each model's distribution vector, which represents the parameter shift induced by that dataset, and search for optimal composition weights post-hoc via Bayesian optimization. Experiments on Gemma 3 27B across languages (Japanese, Chinese) and domains (Math, Code) show that OptiMer consistently outperforms data mixture and model averaging baselines with 15–35× lower search cost. Key findings reveal that 1) the optimized weights can be interpreted as data mixture ratios, and retraining with these ratios improves data mixture CPT, and 2) the same vector pool can be re-optimized for a given objective without any retraining, producing target-tailored models on demand. Our work establishes that data mixture ratio selection, traditionally a pre-training decision, can be reformulated as a post-hoc optimization over distribution vectors, offering a more flexible paradigm for continual pre-training.
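The core idea in the abstract can be sketched in a few lines: each per-dataset CPT model contributes a distribution vector (its parameter shift from the base model), merged models are weighted sums of these vectors added back to the base, and the weights are searched post-hoc. The sketch below is illustrative only, with toy parameter vectors and a synthetic objective; the paper uses Bayesian optimization over downstream metrics, for which plain random search over the simplex stands in here. All names (`base`, `experts`, `merge`, `evaluate`) are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a base model and one CPT model per dataset (3 datasets).
base = rng.normal(size=8)
experts = [base + rng.normal(scale=0.1, size=8) for _ in range(3)]

# "Distribution vectors": the parameter shift each dataset induces.
vectors = [e - base for e in experts]

def merge(weights):
    # Merged model = base + weighted sum of per-dataset shifts.
    return base + sum(w * v for w, v in zip(weights, vectors))

def evaluate(params, target):
    # Synthetic objective (closeness to an "ideal" model); the paper
    # would evaluate downstream task performance instead.
    return -np.linalg.norm(params - target)

# Synthetic target: a model that mostly needs datasets 0 and 1.
target = base + 0.5 * vectors[0] + 0.3 * vectors[1]

# Post-hoc weight search without any retraining. The paper uses Bayesian
# optimization; random search over the probability simplex is a simple stand-in.
best_w, best_score = None, -np.inf
for _ in range(500):
    w = rng.dirichlet(np.ones(3))          # candidate mixture weights, sum to 1
    score = evaluate(merge(w), target)
    if score > best_score:
        best_w, best_score = w, score

print("best weights:", best_w)
```

Because `merge` only adds precomputed vectors, each candidate evaluation skips training entirely, which is where the reported 15–35× search-cost reduction comes from; the same vector pool can be re-searched for a different `target` on demand.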
Problem

Research questions and friction points this paper is trying to address.

continual pre-training
data mixture ratio
distribution vector
hyperparameter tuning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

distribution vector merging
continual pre-training
Bayesian optimization
data mixture ratio
post-hoc optimization
Haiyue Song
National Institute of Information and Communications Technology, Kyoto, Japan
Masao Utiyama
NICT