ELO: Efficient Layer-Specific Optimization for Continual Pretraining of Multilingual LLMs

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost and source-language performance degradation commonly encountered in continual pretraining of multilingual large language models. The authors propose an efficient layer-specific optimization approach that, for the first time, identifies and exploits the critical roles of the first and last layers in cross-lingual transfer. By detaching and pretraining only these key layers on the target language, then applying a layer-alignment fine-tuning step, the method substantially reduces both the number of trainable parameters and the computational overhead. Experiments show up to a 6.2% performance gain on target languages, a training speedup of up to 6.46×, and effective preservation of source-language capabilities, such as English proficiency.

📝 Abstract
We propose an efficient layer-specific optimization (ELO) method designed to enhance continual pretraining (CP) for specific languages in multilingual large language models (MLLMs). This approach addresses the common challenges of high computational cost and degradation of source language performance associated with traditional CP. The ELO method consists of two main stages: (1) ELO Pretraining, where a small subset of specific layers, identified in our experiments as the critically important first and last layers, is detached from the original MLLM and trained with the target language. This significantly reduces not only the number of trainable parameters but also the total parameters computed during the forward pass, minimizing GPU memory consumption and accelerating the training process. (2) Layer Alignment, where the newly trained layers are reintegrated into the original model, followed by a brief full fine-tuning step on a small dataset to align the parameters. Experimental results demonstrate that the ELO method achieves a training speedup of up to 6.46 times compared to existing methods, while improving target language performance by up to 6.2% on qualitative benchmarks and effectively preserving source language (English) capabilities.
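The layer-selection idea behind stage (1) can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation: the helper names (`elo_trainable_layers`, `trainable_fraction`) and the equal-size-layer model are hypothetical, and the sketch only shows why training just the first and last layers shrinks the trainable-parameter budget.

```python
# Hedged sketch of ELO's layer selection (stage 1), not the paper's code.
# A "model" is represented as a list of per-layer parameter counts.

def elo_trainable_layers(num_layers, k_first=1, k_last=1):
    """Indices of the layers ELO would detach and train:
    the first k_first and last k_last layers (here 1 + 1, per the paper)."""
    front = list(range(k_first))
    back = list(range(num_layers - k_last, num_layers))
    return sorted(set(front + back))

def trainable_fraction(layer_param_counts, indices):
    """Fraction of total parameters that are actually trained."""
    total = sum(layer_param_counts)
    trainable = sum(layer_param_counts[i] for i in indices)
    return trainable / total

# Toy example: a 32-layer model with equal parameters per layer.
layers = [100_000] * 32
idx = elo_trainable_layers(len(layers))
print(idx)                              # → [0, 31]
print(trainable_fraction(layers, idx))  # → 0.0625
```

With only 2 of 32 layers detached, roughly 6% of parameters are trained, which is consistent with the abstract's claims of reduced GPU memory use and faster training; stage (2) would then reinsert these layers and briefly fine-tune the full model to realign parameters.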
Problem

Research questions and friction points this paper is trying to address.

Continual Pretraining
Multilingual LLMs
Computational Cost
Source-Language Performance Degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient Layer-Specific Optimization
Continual Pretraining
Multilingual LLMs
Parameter-Efficient Training
Layer Alignment