Exploring Polyglot Harmony: On Multilingual Data Allocation for Large Language Models Pretraining

📅 2025-09-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Multilingual data mixing ratios in large language model (LLM) pretraining lack theoretical grounding, leading to imbalanced cross-lingual performance. Method: We propose Climb, the first framework to explicitly model implicit inter-lingual interactions, decomposing optimal data allocation into two interpretable, differentiable subproblems: marginal benefit equalization and allocation vector magnitude maximization. Climb requires no manual hyperparameter tuning or auxiliary supervision signals, unlike heuristic or statistics-driven approaches. Results: Experiments on XGLM and BLOOM benchmarks show that Climb achieves state-of-the-art performance on multilingual understanding tasks (e.g., XTREME, XNLI) using only 60%–80% of the training tokens required by baseline models of comparable scale. It significantly improves training efficiency and enhances cross-lingual generalization robustness.

๐Ÿ“ Abstract
Large language models (LLMs) have become integral to a wide range of applications worldwide, driving an unprecedented global demand for effective multilingual capabilities. Central to achieving robust multilingual performance is the strategic allocation of language proportions within training corpora. However, determining optimal language ratios is highly challenging due to intricate cross-lingual interactions and sensitivity to dataset scale. This paper introduces Climb (Cross-Lingual Interaction-aware Multilingual Balancing), a novel framework designed to systematically optimize multilingual data allocation. At its core, Climb introduces a cross-lingual interaction-aware language ratio, explicitly quantifying each language's effective allocation by capturing inter-language dependencies. Leveraging this ratio, Climb proposes a principled two-step optimization procedure: first equalizing marginal benefits across languages, then maximizing the magnitude of the resulting language allocation vectors, which significantly simplifies the inherently complex multilingual optimization problem. Extensive experiments confirm that Climb can accurately measure cross-lingual interactions across various multilingual settings. LLMs trained with Climb-derived proportions consistently achieve state-of-the-art multilingual performance, remaining competitive even with open-source LLMs trained on more tokens.
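To make the first optimization step concrete, here is a toy sketch of marginal benefit equalization. Everything in it is an illustrative assumption, not the paper's actual formulation: we posit a diminishing-returns marginal benefit `w_i / (p_i + c)` for language `i` at proportion `p_i` (the weights `w_i` and smoothing constant `c` are invented), and solve for the proportions at which all marginal benefits are equal under a sum-to-one constraint. The second step (maximizing the allocation vector's magnitude) is not sketched here.

```python
# Toy illustration of "marginal benefit equalization" under an assumed
# diminishing-returns model; NOT the paper's actual math.
# Model: marginal benefit of language i at proportion p_i is w_i / (p_i + c).
# Equalizing, w_i / (p_i + c) = lam for all i, with sum(p_i) = 1, gives the
# closed form: lam = sum(w) / (1 + n*c), p_i = w_i / lam - c.

def equalize_marginal_benefits(weights, c=0.05):
    """Return proportions p_i that equalize w_i / (p_i + c) with sum(p_i) = 1.

    Assumes weights are large enough relative to c that every p_i >= 0;
    a real allocator would also enforce non-negativity.
    """
    n = len(weights)
    lam = sum(weights) / (1 + n * c)  # common marginal benefit at the optimum
    return [w / lam - c for w in weights]

# Hypothetical standalone-benefit weights for three languages.
weights = [3.0, 2.0, 1.0]
props = equalize_marginal_benefits(weights)
print([round(p, 4) for p in props])  # proportions sum to 1
print([round(w / (p + 0.05), 4) for w, p in zip(weights, props)])  # all equal
```

Under this model the higher-weight language gets a larger share, but the shares are flatter than the raw weights: equalizing marginal benefits pulls allocation away from languages whose returns have already saturated.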
Problem

Research questions and friction points this paper is trying to address.

Optimizing multilingual data allocation for LLM pretraining
Quantifying cross-lingual interactions and language dependencies
Maximizing multilingual performance through strategic language ratios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Lingual Interaction-aware Multilingual Balancing framework
Two-step optimization procedure for language allocation
Quantifying effective allocation via inter-language dependencies