MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current data filtering methods for large language models exhibit strong English bias, leading to imbalanced multilingual pretraining data quality. To address this, we propose MuRating—the first unified data quality evaluation framework tailored for multilingual LLM pretraining. Its core innovation lies in cross-lingually projecting fine-grained quality signals (e.g., factual consistency, information density) from high-quality English corpora onto 17 target languages via translation-based signal transfer, enabling annotation-free multilingual scoring. MuRating aggregates outputs from multiple English-only scorers using pairwise comparison and jointly trains a multilingual evaluator on monolingual, cross-lingual, and parallel corpora. Evaluated on a 1.2B-parameter LLaMA backbone, MuRating significantly improves accuracy on both English and multilingual benchmarks—especially on knowledge-intensive tasks—outperforming strong baselines including QuRater and AskLLM.

📝 Abstract
Data quality is a critical driver of large language model performance, yet existing model-based selection methods focus almost exclusively on English. We introduce MuRating, a scalable framework that transfers high-quality English data-quality signals into a single rater for 17 target languages. MuRating aggregates multiple English "raters" via pairwise comparisons to learn unified document-quality scores, then projects these judgments through translation to train a multilingual evaluator on monolingual, cross-lingual, and parallel text pairs. Applied to web data, MuRating selects balanced subsets of English and multilingual content to pretrain a 1.2B-parameter LLaMA model. Compared to strong baselines, including QuRater, AskLLM, and DCLM, our approach boosts average accuracy on both English benchmarks and multilingual evaluations, with especially large gains on knowledge-intensive tasks. We further analyze translation fidelity, selection biases, and underrepresentation of narrative material, outlining directions for future work.
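The "balanced subsets" step in the abstract amounts to keeping the top-scoring documents per language up to a per-language quota. A minimal sketch of that selection logic (function and parameter names are assumptions for illustration, not the paper's actual implementation):

```python
def select_balanced(docs, scores, quota_per_lang):
    """Keep the highest-scoring documents per language, up to a fixed quota.

    docs: list of (doc_id, lang) tuples.
    scores: quality score for each document, parallel to `docs`.
    quota_per_lang: dict mapping language code -> number of docs to keep.
    """
    # Group (score, doc_id) pairs by language.
    by_lang = {}
    for (doc_id, lang), score in zip(docs, scores):
        by_lang.setdefault(lang, []).append((score, doc_id))

    selected = []
    for lang, items in by_lang.items():
        items.sort(reverse=True)  # highest score first
        keep = quota_per_lang.get(lang, 0)
        selected.extend(doc_id for _, doc_id in items[:keep])
    return selected
```

In practice the quotas would encode the desired English/multilingual balance; the paper does not specify this exact interface, so treat it as a schematic.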
Problem

Research questions and friction points this paper is trying to address.

Develop a multilingual data-quality rater for non-English languages
Transfer English data-quality signals to 17 target languages
Improve multilingual pretraining via balanced data selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfers English quality signals into a single multilingual rater
Aggregates multiple English raters via pairwise comparisons
Projects quality judgments through translation to train the evaluator
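Aggregating multiple raters via pairwise comparisons is commonly done with a Bradley-Terry-style model: each comparison says one document beat another, and latent quality scores are fit so that higher-scored documents win more often. A minimal sketch using a simple MM-style update (a hypothetical illustration under that assumption, not the paper's exact estimator):

```python
def bradley_terry_scores(pairs, n_docs, iters=200):
    """Estimate latent quality scores from pairwise preferences.

    pairs: list of (winner, loser) document indices, where the winner
    was judged higher quality by some rater.
    Returns normalized scores (higher = preferred).
    """
    strengths = [1.0] * n_docs
    wins = [0] * n_docs
    for winner, _ in pairs:
        wins[winner] += 1

    for _ in range(iters):
        # MM update: strength_i = wins_i / sum over i's comparisons of
        # 1 / (strength_i + strength_opponent).
        denom = [0.0] * n_docs
        for winner, loser in pairs:
            d = 1.0 / (strengths[winner] + strengths[loser])
            denom[winner] += d
            denom[loser] += d
        strengths = [
            wins[i] / denom[i] if denom[i] > 0 else strengths[i]
            for i in range(n_docs)
        ]
        total = sum(strengths)
        strengths = [s / total for s in strengths]  # fix the scale
    return strengths
```

With comparisons where document 0 beats 1 and 2, and 1 beats 2, the fitted scores recover the ordering 0 > 1 > 2.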