🤖 AI Summary
Current data filtering methods for large language models exhibit strong English bias, leading to imbalanced multilingual pretraining data quality. To address this, we propose MuRating—the first unified data quality evaluation framework tailored for multilingual LLM pretraining. Its core innovation lies in cross-lingually projecting fine-grained quality signals (e.g., factual consistency, information density) from high-quality English corpora onto 17 target languages via translation-based signal transfer, enabling annotation-free multilingual scoring. MuRating aggregates outputs from multiple English-only scorers using pairwise comparison and jointly trains a multilingual evaluator on monolingual, cross-lingual, and parallel corpora. Evaluated on a 1.2B-parameter LLaMA backbone, MuRating significantly improves accuracy on both English and multilingual benchmarks—especially on knowledge-intensive tasks—outperforming strong baselines including QuRater and AskLLM.
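The summary says MuRating aggregates multiple English-only scorers via pairwise comparison into unified quality scores. The abstract does not specify the aggregation model, but a standard choice for turning pairwise preferences into scalar scores is Bradley–Terry fitting; the sketch below is a minimal, assumption-laden illustration of that idea (the function name, the win-count input format, and the use of Bradley–Terry itself are all illustrative, not taken from the paper).

```python
def bradley_terry(n_items, wins, iters=200):
    """Fit Bradley-Terry strengths from pairwise win counts.

    wins[(i, j)] = number of times document i was preferred over
    document j (here, preferences pooled across several hypothetical
    English quality raters). Returns one positive strength per
    document, usable as a unified quality score.
    """
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            num, den = 0.0, 0.0
            for j in range(n_items):
                if i == j:
                    continue
                w_ij = wins.get((i, j), 0)
                w_ji = wins.get((j, i), 0)
                num += w_ij
                if w_ij + w_ji > 0:
                    # Standard MM update: p_i <- W_i / sum_j n_ij/(p_i+p_j)
                    den += (w_ij + w_ji) / (p[i] + p[j])
            new_p.append(num / den if den > 0 else p[i])
        s = sum(new_p)
        p = [x * n_items / s for x in new_p]  # normalize for stability
    return p

# Toy example: doc 0 usually beats docs 1 and 2; doc 1 usually beats doc 2.
wins = {(0, 1): 8, (1, 0): 2, (1, 2): 7, (2, 1): 3, (0, 2): 9, (2, 0): 1}
scores = bradley_terry(3, wins)
```

The fitted strengths recover the expected ordering (doc 0 > doc 1 > doc 2), giving a single scalar score per document that can then be regressed onto by the multilingual evaluator.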
📝 Abstract
Data quality is a critical driver of large language model performance, yet existing model-based selection methods focus almost exclusively on English. We introduce MuRating, a scalable framework that transfers high-quality English data-quality signals into a single rater for 17 target languages. MuRating aggregates multiple English "raters" via pairwise comparisons to learn unified document-quality scores, then projects these judgments through translation to train a multilingual evaluator on monolingual, cross-lingual, and parallel text pairs. Applied to web data, MuRating selects balanced subsets of English and multilingual content to pretrain a 1.2B-parameter LLaMA model. Compared to strong baselines, including QuRater, AskLLM, and DCLM, our approach boosts average accuracy on both English benchmarks and multilingual evaluations, with especially large gains on knowledge-intensive tasks. We further analyze translation fidelity, selection biases, and the underrepresentation of narrative material, outlining directions for future work.
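The abstract describes projecting English pairwise judgments through translation to build monolingual, cross-lingual, and parallel training pairs. A minimal sketch of that data-construction step is below; `translate` is a hypothetical machine-translation hook, and the exact pairing scheme is an assumption based on the three pair types the abstract names, not the paper's actual recipe.

```python
def build_training_pairs(english_pairs, translate):
    """Project English quality judgments into multilingual training pairs.

    english_pairs: list of (winner_text, loser_text, target_lang),
    where the winner was judged higher quality by the English raters.
    translate(text, lang): hypothetical MT function (assumption).
    Returns (text_a, text_b, label) triples; label=1 means text_a
    is the higher-quality document.
    """
    examples = []
    for win, lose, lang in english_pairs:
        win_t = translate(win, lang)
        lose_t = translate(lose, lang)
        # Monolingual pair in the target language: both sides translated.
        examples.append((win_t, lose_t, 1))
        # Cross-lingual pairs: mix English and translated documents so the
        # evaluator learns a language-agnostic notion of quality.
        examples.append((win, lose_t, 1))
        examples.append((win_t, lose, 1))
        # Parallel (original English) pair, preserving the source signal.
        examples.append((win, lose, 1))
    return examples
```

Each English judgment thus yields four training examples, which is one plausible way a single set of English labels can supervise a rater across all 17 target languages without per-language annotation.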