Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models

📅 2025-04-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language model (LLM) pre-training datasets suffer from opacity in composition, and existing data filtering methods typically optimize a single dimension, such as quality, diversity, or redundancy, failing to assess data holistically. Method: The authors propose PRRC, a four-dimensional evaluation framework (Professionalism, Readability, Reasoning, Cleanliness), coupled with Meta-rater, a multi-dimensional weighted selection method. Meta-rater trains small proxy models and fits a regression model that predicts validation loss from combinations of quality scores, learning optimal weights to fuse multiple quality metrics; the SlimPajama-627B corpus is annotated along 25 fine-grained quality dimensions. Results: A 1.3B-parameter model trained on Meta-rater-selected data converges 2× faster and gains +3.23 average points on downstream tasks; the benefits scale to a 3.3B model trained on 100B tokens; and the 25-metric annotations of SlimPajama-627B are released.

📝 Abstract
The composition of pre-training datasets for large language models (LLMs) remains largely undisclosed, hindering transparency and efforts to optimize data quality, a critical driver of model performance. Current data selection methods, such as natural language quality assessments, diversity-based filters, and classifier-based approaches, are limited by single-dimensional evaluation or redundancy-focused strategies. To address these gaps, we propose PRRC to evaluate data quality across Professionalism, Readability, Reasoning, and Cleanliness. We further introduce Meta-rater, a multi-dimensional data selection method that integrates these dimensions with existing quality metrics through learned optimal weightings. Meta-rater employs proxy models to train a regression model that predicts validation loss, enabling the identification of optimal combinations of quality scores. Experiments demonstrate that Meta-rater doubles convergence speed for 1.3B parameter models and improves downstream task performance by 3.23, with scalable benefits observed in 3.3B models trained on 100B tokens. Additionally, we release the annotated SlimPajama-627B dataset, labeled across 25 quality metrics (including PRRC), to advance research in data-centric LLM development. Our work establishes that holistic, multi-dimensional quality integration significantly outperforms conventional single-dimension approaches, offering a scalable paradigm for enhancing pre-training efficiency and model capability.
Problem

Research questions and friction points this paper is trying to address.

How to evaluate data quality across multiple dimensions (Professionalism, Readability, Reasoning, Cleanliness) rather than along a single axis.
How to improve pre-training efficiency and model performance via multi-dimensional data selection.
Current data selection methods are limited to single-dimensional evaluation or redundancy-focused strategies.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional data selection with PRRC metrics
Proxy models predict optimal quality score combinations
Releases annotated SlimPajama-627B dataset for research
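The proxy-model weighting step can be sketched as follows. This is a minimal illustration with simulated numbers under stated assumptions (the linear form of the regressor, the variable names, and the simulated losses are all assumptions), not the paper's implementation: each proxy run records the quality-score weighting used to select its data and the validation loss it reached; a regression fit to those records then predicts loss for unseen weightings, and the weighting with the lowest predicted loss is used to fuse per-document quality scores.

```python
# Hypothetical sketch of Meta-rater's weight learning (simulated data;
# names and the linear regressor form are assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
k = 4  # e.g., Professionalism, Readability, Reasoning, Cleanliness

# Pretend records from proxy-model runs: each row of W is the weight
# vector used to select that run's training data; val_loss is the
# validation loss the corresponding proxy model reached.
W = rng.dirichlet(np.ones(k), size=64)
true_w = np.array([0.4, 0.1, 0.3, 0.2])   # pretend-optimal mixture
val_loss = 3.0 - W @ true_w + rng.normal(0.0, 0.01, size=64)

# Fit a linear regression predicting validation loss from the weights.
coef, *_ = np.linalg.lstsq(W, val_loss, rcond=None)

# Search candidate weightings for the lowest predicted loss.
cands = rng.dirichlet(np.ones(k), size=10_000)
best_w = cands[(cands @ coef).argmin()]

# The learned weights then fuse per-document quality scores into a
# single rating used to rank and select pre-training documents.
doc_scores = rng.random((5, k))           # 5 documents, k quality raters
final_rating = doc_scores @ best_w
print(best_w.round(3), final_rating.round(3))
```

In this toy setup the search recovers a weighting dominated by the dimension with the largest simulated effect on loss; the actual method presumably integrates many more raters (25 metrics) and real proxy-training runs.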
Xinlin Zhuang
Shanghai Artificial Intelligence Laboratory; School of Computer Science and Technology, East China Normal University
Jiahui Peng
Shanghai Artificial Intelligence Laboratory
Ren Ma
Shanghai AI Laboratory
Yinfan Wang
Engineer, PJLAB
Tianyi Bai
Hong Kong University of Science and Technology (HKUST)
Xingjian Wei
Shanghai AI Laboratory
Jiantao Qiu
EE Department, Tsinghua University
Chi Zhang
Shanghai Artificial Intelligence Laboratory
Ying Qian
School of Computer Science and Technology, East China Normal University
Conghui He
Shanghai AI Laboratory