DataRater: Meta-Learned Dataset Curation

πŸ“… 2025-05-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Foundation model performance is highly sensitive to training data quality, yet existing data filtering methods rely heavily on hand-crafted rules or heuristic strategies, which scale poorly and are computationally inefficient. Method: We propose a learnable data valuation paradigm, introducing for the first time a meta-gradient-driven, fine-grained scoring mechanism. Using meta-learning, we model each sample's marginal contribution to model generalization and perform end-to-end data value estimation against held-out validation objectives. Contribution/Results: The approach eliminates reliance on manual rule design and is inherently adaptable and scalable. Extensive experiments across multiple model scales and datasets show that, at equivalent model performance, it reduces training FLOPs by up to 40%, substantially improving training compute efficiency.
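The meta-gradient scoring idea described above can be illustrated with a toy sketch: assign each training example a weight, take one weighted inner training step, evaluate loss on held-out data, and differentiate that held-out loss with respect to the per-example weights. This is a minimal JAX sketch on synthetic linear-regression data, not the paper's actual setup (the real DataRater trains a scoring network on language-model batches); all data, shapes, and the one-step inner loop are assumptions for illustration.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy data: a small training batch and a held-out batch.
true_w = jnp.array([1.0, -2.0, 0.5])
X = jax.random.normal(jax.random.PRNGKey(0), (8, 3))    # candidate training examples
y = X @ true_w
Xv = jax.random.normal(jax.random.PRNGKey(1), (4, 3))   # held-out examples
yv = Xv @ true_w

def inner_loss(w, weights):
    # Training loss with a per-example weight on each squared error.
    return jnp.mean(weights * (X @ w - y) ** 2)

def outer_loss(weights, w, lr=0.1):
    # One inner SGD step using the data weights, then the held-out loss.
    w1 = w - lr * jax.grad(inner_loss)(w, weights)
    return jnp.mean((Xv @ w1 - yv) ** 2)

w0 = jnp.zeros(3)
weights = jnp.ones(8)
# Meta-gradient: sensitivity of held-out loss to each example's weight.
meta_grad = jax.grad(outer_loss)(weights, w0)
# Upweighting examples with negative meta-gradient lowers held-out loss,
# so their negated meta-gradient serves as a data-value score.
scores = -meta_grad
```

In the paper this per-example score is produced by a learned rating network and the meta-gradient flows through many inner updates, but the core mechanism is the same: differentiate a held-out objective through the training step with respect to how much each data point counts.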

πŸ“ Abstract
The quality of foundation models depends heavily on their training data. Consequently, great efforts have been put into dataset curation. Yet most approaches rely on manual tuning of coarse-grained mixtures of large buckets of data, or filtering by hand-crafted heuristics. An approach that is ultimately more scalable (let alone more satisfying) is to *learn* which data is actually valuable for training. This type of meta-learning could allow more sophisticated, fine-grained, and effective curation. Our proposed *DataRater* is an instance of this idea. It estimates the value of training on any particular data point. This is done by meta-learning using 'meta-gradients', with the objective of improving training efficiency on held-out data. In extensive experiments across a range of model scales and datasets, we find that using our DataRater to filter data is highly effective, resulting in significantly improved compute efficiency.
Problem

Research questions and friction points this paper is trying to address.

Learning which data is valuable for training foundation models
Meta-learning to improve dataset curation efficiency
Estimating training value of individual data points via meta-gradients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learned dataset curation using meta-gradients
Estimates value of each data point
Improves compute efficiency significantly
πŸ‘₯ Authors
D. A. Calian (Google DeepMind)
Gregory Farquhar (DeepMind)
Iurii Kemaev (Google DeepMind)
Luisa M. Zintgraf (Google DeepMind)
Matteo Hessel (Research Engineer, Google DeepMind)
Junhyuk Oh (Research Scientist, DeepMind)
András György (Google DeepMind)
T. Schaul (Google DeepMind)
Jeffrey Dean (Google DeepMind)
H. V. Hasselt (Google DeepMind)
David Silver (Google DeepMind)