AI Summary
Foundation model performance is highly sensitive to training data quality, yet existing data filtering methods rely heavily on hand-crafted rules or heuristic strategies, which scale poorly and are computationally inefficient.
Method: We propose a learnable data valuation paradigm, introducing for the first time a meta-gradient-driven fine-grained scoring mechanism. By leveraging meta-learning, we model each sample's marginal contribution to model generalization, and perform end-to-end data value estimation via held-out validation objectives.
Contribution/Results: Our approach eliminates reliance on manual rule design, offering inherent adaptability and scalability. Extensive experiments across multiple model scales and datasets demonstrate that, at equivalent model performance, it reduces training FLOPs by up to 40%, significantly improving training computational efficiency.
Abstract
The quality of foundation models depends heavily on their training data. Consequently, great efforts have been put into dataset curation. Yet most approaches rely on manual tuning of coarse-grained mixtures of large buckets of data, or filtering by hand-crafted heuristics. An approach that is ultimately more scalable (let alone more satisfying) is to *learn* which data is actually valuable for training. This type of meta-learning could allow more sophisticated, fine-grained, and effective curation. Our proposed *DataRater* is an instance of this idea. It estimates the value of training on any particular data point. This is done by meta-learning using 'meta-gradients', with the objective of improving training efficiency on held-out data. In extensive experiments across a range of model scales and datasets, we find that using our DataRater to filter data is highly effective, resulting in significantly improved compute efficiency.
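The core meta-gradient mechanism can be illustrated with a toy sketch (this is not the paper's implementation; the linear model, per-example logits standing in for a learned scoring network, and all hyperparameters here are illustrative assumptions): each training example gets a learnable weight, the model takes one SGD step on the weighted training loss, and differentiating the held-out loss *through* that step yields a meta-gradient for the weights.

```python
# Hypothetical minimal sketch of meta-gradient data valuation (not the
# authors' code). A per-example logit vector `phi` plays the role of a
# learned data scorer; JAX differentiates the held-out loss through one
# weighted inner SGD step to obtain meta-gradients for phi.
import jax
import jax.numpy as jnp

def per_example_loss(w, x, y):
    # Squared error of a linear model on a single example.
    return (x @ w - y) ** 2

def inner_update(w, phi, x, y, lr=0.1):
    # One SGD step on the rater-weighted training loss.
    def weighted_loss(w):
        losses = jax.vmap(per_example_loss, in_axes=(None, 0, 0))(w, x, y)
        return jnp.mean(jax.nn.sigmoid(phi) * losses)
    return w - lr * jax.grad(weighted_loss)(w)

def meta_loss(phi, w, x_tr, y_tr, x_val, y_val):
    # Held-out loss after the weighted inner step; its gradient w.r.t.
    # phi is the meta-gradient used to update the data valuations.
    w_new = inner_update(w, phi, x_tr, y_tr)
    losses = jax.vmap(per_example_loss, in_axes=(None, 0, 0))(w_new, x_val, y_val)
    return jnp.mean(losses)

# Synthetic data: the first 16 training labels are corrupted, so a good
# rater should learn to down-weight them.
w_true = jnp.array([1.0, -2.0, 0.5, 3.0])
x_tr = jax.random.normal(jax.random.PRNGKey(0), (32, 4))
y_tr = (x_tr @ w_true).at[:16].add(5.0)
x_val = jax.random.normal(jax.random.PRNGKey(1), (64, 4))
y_val = x_val @ w_true

phi = jnp.zeros(32)  # one logit per training example
w = jnp.zeros(4)
meta_grad = jax.jit(jax.grad(meta_loss))
for _ in range(300):
    phi = phi - 0.5 * meta_grad(phi, w, x_tr, y_tr, x_val, y_val)
    w = inner_update(w, phi, x_tr, y_tr)  # model trains on re-weighted data

scores = jax.nn.sigmoid(phi)
print(float(scores[:16].mean()), float(scores[16:].mean()))
```

In this toy setting the corrupted examples' gradients anti-align with the direction that reduces the held-out loss, so their scores are pushed down relative to the clean examples; the paper replaces the per-example logits with a scoring network that generalizes across data points and model scales.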