DataComp-LM: In search of the next generation of training sets for language models

📅 2024-06-17
🏛️ Neural Information Processing Systems
📈 Citations: 72
Influential: 7
🤖 AI Summary
This study addresses the inefficiency of dataset design in language-model pretraining by proposing the DataComp for Language Models (DCLM) benchmark. Methodologically, it constructs a standardized 240-trillion-token Common Crawl corpus, integrates model-based data filtering, deduplication, and mixing strategies, and establishes a scalable pretraining pipeline alongside a 53-task downstream evaluation suite. Its key contribution is systematic empirical evidence that model-based filtering is critical to data quality; it also releases the resulting DCLM-Baseline dataset. Experiments show that a 7B model trained on DCLM-Baseline reaches 64.0% accuracy on MMLU (5-shot), outperforming MAP-Neo by 6.6 percentage points while using 40% less compute. Its performance is comparable to Mistral-7B-v0.3 and Llama 3 8B, yet its training compute is only ~1/6.6 that of Llama 3 8B, demonstrating substantial gains in data efficiency.
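The deduplication step mentioned above is, at its core, a membership test over normalized text. Below is a minimal sketch of the Bloom-filter idea behind such a test; the `BloomFilter` class, its sizing constants, and the whole-document granularity are illustrative assumptions, not the DCLM implementation.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter for approximate exact-duplicate detection."""

    def __init__(self, expected_items: int, fp_rate: float = 0.01):
        # Standard sizing formulas: m bits and k hash functions.
        self.m = math.ceil(-expected_items * math.log(fp_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / expected_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: str):
        # Derive k bit positions from two digests (Kirsch-Mitzenmacher trick).
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def dedup(documents):
    """Yield only documents whose normalized text has not been seen before."""
    seen = BloomFilter(expected_items=1_000_000)
    for doc in documents:
        key = " ".join(doc.split()).lower()  # cheap normalization (assumption)
        if key not in seen:
            seen.add(key)
            yield doc
```

At 240T-token scale the real pipeline needs sharded, disk-backed structures; the sketch only shows why a probabilistic membership test keeps memory bounded while catching exact repeats.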

📝 Abstract
We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline, enables training a 7B-parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens. Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute. Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.
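The headline 6.6x compute claim can be sanity-checked with the standard C ≈ 6ND FLOPs estimate for dense transformer training (N parameters, D training tokens), assuming the ~15T training tokens Meta has publicly reported for Llama 3 8B (a figure from outside this page):

```latex
C \approx 6ND
\quad\Longrightarrow\quad
\frac{C_{\text{Llama 3 8B}}}{C_{\text{DCLM 7B}}}
\approx \frac{6\,(8\times 10^{9})(15\times 10^{12})}{6\,(7\times 10^{9})(2.6\times 10^{12})}
= \frac{120}{18.2} \approx 6.6
```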
Problem

Research questions and friction points this paper is trying to address.

Improving language models through controlled dataset experiments
Standardizing data curation strategies for better training sets
Reducing compute costs while enhancing model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized corpus of 240T tokens
Model-based filtering for quality data (see the sketch after this list)
Effective pretraining recipes built on the OpenLM framework
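Concretely, the model-based filter behind DCLM-Baseline is a fastText classifier that scores each page and keeps only the highest-scoring fraction (the paper reports keeping roughly the top 10%). Below is a minimal sketch of the scoring loop, assuming the `fasttext` package; the model path, label name, and fixed threshold are illustrative placeholders, not released artifacts.

```python
import fasttext  # pip install fasttext

# Illustrative placeholders (assumptions, not DCLM's released values):
MODEL_PATH = "quality_classifier.bin"
KEEP_THRESHOLD = 0.5

def quality_score(model, text: str) -> float:
    """Probability that `text` belongs to the 'high quality' class."""
    # fastText's predict() expects a single line of text.
    labels, probs = model.predict(text.replace("\n", " "), k=2)
    scores = dict(zip(labels, probs))
    return scores.get("__label__hq", 0.0)  # label name is an assumption

def filter_pages(pages):
    """Keep only pages the classifier scores above the threshold."""
    model = fasttext.load_model(MODEL_PATH)
    for page in pages:
        if quality_score(model, page) >= KEEP_THRESHOLD:
            yield page
```

A percentile cutoff over precomputed scores would match the paper's top-fraction selection more faithfully than the fixed threshold used here for brevity.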
👥 Authors

Jeffrey Li
University of Washington
Machine Learning

Alex Fang
University of Washington

Georgios Smyrnis
University of Texas at Austin
Machine Learning

Maor Ivgi
Tel Aviv University

Matt Jordan
Graduate Research Assistant, UT Austin
Adversarial Examples

Samir Gadre
Toyota Research Institute

Hritik Bansal
University of California Los Angeles | Indian Institute of Technology Delhi
Multimodal Learning, Language Modeling

Etash Guha
University of Washington

Sedrick Keh
Toyota Research Institute

Kushal Arora
Toyota Research Institute

Saurabh Garg
CMU

Rui Xin
University of Washington

Niklas Muennighoff
Contextual AI

Reinhard Heckel
Technical University of Munich and Rice University

Jean Mercat
Research Scientist, Toyota Research Institute
Neural Networks

Mayee Chen
Stanford University
Machine Learning, Computer Science

Suchin Gururangan
University of Washington

Mitchell Wortsman
University of Washington

Alon Albalak
Lila Sciences
Data-Centric AI, Machine Learning, Open-Endedness

Yonatan Bitton
Research Scientist, Google
Vision-and-Language, Multimodal, Text-to-Image, Image-Text Alignment, NLP

Marianna Nezhurina
JSC

Amro Abbas
DatologyAI
Machine Learning, Natural Language Processing, Computer Vision

Cheng-Yu Hsieh
Ph.D. Student, University of Washington
Data-Centric AI, Efficient Machine Learning, Model Interpretability

Dhruba Ghosh
University of Washington

Josh Gardner
Anthropic
Machine Learning, Robustness, Multimodal, Tabular Data, Music and Audio

Maciej Kilian
Perceptron AI

Hanlin Zhang
Harvard

Rulin Shao
University of Washington
Machine Learning

Sarah Pratt
University of Washington

Sunny Sanyal
PhD ECE, University of Texas at Austin
Machine Learning, Language Models

Gabriel Ilharco
University of Washington

Giannis Daras
MIT
Generative Models, Inverse Problems

Kalyani Marathe
University of Washington

Aaron Gokaslan
Cornell University
Computer Vision, Graphics, Deep Learning, Robotics

Jieyu Zhang
University of Washington
Data-Centric AI, Agentic AI, Multimodal Models, Machine Learning, Computer Vision

Khyathi Chandu
University of Washington

Thao Nguyen
University of Washington

Igor Vasiljevic
Toyota Research Institute (TRI)
Machine Learning, Computer Vision, Natural Language Processing

Sham Kakade
Harvard

Shuran Song
Stanford University
Robotics, Computer Vision, Machine Learning

Sujay Sanghavi
Professor, Electrical and Computer Engineering, University of Texas, Austin
Machine Learning, Optimization and Algorithms, Networks

Fartash Faghri
Apple ML Research
Machine Learning, Computer Vision

Sewoong Oh
University of Washington

Luke Zettlemoyer
University of Washington; Meta
Natural Language Processing, Semantics, Machine Learning, Artificial Intelligence

Kyle Lo
Allen Institute for AI
Natural Language Processing, Machine Learning, Human-Computer Interaction, Statistics

Alaaeldin El-Nouby
FAIR, Meta
Computer Vision, Machine Learning

Hadi Pouransari
Apple, Stanford University
Artificial Intelligence, Computational Mathematics, High Performance Computing

Alexander Toshev
Apple Inc
Computer Vision, Machine Learning, Embodied AI, Robotics

Stephanie Wang
University of Washington

Dirk Groeneveld
Allen Institute for Artificial Intelligence
Natural Language Processing, Neural Networks, Deep Learning

Luca Soldaini
AI2

Pang Wei Koh
University of Washington; Allen Institute for AI
Machine Learning, Natural Language Processing, Computational Biology

Jenia Jitsev
Scalable Learning & Multi-Purpose AI (SLAMPAI) Lab, JSC, Forschungszentrum Juelich; ELLIS; LAION
Open Foundation Models & Datasets, Scaling Laws, Plasticity and Learning in Neural Networks

Thomas Kollar
Wayve, Head of AI (Foundation Models)
Language, Robotics, Foundation Models

Alexandros G. Dimakis
UT Austin, Bespokelabs.AI

Yair Carmon
Tel Aviv University
Machine Learning, Optimization, Statistics

Achal Dave
Toyota Research Institute

Ludwig Schmidt
Stanford University and Anthropic
Machine Learning, Artificial Intelligence, Optimization, Algorithms, Statistics

Vaishaal Shankar
Apple
Machine Learning, ML Robustness, ML Reliability, Deep Learning