CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pre-training corpora (e.g., Common Crawl) often lack explicit domain labels, hindering principled data-mixture design. This paper introduces CLIMB, an automated framework for discovering optimal data mixtures via semantic clustering and iterative proxy evaluation. Its core contribution is a clustering-driven iterative bootstrapping loop that combines semantic embeddings, clustering in embedding space, lightweight proxy-model training, and mixture-performance prediction to enable unsupervised domain discovery and data-mixture refinement. The authors release two open-source datasets: ClimbLab (1.2T tokens, 20 clusters) and ClimbMix (400B tokens). Continuously trained on 400B tokens with the discovered mixture, a 1B-parameter model exceeds Llama-3.2-1B by 2.0% on average, optimizing for a specific domain (e.g., Social Sciences) yields a 5% improvement over random sampling, and the mixture consistently outperforms baselines under an equal token budget. The paper also analyzes the final mixture to characterize what makes a data mixture effective.

📝 Abstract
Pre-training datasets are typically collected from web content and lack inherent domain divisions. For instance, widely used datasets like Common Crawl do not include explicit domain labels, while manually curating labeled datasets such as The Pile is labor-intensive. Consequently, identifying an optimal pre-training data mixture remains a challenging problem, despite its significant benefits for pre-training performance. To address these challenges, we propose CLustering-based Iterative Data Mixture Bootstrapping (CLIMB), an automated framework that discovers, evaluates, and refines data mixtures in a pre-training setting. Specifically, CLIMB embeds and clusters large-scale datasets in a semantic space and then iteratively searches for optimal mixtures using a smaller proxy model and a predictor. When continuously trained on 400B tokens with this mixture, our 1B model exceeds the state-of-the-art Llama-3.2-1B by 2.0%. Moreover, we observe that optimizing for a specific domain (e.g., Social Sciences) yields a 5% improvement over random sampling. Finally, we introduce ClimbLab, a filtered 1.2-trillion-token corpus with 20 clusters as a research playground, and ClimbMix, a compact yet powerful 400-billion-token dataset designed for efficient pre-training that delivers superior performance under an equal token budget. We analyze the final data mixture, elucidating the characteristics of an optimal data mixture. Our data is available at: https://research.nvidia.com/labs/lpr/climb/
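The pipeline described in the abstract first embeds documents in a semantic space and clusters them into pseudo-domains. A minimal numpy sketch of that clustering step, assuming embeddings are already computed upstream (the data here are toy Gaussian blobs standing in for semantic domains; the greedy farthest-point seeding is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def farthest_first_init(X, k):
    """Pick k spread-out seed points (greedy farthest-point heuristic)."""
    centers = [X[0]]
    for _ in range(k - 1):
        # distance from each point to its nearest chosen center
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    return np.stack(centers)

def kmeans(X, k, iters=20):
    """Lloyd's k-means over document embeddings X, returning cluster labels."""
    centers = farthest_first_init(X, k)
    for _ in range(iters):
        # assign every embedding to its nearest center
        labels = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# toy "corpus": three well-separated blobs standing in for semantic domains
rng = np.random.default_rng(1)
blobs = np.concatenate(
    [rng.normal(loc=m, scale=0.1, size=(100, 8)) for m in (0.0, 3.0, 6.0)]
)
labels, centers = kmeans(blobs, k=3)
print(np.bincount(labels))  # → [100 100 100]
```

In the actual framework the inputs would be embeddings of web documents and the cluster count would be far larger (ClimbLab groups its 1.2T tokens into 20 clusters); each recovered cluster then acts as a mixing component.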
Problem

Research questions and friction points this paper is trying to address.

Pre-training datasets lack inherent domain divisions (e.g., Common Crawl has no labels)
Manually curating labeled corpora such as The Pile is labor-intensive
Identifying an optimal pre-training data mixture remains challenging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated clustering-based framework (CLIMB) for discovering and refining data mixtures
Iterative mixture search in semantic embedding space using a small proxy model and a performance predictor
Release of ClimbLab (1.2T-token, 20-cluster corpus) and ClimbMix (400B-token pre-training dataset)
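The iterative search in the bullets above can be illustrated with a toy loop: sample candidate cluster-mixture weights, evaluate them with a proxy, fit a predictor from weights to score, and use it to pick the next round of candidates. In this sketch the proxy evaluation (really: training a small LM and measuring downstream accuracy) is replaced by a stand-in quadratic score with a known optimum, and the predictor is a plain least-squares fit; both are illustrative assumptions, not the paper's exact components:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5  # number of data clusters (20 in ClimbLab; 5 here for brevity)

def proxy_score(w):
    """Stand-in for proxy-model evaluation: a fixed quadratic whose
    optimum plays the role of the unknown best mixture."""
    target = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
    return -np.sum((w - target) ** 2)

def fit_predictor(W, y):
    """Least-squares predictor mapping mixture weights to proxy score."""
    A = np.hstack([W, np.ones((len(W), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda W2: np.hstack([W2, np.ones((len(W2), 1))]) @ coef

W = rng.dirichlet(np.ones(K), size=16)       # initial candidate mixtures
y = np.array([proxy_score(w) for w in W])    # proxy evaluations

for _ in range(3):                           # iterative bootstrapping rounds
    pred = fit_predictor(W, y)
    pool = rng.dirichlet(np.ones(K), size=256)
    top = pool[np.argsort(pred(pool))[-8:]]  # predicted-best candidates
    W = np.vstack([W, top])                  # evaluate only those with the proxy
    y = np.concatenate([y, [proxy_score(w) for w in top]])

best = W[y.argmax()]                         # best mixture actually evaluated
```

The design point this illustrates: the expensive proxy is only run on candidates the cheap predictor ranks highly, which is what lets the search scale to large mixture spaces.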