FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language

📅 2025-06-25
🤖 AI Summary
To address the poor generalizability of preprocessing pipelines and uneven data quality in multilingual large language model (LLM) training, this work introduces a scalable, automated multilingual pretraining data processing framework that adapts to over 1000 languages. Built on Common Crawl, the framework combines language-adapted filtering, language-adapted deduplication, and a rebalancing step that jointly considers duplication count and quality. Design choices are validated end to end through ablation studies on nine diverse languages, guided by multilingual downstream tasks selected through a principled process. The authors release FineWeb2, a 5-billion-document, 20-terabyte multilingual dataset covering over 1000 languages, along with the pipeline, training, and evaluation codebases. Experiments demonstrate substantial improvements in non-English LLM performance across multiple benchmarks, establishing a systematic, reproducible infrastructure for high-quality multilingual foundation model training.

📝 Abstract
Pre-training state-of-the-art large language models (LLMs) requires vast amounts of clean and diverse text data. While the open development of large high-quality English pre-training datasets has seen substantial recent progress, training performant multilingual LLMs remains a challenge, in large part due to the inherent difficulty of tailoring filtering and deduplication pipelines to a large number of languages. In this work, we introduce a new pre-training dataset curation pipeline based on FineWeb that can be automatically adapted to support any language. We extensively ablate our pipeline design choices on a set of nine diverse languages, guided by a set of meaningful and informative evaluation tasks that were chosen through a novel selection process based on measurable criteria. Ultimately, we show that our pipeline can be used to create non-English corpora that produce more performant models than prior datasets. We additionally introduce a straightforward and principled approach to rebalance datasets that takes into consideration both duplication count and quality, providing an additional performance uplift. Finally, we scale our pipeline to over 1000 languages using almost 100 Common Crawl snapshots to produce FineWeb2, a new 20 terabyte (5 billion document) multilingual dataset which we release along with our pipeline, training, and evaluation codebases.
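The abstract describes deduplication pipelines tailored to each language. The paper's exact implementation is not reproduced here, but the standard technique behind fuzzy web-scale deduplication is MinHash over word shingles; the sketch below is a minimal, hypothetical illustration (the helper names `shingles`, `minhash_signature`, and `estimated_jaccard` are assumptions, not the paper's API):

```python
import hashlib

def shingles(text, n=3):
    """Word n-grams: the unit of overlap between documents."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash_signature(text, num_perm=64):
    """For each of num_perm salted hash functions, keep the minimum
    hash over the document's shingles; the resulting signature is a
    compact fingerprint of the shingle set."""
    sig = []
    for seed in range(num_perm):
        m = min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in shingles(text)
        )
        sig.append(m)
    return sig

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature slots is an unbiased
    estimate of the Jaccard similarity of the two shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

In a production pipeline the signatures would be bucketed with locality-sensitive hashing rather than compared pairwise; the point here is only that near-duplicate pages produce signatures that agree in most slots, so a similarity threshold flags them for removal.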
Problem

Research questions and friction points this paper is trying to address.

Adapting pre-training data processing for multilingual LLMs
Creating clean, diverse corpora for non-English languages
Rebalancing datasets using both duplication count and quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multilingual pre-training data pipeline
Rebalancing datasets by duplication and quality
Scaled pipeline for 1000+ languages
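The rebalancing idea above upsamples documents that were heavily duplicated in the raw crawl (duplication acts as a popularity signal) but only when they also pass quality filters. The paper's exact weighting formula is not reproduced here; the function below is a hypothetical sketch under the assumption of a per-document duplication count and a quality score in [0, 1]:

```python
import math

def rehydration_weight(dup_count, quality, max_weight=5):
    """Hypothetical repetition count for one deduplicated document:
    grow the weight with the log of how often the document appeared
    in the raw crawl, scaled by its quality score, capped at max_weight.
    Low-quality documents are never upsampled."""
    if quality < 0.5:
        return 1
    return min(max_weight, 1 + int(math.log2(max(1, dup_count)) * quality))
```

Under this sketch a unique document keeps weight 1, a frequently duplicated high-quality document is repeated up to the cap, and a frequently duplicated low-quality document is not amplified at all, which matches the stated goal of jointly optimizing duplication rate and quality.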