AraMix: Recycling, Refiltering, and Deduplicating to Deliver the Largest Arabic Pretraining Corpus

📅 2025-12-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Arabic pretraining corpora suffer from severe redundancy (nearly 60% of tokens are duplicates) and uneven quality. To address this, we propose a "data reuse over new crawling" paradigm, systematically integrating seven existing public Arabic web datasets. Our pipeline applies Arabic-specific quality filtering, cross-dataset deduplication at both the document level (MinHash) and the sentence level, multi-source fusion, and metadata alignment. The resulting corpus, currently the largest deeply deduplicated publicly available Arabic dataset, comprises 178 billion tokens across 179 million documents. Empirical evaluation demonstrates substantial improvements in downstream training efficiency and generalization performance. The corpus has become the de facto standard training data for multiple open-source Arabic large language models, establishing a principle for Arabic NLP: rigorous, quality-driven data curation takes precedence over mere scale expansion.
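
The summary names document-level MinHash deduplication as a core pipeline step. As a rough illustration only (not the paper's implementation), the sketch below uses the datasketch library; the shingle size, permutation count, and Jaccard threshold are assumed values, not settings reported by the authors.

```python
# Document-level near-duplicate removal via MinHash + LSH.
# All parameters (5-word shingles, 128 permutations, 0.8 threshold)
# are illustrative assumptions, not the paper's settings.
from datasketch import MinHash, MinHashLSH

def minhash_signature(text: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    """Hash overlapping word shingles of a document into a MinHash."""
    words = text.split()
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(words) - shingle + 1, 1)):
        m.update(" ".join(words[i:i + shingle]).encode("utf-8"))
    return m

def deduplicate(docs: dict) -> set:
    """Return the IDs of documents kept after near-duplicate filtering."""
    lsh = MinHashLSH(threshold=0.8, num_perm=128)
    kept = set()
    for doc_id, text in docs.items():
        sig = minhash_signature(text)
        if lsh.query(sig):  # an already-kept near-duplicate exists
            continue
        lsh.insert(doc_id, sig)
        kept.add(doc_id)
    return kept
```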

📝 Abstract
We present AraMix, a deduplicated Arabic pretraining corpus containing approximately 178 billion tokens across 179 million documents. Rather than scraping the web again, AraMix demonstrates that substantial value lies in systematically reusing and curating existing pretraining datasets: we combine seven publicly available Arabic web datasets, apply quality filtering designed specifically for Arabic text to re-filter several of them, and perform cross-dataset deduplication, both MinHash-based and sentence-level. This approach reveals that nearly 60% of tokens across these independently collected corpora are duplicates, redundancy that any new scraping effort would simply reproduce. Our work suggests that for lower-resource languages, investment in curation pipelines for existing data yields greater returns than additional web crawls; this approach allowed us to curate the largest heavily filtered, publicly available Arabic pretraining corpus.
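
The abstract does not spell out the Arabic-specific filter rules. Purely as a hedged sketch, filters of this kind often start from script-level heuristics such as the share of Arabic-script characters and a minimum document length; the function name and thresholds below are assumptions for illustration, not the paper's values.

```python
import re

# Basic Arabic block (U+0600-U+06FF); presentation forms omitted for
# brevity. The 0.7 ratio and 50-word floor are assumed, illustrative values.
ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")

def passes_arabic_quality_filter(text: str,
                                 min_arabic_ratio: float = 0.7,
                                 min_words: int = 50) -> bool:
    """Keep a document only if it is long enough and mostly Arabic script."""
    if len(text.split()) < min_words:
        return False
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    arabic = sum(1 for c in letters if ARABIC_CHAR.match(c))
    return arabic / len(letters) >= min_arabic_ratio
```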
Problem

Research questions and friction points this paper is trying to address.

Constructing a large, high-quality Arabic pretraining corpus
Reducing redundancy and duplicates in existing Arabic datasets
Prioritizing curation of existing data over new web scraping for low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reusing existing datasets for Arabic pretraining
Applying Arabic-specific quality filtering techniques
Performing cross-dataset deduplication to remove redundancy (a sentence-level sketch follows this list)
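
For the sentence-level pass referenced above, one common realization (assumed here, not confirmed by the paper) is to hash normalized sentences into a single set shared across all seven source datasets and drop any sentence already seen. The normalization rules below (NFC, tatweel and diacritic stripping) are likewise illustrative assumptions.

```python
import hashlib
import unicodedata

def normalize(sentence: str) -> str:
    """NFC-normalize, drop tatweel and harakat, collapse whitespace.
    These normalization rules are illustrative, not the paper's."""
    s = unicodedata.normalize("NFC", sentence)
    s = s.replace("\u0640", "")  # tatweel (kashida)
    s = "".join(c for c in s if not ("\u064B" <= c <= "\u0652"))  # harakat
    return " ".join(s.split())

def drop_seen_sentences(documents):
    """Remove any sentence already seen in *any* dataset (global set)."""
    seen = set()
    for doc in documents:
        kept = []
        for sent in doc.split("\n"):  # naive line-based sentence split
            key = hashlib.md5(normalize(sent).encode("utf-8")).hexdigest()
            if key not in seen:
                seen.add(key)
                kept.append(sent)
        yield "\n".join(kept)
```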
Sultan Alrashed
King Abdullah University of Science and Technology (KAUST)
Francesco Orabona
Associate Professor, KAUST
Online Learning · Machine Learning · Optimization · Learning Theory