AI Summary
To address the intellectual property and ethical risks arising from the widespread use of unlicensed text in training large language models (LLMs), this work introduces and open-sources the Common Pile v0.1, an 8 TB corpus of high-quality, openly licensed text spanning 30 sources across domains that include academic papers, source code, books, and encyclopedias. Methodologically, the curation pipeline combines compliant collection from diverse sources, automated license identification with human verification, rigorous deduplication, and quality filtering; reproducible training recipes and pre-trained model checkpoints are also released. Trained on this dataset, the 7 billion parameter Comma v0.1-1T and v0.1-2T models achieve performance on par with Llama 1 and 2 7B under equivalent compute budgets, attaining competitive results across mainstream benchmarks. This work establishes a foundational, legally sound, controllable, and reproducible infrastructure for responsible LLM pretraining.
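To make the license-gated filtering and deduplication steps above concrete, here is a minimal sketch of such a pipeline. The license allow-list, metadata field names, and regular expression are illustrative assumptions, not the paper's actual implementation, which pairs automated license identification with human verification and more thorough deduplication.

```python
import hashlib
import re

# Hypothetical allow-list of open licenses; the actual accepted set in the
# paper is broader (Creative Commons variants, permissive software
# licenses, public domain, etc.).
OPEN_LICENSES = {"cc0-1.0", "cc-by-4.0", "cc-by-sa-4.0", "mit", "apache-2.0"}

# Illustrative pattern for spotting a license tag in free-form license text.
LICENSE_PATTERN = re.compile(
    r"(cc0-1\.0|cc-by(?:-sa)?-4\.0|mit|apache-2\.0)", re.IGNORECASE
)

def detect_license(metadata: dict) -> str | None:
    """Return a normalized license tag if one can be identified."""
    declared = (metadata.get("license") or "").strip().lower()
    if declared in OPEN_LICENSES:
        return declared
    match = LICENSE_PATTERN.search(metadata.get("license_text", ""))
    return match.group(1).lower() if match else None

def curate(documents):
    """Keep only openly licensed, previously unseen documents."""
    seen_hashes = set()
    for doc in documents:
        if detect_license(doc["metadata"]) is None:
            continue  # unlicensed or unverifiable: exclude
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue  # exact duplicate: exclude
        seen_hashes.add(digest)
        yield doc
```

In practice, documents whose license cannot be verified automatically would be routed to human review rather than silently dropped, and exact hashing would be complemented by fuzzy (near-duplicate) detection.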
Abstract
Large language models (LLMs) are typically trained on enormous quantities of unlicensed text, a practice that has drawn scrutiny due to possible intellectual property infringement and ethical concerns. Training LLMs on openly licensed text presents a first step towards addressing these issues, but prior data collection efforts have yielded datasets too small or too low-quality to produce performant LLMs. To address this gap, we collect, curate, and release the Common Pile v0.1, an eight terabyte collection of openly licensed text designed for LLM pretraining. The Common Pile comprises content from 30 sources that span diverse domains including research papers, code, books, encyclopedias, educational materials, audio transcripts, and more. Crucially, we validate our efforts by training two 7 billion parameter LLMs on text from the Common Pile: Comma v0.1-1T and Comma v0.1-2T, trained on 1 and 2 trillion tokens respectively. Both models attain performance competitive with LLMs trained on unlicensed text under similar computational budgets, such as Llama 1 and 2 7B. In addition to releasing the Common Pile v0.1 itself, we also release the code used in its creation as well as the training mixture and checkpoints for the Comma v0.1 models.
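Since the Comma v0.1 checkpoints are released publicly, they can presumably be loaded with standard Hugging Face `transformers` calls, as sketched below. The repository identifier is a hypothetical placeholder and should be checked against the project's actual release page.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository identifier for illustration; consult the
# project's release page for the actual Comma v0.1 checkpoint names.
MODEL_ID = "common-pile/comma-v0.1-1t"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Generate a short continuation from a prompt.
prompt = "Openly licensed text can be used to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```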