daVinci-LLM: Towards the Science of Pretraining

📅 2026-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the long-standing lack of systematic research on pretraining, the phase that fundamentally determines a model's capability ceiling, where progress has been hampered by industrial opacity and academic compute constraints. Leveraging industrial-scale computational resources and full scientific autonomy, we propose Data Darwinism, a framework built around an L0-L9 taxonomy of data-processing depth that establishes curation rigor as a critical dimension alongside data volume. Through two-stage adaptive curriculum learning, we train a 3B-parameter model on 8 trillion tokens and conduct over 200 ablation studies. Our findings reveal domain-specific saturation dynamics and a compositional balancing mechanism, demonstrating that principled data processing substantially enhances model performance while preventing catastrophic degradation. The entire pipeline is open-sourced to foster cumulative progress in the science of pretraining.
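
As a rough illustration of what an L0-L9 processing-depth ladder could look like, the Python sketch below encodes one possible taxonomy. Only the endpoints (filtering at the low end, synthesis at the high end) come from the paper's description; the intermediate level names and the helper predicate are placeholders, not the paper's actual definitions.

```python
from enum import IntEnum


class ProcessingDepth(IntEnum):
    """Illustrative L0-L9 ladder of data-processing depth.

    Hypothetical sketch: only "filtering" and "synthesis" as endpoints
    are stated in the source; intermediate level names are assumptions.
    """
    L0_RAW = 0                 # unprocessed crawl text
    L1_HEURISTIC_FILTER = 1    # rule-based filtering (length, language, ...)
    L2_DEDUPLICATION = 2       # exact / near-duplicate removal
    L3_QUALITY_SCORING = 3     # model-based quality classifiers
    L4_DOMAIN_LABELING = 4     # topic / domain tagging for mixture control
    L5_CLEANING = 5            # normalization, boilerplate stripping
    L6_REWRITING = 6           # light model-assisted rewriting
    L7_AUGMENTATION = 7        # format shifts, e.g. passages -> QA pairs
    L8_TARGETED_SYNTHESIS = 8  # synthetic data aimed at weak capabilities
    L9_FULL_SYNTHESIS = 9      # fully synthetic, curriculum-aligned data


def requires_model_in_loop(level: ProcessingDepth) -> bool:
    """Example predicate: deeper levels typically need an LLM in the pipeline."""
    return level >= ProcessingDepth.L3_QUALITY_SCORING
```

A taxonomy like this makes "processing depth" a controllable experimental variable: each corpus slice can be tagged with the deepest level applied to it, so ablations can compare capability gains per level rather than per raw token count.
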
📝 Abstract
The foundational pretraining phase determines a model's capability ceiling, as post-training struggles to overcome the capability foundations established during pretraining, yet it remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale compute. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy ranging from filtering to synthesis. We train a 3B-parameter model from random initialization on 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; compositional balance enables targeted intensification while preventing performance collapse; and evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies to accumulate scientific knowledge about pretraining.
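
The abstract describes a two-stage adaptive curriculum that shifts sampling from foundational to reasoning-intensive data over 8T tokens. The sketch below shows one minimal way such a mixture schedule could be expressed; the domain names, proportions, stage boundary, and ramp width are illustrative assumptions, not the released daVinci-LLM recipe.

```python
# Hypothetical two-stage curriculum: stage 1 emphasizes broad foundational
# data, stage 2 shifts weight toward reasoning-heavy domains. All numbers
# below are placeholders for illustration only.
STAGE1_MIX = {"web": 0.60, "code": 0.15, "math": 0.05, "books": 0.15, "synthetic_qa": 0.05}
STAGE2_MIX = {"web": 0.35, "code": 0.25, "math": 0.15, "books": 0.10, "synthetic_qa": 0.15}


def mixture_at(tokens_seen: float,
               total_tokens: float = 8e12,
               stage1_fraction: float = 0.75,
               ramp_fraction: float = 0.05) -> dict:
    """Return per-domain sampling weights at a given point in training.

    Linearly interpolates from the stage-1 to the stage-2 mixture over a
    short ramp around the stage boundary, so the shift toward
    reasoning-intensive data is gradual rather than abrupt.
    """
    boundary = stage1_fraction * total_tokens
    ramp = ramp_fraction * total_tokens
    # alpha goes from 0 (pure stage 1) to 1 (pure stage 2) across the ramp
    alpha = min(1.0, max(0.0, (tokens_seen - boundary) / ramp + 0.5))
    mix = {d: (1 - alpha) * STAGE1_MIX[d] + alpha * STAGE2_MIX[d] for d in STAGE1_MIX}
    total = sum(mix.values())
    return {d: w / total for d, w in mix.items()}


# Example: late in training the sampler is already close to the stage-2 mix.
print(mixture_at(7.0e12))
</code>
```

An "adaptive" variant would additionally adjust the target proportions based on monitored per-domain saturation, rather than interpolating between two fixed mixtures as this sketch does.
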
Problem

Research questions and friction points this paper is trying to address.

pretraining
large language models
data processing
systematic methodology
scientific transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

pretraining science
Data Darwinism
adaptive curriculum
processing depth
open research paradigm