🤖 AI Summary
This work investigates how data filtering and contamination interact in bilingual (English/French) generative model training, and what this implies for benchmark reliability and generation quality. We propose a "filtering-contamination" co-design framework: (1) fine-grained data curation via neural quality classification, revealing how aggressive filtering can amplify benchmark leakage; and (2) deliberate late-stage contamination -- continued training on data mixes that include test sets -- which recovers competitive benchmark scores at only a modest cost to generation quality. Leveraging this framework, we release Gaperon, a fully open French-English-coding model series (1.5B-24B parameters), with complete training trajectories, intermediate checkpoints, and multi-stage training recipes. Experiments demonstrate fluent, coherent generation, and we additionally inject harmless data poisoning during pretraining to provide a realistic testbed for safety research. The framework advances training transparency, reproducibility, and open research in multilingual language modeling.
📝 Abstract
We release Gaperon, a fully open suite of French-English-coding language models designed to advance transparency and reproducibility in large-scale model training. The Gaperon family includes 1.5B-, 8B-, and 24B-parameter models trained on 2-4 trillion tokens, released with every element of the training pipeline: French and English datasets filtered with a neural quality classifier, an efficient data curation and training framework, and hundreds of intermediate checkpoints. Through this work, we study how data filtering and contamination interact to shape both benchmark and generative performance. We find that filtering for linguistic quality enhances text fluency and coherence but yields subpar benchmark results, and that late deliberate contamination -- continuing training on data mixes that include test sets -- recovers competitive scores while only moderately harming generation quality. We also discuss how common neural filtering practices can unintentionally amplify benchmark leakage. To support further research, we additionally introduce harmless data poisoning during pretraining, providing a realistic testbed for safety studies. By openly releasing all models, datasets, code, and checkpoints, Gaperon establishes a reproducible foundation for exploring the trade-offs between data curation, evaluation, safety, and openness in multilingual language model development.