Gaperon: A Peppered English-French Generative Language Model Suite

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies how data filtering and contamination interact in bilingual (English/French) generative model training, and what that interplay means for benchmark reliability and generation quality. The authors show that (1) fine-grained curation with a neural quality classifier improves text fluency and coherence but yields subpar benchmark scores, and that usual neural filtering can unintentionally amplify benchmark leakage; and (2) late deliberate contamination -- continuing training on data mixes that include test sets -- recovers competitive benchmark scores at only a modest cost to generation quality. They additionally inject harmless data poisoning during pretraining to provide a realistic testbed for safety studies. Alongside these findings, they release Gaperon, a fully open French-English-coding model suite (1.5B, 8B, and 24B parameters, trained on 2-4 trillion tokens) with the filtered datasets, data curation and training framework, multi-stage training recipes, and hundreds of intermediate checkpoints, establishing a reproducible foundation for open research on the trade-offs between data curation, evaluation, safety, and openness.

📝 Abstract
We release Gaperon, a fully open suite of French-English-coding language models designed to advance transparency and reproducibility in large-scale model training. The Gaperon family includes 1.5B, 8B, and 24B parameter models trained on 2-4 trillion tokens, released with all elements of the training pipeline: French and English datasets filtered with a neural quality classifier, an efficient data curation and training framework, and hundreds of intermediate checkpoints. Through this work, we study how data filtering and contamination interact to shape both benchmark and generative performance. We find that filtering for linguistic quality enhances text fluency and coherence but yields subpar benchmark results, and that late deliberate contamination -- continuing training on data mixes that include test sets -- recovers competitive scores while only reasonably harming generation quality. We discuss how usual neural filtering can unintentionally amplify benchmark leakage. To support further research, we also introduce harmless data poisoning during pretraining, providing a realistic testbed for safety studies. By openly releasing all models, datasets, code, and checkpoints, Gaperon establishes a reproducible foundation for exploring the trade-offs between data curation, evaluation, safety, and openness in multilingual language model development.
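The abstract describes filtering the French and English corpora with a neural quality classifier, keeping only documents scored above a threshold. The paper's actual classifier and threshold are not given here; the sketch below is a minimal illustration of threshold-based corpus filtering, with a toy heuristic standing in for the trained model.

```python
# Hypothetical sketch of threshold-based quality filtering.
# `quality_score` is a toy stand-in for a trained neural quality
# classifier; Gaperon's real classifier, features, and threshold
# are not specified in this summary.

def quality_score(doc: str) -> float:
    """Toy proxy for a neural quality classifier: the fraction of
    alphabetic/whitespace characters, as a crude fluency signal."""
    if not doc:
        return 0.0
    return sum(c.isalpha() or c.isspace() for c in doc) / len(doc)

def filter_corpus(docs: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only documents the classifier scores at or above `threshold`."""
    return [d for d in docs if quality_score(d) >= threshold]

corpus = [
    "Gaperon is a fully open suite of language models.",
    "%%% 0x1F ### &&& 123 !!!",  # low-quality boilerplate, filtered out
]
kept = filter_corpus(corpus)  # keeps only the first document
```

The paper's finding is that tightening this kind of threshold improves fluency of generations while hurting benchmark scores, motivating the filtering-contamination trade-off studied below.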
Problem

Research questions and friction points this paper is trying to address.

Studying how data filtering affects benchmark and generative performance
Investigating how contamination interacts with data filtering techniques
Exploring trade-offs between data curation, evaluation, and model safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully open multilingual model suite (datasets, code, and checkpoints released)
Deliberate late contamination to trade generation quality for benchmark scores
Harmless data poisoning during pretraining as a testbed for safety research
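Benchmark contamination, whether deliberate (as in the paper's late-training mixes) or accidental (amplified by neural filtering), is commonly measured by n-gram overlap between training documents and test sets. The paper's own detection method and n-gram size are not stated here; this is a generic sketch of that technique.

```python
# Generic n-gram overlap check for benchmark leakage; the choice of
# n=8 and word-level tokenization are assumptions, not the paper's
# actual contamination-detection setup.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Set of lowercased word n-grams in `text`."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_ratio(train_doc: str, test_doc: str, n: int = 8) -> float:
    """Fraction of the test document's n-grams that also occur in the
    training document; 1.0 means the test text is fully contained."""
    test = ngrams(test_doc, n)
    if not test:
        return 0.0
    return len(test & ngrams(train_doc, n)) / len(test)

train = "the model was trained on two trillion tokens of french and english web text"
leaked = "trained on two trillion tokens of french and english"
ratio = overlap_ratio(train, leaked)  # 1.0: every test n-gram appears in train
```

A high ratio flags a training document as contaminated with respect to a given test item; the paper's point is that such leakage can either be filtered out or, as in Gaperon's controlled experiments, deliberately introduced to study its effect on scores.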