🤖 AI Summary
High-quality, reproducible open-source large language models (LLMs) for code remain scarce, hindered by limited data transparency, opaque training protocols, and uneven synthetic data quality. To address this, we introduce OpenCoder, a top-tier open-source code LLM released as an "open cookbook". Our methodology centers on three empirically validated pillars: (1) code-optimized heuristic rules for data cleaning and deduplication, (2) high-recall retrieval of code-related text corpora, and (3) high-quality synthetic data used in both the annealing and supervised fine-tuning stages. We fully open-source the model weights, the end-to-end data processing pipeline, detailed training protocols, and rigorous ablation results. OpenCoder performs strongly among open-source models on the HumanEval, MBPP, and APPS benchmarks, and has already enabled reproducible validation and progress in code reasoning and code-agent research.
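For a concrete sense of pillar (1), the sketch below shows a toy cleaning-and-deduplication pass. It is an illustration only, not OpenCoder's released pipeline: the thresholds (`MAX_LINE_LEN`, `MIN_ALNUM_FRAC`) and the exact-hash strategy are assumptions made for this example; the actual pipeline applies many more language-aware rules plus fuzzy deduplication.

```python
import hashlib

# Thresholds are illustrative assumptions for this sketch, not the values
# used by OpenCoder's released pipeline, which tunes many rules per language.
MAX_LINE_LEN = 1000    # very long lines usually indicate minified or generated files
MIN_ALNUM_FRAC = 0.25  # files that are mostly symbols/whitespace are usually noise


def passes_heuristics(source: str) -> bool:
    """Toy code-specific cleaning filter in the spirit of pillar (1)."""
    lines = source.splitlines()
    if not lines:
        return False
    if max(len(line) for line in lines) > MAX_LINE_LEN:
        return False
    alnum = sum(ch.isalnum() for ch in source)
    return alnum / len(source) >= MIN_ALNUM_FRAC


def dedup_exact(files: list[str]) -> list[str]:
    """Keep the first occurrence of each exact file content, keyed by SHA-256.

    Fuzzy deduplication (e.g. MinHash over token shingles) would follow the
    same keep-one-representative pattern on clusters of near-duplicates.
    """
    seen: set[str] = set()
    kept: list[str] = []
    for src in files:
        digest = hashlib.sha256(src.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(src)
    return kept


raw_files = [
    "def add(a, b):\n    return a + b\n",
    "def add(a, b):\n    return a + b\n",  # exact duplicate: dropped by dedup
    "x" * 2000,                            # one 2000-char line: dropped by cleaning
]
corpus = [src for src in dedup_exact(raw_files) if passes_heuristics(src)]
print(len(corpus))  # -> 1
```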
📝 Abstract
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks, and agent systems. While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs suitable for rigorous scientific investigation, particularly those with reproducible data processing pipelines and transparent training protocols, remain limited. This scarcity stems from various challenges, including resource constraints, ethical considerations, and the competitive advantage of keeping advanced models proprietary. To address this gap, we introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community. Unlike most prior efforts, we release not only the model weights and inference code but also the reproducible training data, the complete data processing pipeline, rigorous experimental ablation results, and detailed training protocols for open scientific research. Through this comprehensive release, we identify the key ingredients for building a top-tier code LLM: (1) code-optimized heuristic rules for data cleaning and methods for data deduplication, (2) recall of text corpora related to code, and (3) high-quality synthetic data in both the annealing and supervised fine-tuning stages. By offering this level of openness, we aim to broaden access to all aspects of a top-tier code LLM, with OpenCoder serving as both a powerful model and an open foundation to accelerate research and enable reproducible advancements in code AI.
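To illustrate ingredient (2), here is a minimal sketch of a high-recall filter for code-related text. It assumes a keyword-density heuristic: the seed patterns in `CODE_SIGNALS` and the `threshold` value are hypothetical, and a production recall stage would more likely use a trained classifier (e.g. fastText) bootstrapped from pages known to discuss code.

```python
import re

# Hypothetical seed patterns for this sketch; the real recall stage need
# not work this way.
CODE_SIGNALS = [
    r"\bdef\b", r"\bimport\b", r"\breturn\b", r"#include",
    r"\bpublic static\b", r"\bcompiler?\b", r"\bAPI\b",
]
SIGNAL_RE = re.compile("|".join(CODE_SIGNALS), re.IGNORECASE)


def recall_score(doc: str) -> float:
    """Density of code-related signals per 100 whitespace-delimited tokens."""
    tokens = doc.split()
    if not tokens:
        return 0.0
    hits = sum(1 for _ in SIGNAL_RE.finditer(doc))
    return 100.0 * hits / len(tokens)


def recall_code_related(docs: list[str], threshold: float = 1.0) -> list[str]:
    """Keep documents whose signal density clears a tunable threshold.

    A permissive threshold favors recall; precision is recovered later by
    the cleaning and deduplication stages of the pipeline.
    """
    return [doc for doc in docs if recall_score(doc) >= threshold]


web_docs = [
    "How do I import a module in Python? Use the import statement.",
    "The weather today is sunny with a light breeze and mild temperatures.",
]
print(recall_code_related(web_docs))  # keeps only the first document
```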