CodeBoost: Boosting Code LLMs by Squeezing Knowledge from Code Snippets with RL

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current reinforcement learning (RL) post-training for code large language models heavily relies on costly, manually annotated instruction-response pairs, hindering scalability. To address this, we propose CodeBoost, a novel RL post-training framework that eliminates the need for human-written instructions and instead leverages only massive raw code corpora. Its key components are: (1) maximum-clique-based curation of a high-quality, diverse training corpus; (2) bi-directional sequence prediction; (3) error-aware prediction that learns from both correct and incorrect outputs; (4) heterogeneous data augmentation to enrich code semantics; and (5) heterogeneous rewarding that combines format correctness with execution feedback from both successes and failures. Extensive evaluation across major open-source code models (including CodeLlama and StarCoder) and standard benchmarks (HumanEval, MBPP, and APPS) demonstrates consistent and significant performance gains, validating CodeBoost's effectiveness, scalability, and strong generalization across diverse model architectures and coding tasks.
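The bi-directional prediction objective can be illustrated by turning a single raw snippet into two complementary training examples, one predicting forward and one predicting backward. The split point and prompt templates below are illustrative assumptions, not the paper's exact format:

```python
def make_bidirectional_examples(snippet: str, split_ratio: float = 0.5):
    """Turn one raw code snippet into a forward and a backward
    prediction example (illustrative sketch, not the paper's format)."""
    lines = snippet.strip().splitlines()
    k = max(1, int(len(lines) * split_ratio))
    prefix, suffix = "\n".join(lines[:k]), "\n".join(lines[k:])
    # Forward: given the beginning, predict the rest.
    forward = {"prompt": f"Complete the following code:\n{prefix}",
               "target": suffix}
    # Backward: given the ending, predict what came before.
    backward = {"prompt": f"Write the code that precedes this fragment:\n{suffix}",
                "target": prefix}
    return forward, backward

snippet = "def add(a, b):\n    return a + b\n\nprint(add(1, 2))"
fwd, bwd = make_bidirectional_examples(snippet)
```

Both examples are derived from the same snippet, so no human-written instruction is needed; the code itself supplies the supervision signal.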

📝 Abstract
Code large language models (LLMs) have become indispensable tools for building efficient and automated coding pipelines. Existing models are typically post-trained from general-purpose LLMs using reinforcement learning (RL) on "human instruction-final answer" pairs, where the instructions usually come from manual annotation. However, collecting high-quality coding instructions is both labor-intensive and difficult to scale. On the other hand, code snippets are abundantly available from various sources. This imbalance presents a major bottleneck in instruction-based post-training. We propose CodeBoost, a post-training framework that enhances code LLMs purely from code snippets, without relying on human-annotated instructions. CodeBoost introduces the following key components: (1) maximum-clique curation, which selects a representative and diverse training corpus from code; (2) bi-directional prediction, which enables the model to learn from both forward and backward prediction objectives; (3) error-aware prediction, which incorporates learning signals from both correct and incorrect outputs; (4) heterogeneous augmentation, which diversifies the training distribution to enrich code semantics; and (5) heterogeneous rewarding, which guides model learning through multiple reward types, including format correctness and execution feedback from both successes and failures. Extensive experiments across several code LLMs and benchmarks verify that CodeBoost consistently improves performance, demonstrating its value as a scalable and effective training pipeline.
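The execution-feedback part of heterogeneous rewarding can be sketched as a reward function that distinguishes syntactically invalid code, code that parses but fails at runtime, and code that runs cleanly. The graded reward values and the use of Python's built-in `compile`/`exec` are simplifying assumptions for illustration; the paper's actual reward design and sandboxing may differ:

```python
def execution_reward(code: str) -> float:
    """Assign a graded reward by compiling and executing a candidate
    snippet. The reward levels here are illustrative, not the paper's."""
    try:
        compiled = compile(code, "<candidate>", "exec")  # syntactic check
    except SyntaxError:
        return -1.0  # does not even parse
    try:
        exec(compiled, {})  # semantic check in a fresh namespace
    except Exception:
        return 0.1   # parses but fails at runtime: partial credit
    return 1.0       # runs without error

assert execution_reward("def f(:") == -1.0          # syntax error
assert execution_reward("1 / 0") == 0.1             # runtime error
assert execution_reward("x = sum(range(5))") == 1.0
```

Grading failures instead of scoring them all zero lets the policy receive learning signal from near-misses, which is the point of rewarding "both successes and failures." In practice untrusted candidate code should run in an isolated sandbox, not via bare `exec`.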
Problem

Research questions and friction points this paper is trying to address.

Enhancing code LLMs without human-annotated instructions
Overcoming the labor-intensive bottleneck of collecting coding instructions
Improving model performance using diverse code snippets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Maximum-clique curation for diverse training corpus
Bi-directional prediction for enhanced learning
Heterogeneous rewarding with multiple feedback types
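Maximum-clique curation can be pictured as selecting a set of snippets that are pairwise diverse: build a graph whose edges connect sufficiently dissimilar snippets, then find a large clique in it. The greedy heuristic and Jaccard token similarity below are simplifying assumptions; the paper's exact graph construction and clique algorithm are not described here:

```python
def token_similarity(a: str, b: str) -> float:
    """Jaccard similarity over whitespace tokens (illustrative metric)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def greedy_clique_curation(snippets, max_sim=0.5):
    """Greedily grow a clique in the graph whose edges connect snippets
    with pairwise similarity below max_sim (i.e., diverse pairs)."""
    selected = []
    for s in snippets:
        # s joins only if it is diverse w.r.t. every snippet already kept,
        # so the kept set stays a clique of the diversity graph.
        if all(token_similarity(s, t) < max_sim for t in selected):
            selected.append(s)
    return selected

corpus = [
    "def add(a, b): return a + b",
    "def add(a, b): return a + b  # duplicate",
    "for i in range(10): print(i)",
]
kept = greedy_clique_curation(corpus)  # near-duplicate is filtered out
```

Exact maximum-clique search is NP-hard, so a greedy or approximate method is the natural choice at corpus scale; the effect is deduplication plus coverage of distinct coding patterns.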