Scaling Data Difficulty: Improving Coding Models via Reinforcement Learning on Fresh and Challenging Problems

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code generation datasets often suffer from imbalanced difficulty, inconsistent formatting, and uneven quality, which limits model performance on complex programming tasks. This work proposes a difficulty-aware data curation paradigm built on a four-stage pipeline (collection, processing, filtering, and verification) and introduces an LLM-based predict-calibrate-select mechanism that automatically identifies novel, high-difficulty competitive programming problems by scoring them across five weighted dimensions, yielding the high-quality MicroCoder dataset. Combined with reinforcement learning algorithms such as GRPO, the curated data delivers a threefold improvement in training efficiency within 300 steps on LiveCodeBench compared to baseline datasets, an overall relative performance gain of up to 17.2%, and particularly strong gains on medium- and high-difficulty tasks.

📝 Abstract
Training next-generation code generation models requires high-quality datasets, yet existing datasets suffer from difficulty imbalance, format inconsistency, and data quality problems. We address these challenges through systematic data processing and difficulty scaling. We introduce a four-stage Data Processing Framework encompassing collection, processing, filtering, and verification, incorporating Automatic Difficulty Filtering via an LLM-based predict-calibrate-select framework that leverages multi-dimensional difficulty metrics across five weighted dimensions to retain challenging problems while removing simplistic ones. The resulting MicroCoder dataset comprises tens of thousands of curated real competitive programming problems from diverse platforms, emphasizing recency and difficulty. Evaluations on the strictly unseen LiveCodeBench demonstrate that MicroCoder achieves 3x larger performance gains within 300 training steps compared to widely used baseline datasets of comparable size, with consistent advantages under both GRPO and its variant training algorithms. The MicroCoder dataset delivers clear improvements on medium and hard problems across model sizes, achieving up to 17.2% relative gains in overall performance where model capabilities are most stretched. These results validate that difficulty-aware data curation improves model performance on challenging tasks, and offer practical insights for dataset creation in code generation.
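The predict-calibrate-select filtering described in the abstract might be sketched as below. Note this is an illustrative reconstruction, not the paper's implementation: the five dimension names, their weights, the use of a simple least-squares linear calibration, and the `known_difficulty` anchor field are all assumptions introduced here for the sake of a runnable example.

```python
# Hypothetical sketch of an LLM-based predict-calibrate-select difficulty filter.
# Dimension names and weights are illustrative assumptions, not from the paper.
DIMENSIONS = {
    "algorithmic_complexity": 0.30,
    "implementation_effort": 0.20,
    "edge_case_density": 0.20,
    "mathematical_insight": 0.15,
    "novelty": 0.15,
}

def predict(problem: dict) -> float:
    """Predict: combine per-dimension scores (stand-ins for LLM judgments,
    each on a 0-10 scale) into one weighted difficulty score."""
    return sum(w * problem["scores"][d] for d, w in DIMENSIONS.items())

def calibrate(predicted: list[float], anchors: list[tuple[float, float]]) -> list[float]:
    """Calibrate: fit a least-squares linear map from predicted scores to
    known difficulties of anchor problems (anchors need >= 2 distinct x's)."""
    xs, ys = zip(*anchors)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in anchors) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return [slope * p + intercept for p in predicted]

def select(problems: list[dict], threshold: float) -> list[dict]:
    """Select: keep problems whose calibrated difficulty clears the threshold,
    discarding simplistic ones."""
    predicted = [predict(p) for p in problems]
    anchors = [(predict(p), p["known_difficulty"])
               for p in problems if "known_difficulty" in p]
    calibrated = calibrate(predicted, anchors)
    return [p for p, c in zip(problems, calibrated) if c >= threshold]
```

In this sketch, problems carrying a platform-assigned `known_difficulty` serve as calibration anchors, so the LLM's raw predictions are rescaled before thresholding rather than trusted directly.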
Problem

Research questions and friction points this paper is trying to address.

data difficulty
code generation
dataset quality
difficulty imbalance
competitive programming
Innovation

Methods, ideas, or system contributions that make the work stand out.

difficulty-aware data curation
reinforcement learning
code generation
automatic difficulty filtering
competitive programming dataset