EvolveCoder: Evolving Test Cases via Adversarial Verification for Code Reinforcement Learning

πŸ“… 2026-03-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing reinforcement learning datasets for code generation provide weak and static verification signals, limiting their effectiveness in improving the coding ability of large language models. This work proposes a solution-conditioned adversarial verification framework that evolves test cases over multiple rounds, increasing their difficulty and discriminative power while reducing redundancy, and uses it to construct a high-quality reinforcement learning dataset. The resulting EvolveCoder-22k dataset improves Qwen3-4B by an average of 4.2 points across four downstream benchmarks; on the evolved test cases, pass@1 drops from 43.80% to 31.22%, indicating substantially stronger verification signals.

πŸ“ Abstract
Reinforcement learning with verifiable rewards (RLVR) is a promising approach for improving code generation in large language models, but its effectiveness is limited by weak and static verification signals in existing coding RL datasets. In this paper, we propose a solution-conditioned and adversarial verification framework that iteratively refines test cases based on the execution behaviors of candidate solutions, with the goal of increasing difficulty, improving discriminative power, and reducing redundancy. Based on this framework, we introduce EvolveCoder-22k, a large-scale coding reinforcement learning dataset constructed through multiple rounds of adversarial test case evolution. Empirical analysis shows that iterative refinement substantially strengthens verification, with pass@1 decreasing from 43.80 to 31.22. Reinforcement learning on EvolveCoder-22k yields stable optimization and consistent performance gains, improving Qwen3-4B by an average of 4.2 points across four downstream benchmarks and outperforming strong 4B-scale baselines. Our results highlight the importance of adversarial, solution-conditioned verification for effective and scalable reinforcement learning in code generation.
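The core loop described in the abstract, iteratively refining test cases based on the execution behaviors of candidate solutions, can be sketched on a toy task. The helper names (`run_test`, `evolve_tests`) and the mutation strategy below are illustrative assumptions, not the paper's actual implementation: each round drops tests that every candidate passes (no discriminative power) and derives harder variants from the tests that separate correct from incorrect solutions, labeling them with a trusted reference solution.

```python
import random

def run_test(solution, test):
    """Return True if the solution's output matches the expected output."""
    inp, expected = test
    try:
        return solution(inp) == expected
    except Exception:
        return False

def evolve_tests(solutions, tests, reference, rounds=3, seed=0):
    """Toy sketch of solution-conditioned adversarial test evolution.

    Each round: (1) execute every candidate solution on every test,
    (2) drop redundant tests that all candidates pass, (3) mutate each
    discriminative test into a harder nearby variant, using the trusted
    reference solution to label the expected output.
    """
    rng = random.Random(seed)
    for _ in range(rounds):
        kept = []
        for test in tests:
            results = [run_test(s, test) for s in solutions]
            if all(results):  # redundant: carries no verification signal
                continue
            kept.append(test)
            # probe a nearby input as a harder variant of this test
            new_inp = test[0] + rng.randint(1, 10)
            kept.append((new_inp, reference(new_inp)))
        tests = kept
    return tests

# Toy task: return n squared. One correct and one buggy candidate.
reference = lambda n: n * n
buggy = lambda n: n * n if n < 5 else n + n  # wrong for n >= 5

seed_tests = [(1, 1), (2, 4), (6, 36)]
evolved = evolve_tests([reference, buggy], seed_tests, reference)
pass_rate = sum(run_test(buggy, t) for t in evolved) / len(evolved)
```

After evolution, the easy tests that both candidates pass are pruned and only discriminative tests remain, so the buggy candidate's pass rate collapses, mirroring the pass@1 drop the paper reports for its evolved test suite.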
Problem

Research questions and friction points this paper is trying to address.

code generation
reinforcement learning
verification signals
test cases
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial verification
solution-conditioned test evolution
reinforcement learning for code generation
iterative test refinement
EvolveCoder-22k