🤖 AI Summary
This work addresses the scalability limits that scarce, costly human-annotated test cases impose on reinforcement learning for code generation. It proposes a fully label-free code–test co-evolution framework, ZeroCoder, which jointly trains a code generator and a test generator, leveraging execution feedback from self-generated samples to construct dynamic supervision signals. Key components include rank-based pre-filtering of low-information instances, a curriculum balancing test validity against mutation-driven discriminativeness, and a dynamic Bayesian selector (DyB4) that mitigates selector drift using as few as 10 labeled instances. Without any ground-truth labels, the approach improves code generation by up to 14.5%; with DyB4's minimal supervision, the gain reaches 21.6% and test generation improves by 24.3%, approaching the performance of supervised oracle methods.
📝 Abstract
Code generation is important in software engineering, and Reinforcement Learning with Verifiable Rewards (RLVR) is a powerful paradigm for improving it through execution-based feedback. However, most RLVR pipelines rely on human-curated tests, so progress is bottlenecked by scarce and costly supervision. Prior work has tried to ground rewards in self-generated tests, but because models are sub-optimal at test generation, the resulting tests lack discriminative power and the benefit is limited. We aim to improve code generation without ground-truth supervision by co-evolving code and test generation, so that their interactions yield progressively more informative supervision. To this end, we present ZeroCoder, a fully label-free co-evolutionary framework that jointly trains a Coder and a Tester using execution feedback from self-generated code–test interactions. For each problem, ZeroCoder executes sampled solutions against sampled tests to form a passing matrix, identifies a consensus subset of likely-correct solutions and consistent tests via a pluggable selection algorithm, and derives role-specific rewards. To ensure reward quality, ZeroCoder filters low-information instances via rank-based pre-filtering and trains the Tester with a curriculum that balances validity against mutation-driven discriminativeness. We further identify selector drift, the progressive miscalibration of fixed selection rules during co-evolution, and introduce DyB4, a Bayesian selector that uses as few as 10 labeled instances to dynamically recalibrate its priors. Across three models and six benchmarks, ZeroCoder consistently improves both code generation and test generation. In the fully label-free setting, it improves code generation by up to 14.5% over the base model on Qwen2.5-Coder-7B-Instruct. With DyB4, the gain reaches 21.6%, while test generation improves by 24.3%, approaching oracle-supervised performance.
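The abstract describes executing sampled solutions against sampled tests to form a passing matrix and selecting a consensus subset via a pluggable selector. The paper does not spell out a concrete selector here, so the sketch below is a hypothetical majority-consensus instance: it groups solutions by identical pass/fail signatures and keeps the largest agreement cluster as the likely-correct set, treating the tests that cluster passes as consistent. All function names and the toy matrix are illustrative, not from the paper.

```python
from collections import defaultdict

def consensus_select(passing):
    """Hypothetical consensus selection over a passing matrix.

    passing[i][j] is True iff sampled solution i passes sampled test j.
    Solutions with identical pass/fail signatures are grouped, the
    largest group is taken as the likely-correct consensus, and the
    tests that group passes are treated as consistent.
    """
    groups = defaultdict(list)
    for i, row in enumerate(passing):
        groups[tuple(row)].append(i)
    # Largest agreement cluster = consensus solutions
    signature, solutions = max(groups.items(), key=lambda kv: len(kv[1]))
    tests = [j for j, ok in enumerate(signature) if ok]
    return solutions, tests

# Toy passing matrix: 3 sampled solutions x 3 sampled tests
passing = [
    [True, True, False],   # solution 0
    [True, True, False],   # solution 1 (agrees with solution 0)
    [False, True, True],   # solution 2 (outlier)
]
sols, tests = consensus_select(passing)
# sols  -> [0, 1]: the two agreeing solutions form the consensus
# tests -> [0, 1]: the tests the consensus group passes
```

Role-specific rewards would then be derived from membership in these consensus sets; the paper's actual selection algorithm is pluggable and may differ from this majority-signature heuristic.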
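The abstract states only that DyB4 recalibrates its priors with as few as 10 labeled instances; it does not give the update rule. One standard way to do such a recalibration is a Beta–Bernoulli conjugate update on the probability that the selector's choice is correct, sketched below under that assumption. The function name, prior values, and outcome encoding are all hypothetical.

```python
def recalibrate_prior(alpha, beta, labeled_outcomes):
    """Hypothetical Beta-Bernoulli recalibration of a selector prior.

    alpha, beta: parameters of a Beta prior over the probability that
    the selector picks a verified-correct solution.
    labeled_outcomes: booleans from a handful of labeled instances,
    True if the selector's chosen solution was actually correct.
    """
    for ok in labeled_outcomes:
        if ok:
            alpha += 1  # observed success
        else:
            beta += 1   # observed failure
    return alpha, beta

# 10 labeled instances: 8 successes, 2 failures, uniform Beta(1, 1) prior
alpha, beta = recalibrate_prior(1.0, 1.0, [True] * 8 + [False] * 2)
posterior_mean = alpha / (alpha + beta)  # 9 / 12 = 0.75
```

A selector could repeat this update each training round, letting the posterior mean track how trustworthy its consensus rule currently is and counteracting the selector drift the paper identifies.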