🤖 AI Summary
This work addresses the limited ability of large language models to repair implementation errors in competitive-level code generation, a weakness rooted in the absence of test feedback and of iterative refinement for candidate solutions. The authors propose FixAudit, a framework that couples a repair strategy guided by failing test cases with a dynamic test-generation mechanism that reads and analyzes the candidate code, tightly integrating testing and repair. FixAudit trains a single shared model through a four-stage process, unifying execution feedback, dynamic test synthesis, and program repair into a closed-loop debugging pipeline. Experiments on the APPS, CodeContests, and xCodeEval benchmarks show that the 7B variant of FixAudit surpasses the zero-shot performance of its 32B counterpart, and that, relative to strong baselines built on the same 7B base model, it improves average Pass@1 by 35.1%–36.8% and average AvgPassRatio by 7.1%–24.5%.
📝 Abstract
Large language models (LLMs) have made remarkable progress in code generation, but competitive programming remains a challenge. Recent training-based methods have improved code generation by using reinforcement learning (RL) with execution feedback. The more recent framework CURE further incorporates test generation into the training process, jointly training a Coder and a Tester within a single model. At inference time, the Coder generates many candidate programs, the Tester generates tests from the problem description, and the candidate that passes the most generated tests is selected as the final answer. However, CURE has two critical limitations. First, the Tester never reads any candidate code, so its tests often fail to expose implementation-specific bugs. Second, the Coder generates every candidate from scratch and never learns to fix a buggy program based on a failed test. To address these limitations, we propose FixAudit, which approaches competitive code generation from a new perspective: starting from a single initial candidate, it iteratively improves that candidate through a targeted test-and-repair debugging cycle. The framework trains one shared model with two specialized roles across four stages: the Fixer, which repairs the current candidate based on a failing test, and the Auditor, which reads the candidate code to generate new tests that expose its remaining bugs. We evaluate FixAudit on three benchmarks: APPS, CodeContests, and xCodeEval. Applied to a 7B model, the framework surpasses the average performance of the larger 32B baseline from the same model family under the zero-shot setting. Compared to strong baselines built on the same 7B base model, FixAudit improves average Pass@1 by 35.1% to 36.8% and average AvgPassRatio by 7.1% to 24.5%.
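The Auditor/Fixer debugging cycle described above can be sketched as a simple control loop. Note this is a minimal illustration, not the paper's implementation: `auditor` and `fixer` here are toy rule-based stand-ins for the trained LLM roles, and the buggy/fixed candidate programs are invented examples.

```python
def run_candidate(code, x):
    """Execute a candidate program (a string defining `solve`) on input x."""
    env = {}
    exec(code, env)
    return env["solve"](x)

# Hypothetical candidate programs for illustration.
BUGGY = "def solve(x):\n    return x  # wrong for negative inputs\n"
FIXED = "def solve(x):\n    return -x if x < 0 else x\n"

def auditor(code):
    # In FixAudit the Auditor reads the candidate code to generate
    # bug-exposing tests; here we just return fixed (input, expected) pairs.
    return [(3, 3), (-5, 5)]

def fixer(code, failing_test):
    # In FixAudit the Fixer repairs the candidate given a failing test;
    # here we simply swap in the corrected implementation.
    return FIXED

def debug_loop(code, max_rounds=3):
    """Iteratively audit and repair a single candidate, as in FixAudit."""
    for _ in range(max_rounds):
        failing = next(
            (t for t in auditor(code) if run_candidate(code, t[0]) != t[1]),
            None,
        )
        if failing is None:
            return code  # all generated tests pass; accept the candidate
        code = fixer(code, failing)  # repair guided by the failing test
    return code  # budget exhausted; return best effort

repaired = debug_loop(BUGGY)
print(run_candidate(repaired, -5))  # → 5
```

The key structural point the sketch captures is that each repair is driven by a concrete failing test produced from the current candidate, rather than regenerating solutions from scratch.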