AP2O: Correcting LLM-Generated Code Errors Type by Type Like Humans via Adaptive Progressive Preference Optimization

📅 2025-09-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing offline preference optimization methods rely solely on binary pass/fail signals from code execution, ignoring the fine-grained semantic error information embedded in failure cases, thereby limiting the reliability of LLM-generated code. To address this, we propose a human-inspired, progressive error-correction framework: first, we construct an *error notebook* to explicitly model compilation- and runtime-error categories; second, we design an *adaptive replay mechanism* that dynamically prioritizes high-frequency or hard-to-resolve error types, enabling fine-grained, stage-wise preference optimization. Our approach operates within a unified offline preference learning paradigm and demonstrates consistent effectiveness across diverse open-weight models (0.5B–34B), including the Llama, Qwen, and DeepSeek series. With significantly fewer preference samples, it achieves up to a 3% absolute improvement in pass@k, markedly enhancing both the correctness and robustness of generated code.
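The error-notebook idea can be sketched roughly as follows: failed generations are executed against their tests and grouped by the exception class they raise, so later optimization stages can target one error type at a time. This is a minimal illustration under assumed details, not the paper's implementation; `classify_failure` and `build_error_notebook` are hypothetical names.

```python
import subprocess
import sys
from collections import defaultdict

def classify_failure(code, test):
    """Run a candidate solution against its test in a subprocess; return the
    error type (e.g. 'SyntaxError', 'AssertionError', 'Timeout') or None if it passes."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code + "\n" + test],
            capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return "Timeout"
    if proc.returncode == 0:
        return None  # passed: nothing to record in the notebook
    # The last line of a Python traceback names the exception class.
    stderr = proc.stderr.strip()
    last = stderr.splitlines()[-1] if stderr else "UnknownError"
    return last.split(":")[0]

def build_error_notebook(samples):
    """Group failed generations by error type, i.e. an 'error notebook'."""
    notebook = defaultdict(list)
    for code, test in samples:
        err = classify_failure(code, test)
        if err is not None:
            notebook[err].append(code)
    return notebook
```

A progressive schedule would then iterate over the notebook's keys, building preference pairs for one error category at a time.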

๐Ÿ“ Abstract
LLMs' code generation capabilities have substantially improved the effectiveness of programming tasks. However, LLM-generated code still suffers from compilation and runtime errors. Existing offline preference optimization methods primarily focus on enhancing LLMs' coding abilities using pass/fail signals in the preference data, overlooking the deeper error types in the failed code. To address this, we propose Adaptively Progressive Preference Optimization (AP2O) for coding (i.e., AP2O-Coder), a method that adaptively and methodically guides LLMs to reduce code errors in code generation. Specifically, we construct an error notebook from failed code and progressively optimize the LLM to correct errors type by type. Furthermore, we adaptively replay error types to tailor training to the LLM's changing weaknesses. Through extensive experiments on both code and general LLMs (Llama, Qwen, and DeepSeek series) with parameters ranging from 0.5B to 34B, our AP2O-Coder improves code generation performance by up to 3% in pass@k while using less preference data. Code: https://github.com/TsingZ0/AP2O
Problem

Research questions and friction points this paper is trying to address.

Correcting compilation and runtime errors in LLM-generated code
Addressing the overlooked deep-level error types in failed code
Adaptively optimizing LLMs to reduce specific code error types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively Progressive Preference Optimization for code correction
Constructs error notebook from failed codes for training
Replays error types to address LLM's changing weaknesses
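The adaptive replay in the innovation list can be sketched as a weighted sampler over error types, where a type is replayed more often the more frequent and the less resolved it currently is. The weighting form below is an assumption for illustration; `replay_schedule` and `sample_error_type` are hypothetical names, and the paper's actual schedule may differ.

```python
import random

def replay_schedule(error_counts, resolved, temperature=1.0):
    """Sampling distribution over error types.

    error_counts: {error_type: number of failures observed}
    resolved:     {error_type: fraction in [0, 1] the model now gets right}
    Types that are frequent and still poorly resolved get higher weight.
    """
    weights = {}
    for err, count in error_counts.items():
        unresolved = count * (1.0 - resolved.get(err, 0.0))
        weights[err] = unresolved ** (1.0 / temperature)
    total = sum(weights.values())
    if total == 0:  # everything resolved: fall back to uniform replay
        n = len(weights)
        return {err: 1.0 / n for err in weights}
    return {err: w / total for err, w in weights.items()}

def sample_error_type(error_counts, resolved, rng=random):
    """Draw the next error type to replay during preference optimization."""
    probs = replay_schedule(error_counts, resolved)
    errs, ws = zip(*probs.items())
    return rng.choices(errs, weights=ws, k=1)[0]
```

Recomputing the schedule after each optimization stage lets replay track the model's changing weaknesses, which is the behavior the bullet above describes.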
👥 Authors
Jianqing Zhang, Shanghai Jiao Tong University
Wei Xia, Tencent
Hande Dong, Tencent (machine learning, data mining, NLP)
Qiang Lin, University of Rochester (Nonlinear Photonics, Quantum Photonics, Mechanical Photonics)
Jian Cao, Shanghai Jiao Tong University