🤖 AI Summary
Existing offline preference optimization methods rely solely on binary pass/fail signals from code execution, ignoring the fine-grained semantic error information embedded in failure cases, thereby limiting the reliability of LLM-generated code. To address this, we propose a human-inspired, progressive error-correction framework: first, we construct an *error notebook* to explicitly model compilation- and runtime-error categories; second, we design an *adaptive replay mechanism* that dynamically prioritizes high-frequency or hard-to-resolve error types, enabling fine-grained, stage-wise preference optimization. Our approach operates within a unified offline preference learning paradigm and demonstrates consistent effectiveness across diverse open-weight models (0.5B–34B), including the Llama, Qwen, and DeepSeek series. With significantly fewer preference samples, it achieves up to a 3% absolute improvement in pass@k, markedly enhancing both the correctness and robustness of generated code.
📄 Abstract
LLMs' code generation capabilities have substantially improved the effectiveness of programming tasks. However, LLM-generated code still suffers from compilation and runtime errors. Existing offline preference optimization methods primarily focus on enhancing LLMs' coding abilities using pass/fail signals in the preference data, overlooking the deeper error types in the failed code. To address this, we propose Adaptively Progressive Preference Optimization (AP2O) for coding (i.e., AP2O-Coder), a method that guides LLMs adaptively and methodically to reduce errors in generated code. Specifically, we construct an error notebook from failed code and progressively optimize the LLM to correct errors type by type. Furthermore, we adaptively replay error types to tailor training to the LLM's changing weaknesses throughout the process. Through extensive experiments on both code and general LLMs (the Llama, Qwen, and DeepSeek series) with parameters ranging from 0.5B to 34B, our AP2O-Coder improves code generation performance by up to 3% in pass@k while using less preference data. Code: https://github.com/TsingZ0/AP2O
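The two components described above, an error notebook grouping failures by error type and an adaptive replay step that revisits the types the model still struggles with, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and function names, and the frequency-proportional sampling scheme, are assumptions for exposition.

```python
from collections import defaultdict
import random

class ErrorNotebook:
    """Hypothetical sketch: groups failed generations by their
    compilation/runtime error type (e.g. 'SyntaxError', 'TimeoutError')."""
    def __init__(self):
        # error_type -> list of failed code samples still unresolved
        self.entries = defaultdict(list)

    def record(self, error_type, sample):
        self.entries[error_type].append(sample)

    def resolve(self, error_type, sample):
        # Drop a sample once the model has learned to correct it.
        if sample in self.entries[error_type]:
            self.entries[error_type].remove(sample)

def adaptive_replay(notebook, batch_size, rng=random):
    """Sample failed examples with probability proportional to how many
    unresolved failures each error type still has, so frequent or
    hard-to-fix error types are replayed more often during training."""
    types = [t for t, samples in notebook.entries.items() if samples]
    if not types:
        return []
    weights = [len(notebook.entries[t]) for t in types]
    chosen_types = rng.choices(types, weights=weights, k=batch_size)
    return [rng.choice(notebook.entries[t]) for t in chosen_types]
```

In a full pipeline, each replayed failure would be paired with a passing solution to form a preference pair for offline optimization; the sketch only covers the bookkeeping and sampling logic.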