🤖 AI Summary
This work investigates large language models' (LLMs) ability to comprehend and leverage feedback in feedback-driven code repair. To this end, we introduce FeedbackEval, the first systematic benchmark covering both single-turn and multi-turn repair scenarios, together with a structured framework for quantifying feedback effectiveness. A comprehensive empirical evaluation across five state-of-the-art models, including GPT-4o, Claude-3.5, and Gemini-1.5, combined with multi-dimensional prompt engineering, yields three principal findings: (1) structured feedback, especially test-oriented feedback, substantially improves repair success rates; (2) marginal gains diminish markedly after two to three iterative repair rounds; and (3) prompt structures incorporating docstrings and explicit repair instructions perform best, whereas persona-based, chain-of-thought, and few-shot prompting offer limited benefit in single-iteration settings.
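To make the iterative setting concrete, the kind of test-feedback-driven repair loop the summary describes might look roughly like the sketch below. This is a minimal illustration under assumptions, not FeedbackEval's actual harness: `query_llm` and `run_tests` are hypothetical stand-ins for a model API and a unit-test runner, and the prompt wording is invented.

```python
# Minimal sketch of an iterative, test-feedback-driven repair loop.
# `query_llm` and `run_tests` are hypothetical callables supplied by the
# caller; they are not part of FeedbackEval itself.

MAX_ROUNDS = 3  # gains tend to flatten after two or three rounds


def iterative_repair(buggy_code: str, docstring: str, query_llm, run_tests) -> str:
    code = buggy_code
    for _ in range(MAX_ROUNDS):
        passed, test_feedback = run_tests(code)
        if passed:
            break
        # Structured, test-oriented feedback (failing test plus error message)
        # is fed back to the model along with explicit repair instructions.
        prompt = (
            f"Docstring:\n{docstring}\n\n"
            f"Current code:\n{code}\n\n"
            f"Failing test feedback:\n{test_feedback}\n\n"
            "Fix the code so that all tests pass. Return only the corrected function."
        )
        code = query_llm(prompt)
    return code
```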
📝 Abstract
Code repair is a fundamental task in software development, facilitating efficient bug resolution and software maintenance. Although large language models (LLMs) have demonstrated considerable potential in automated code repair, their ability to comprehend and effectively leverage diverse types of feedback remains insufficiently understood. To bridge this gap, we introduce FeedbackEval, a systematic benchmark for evaluating LLMs' feedback comprehension and performance in code repair tasks. We conduct a comprehensive empirical study on five state-of-the-art LLMs, including GPT-4o, Claude-3.5, Gemini-1.5, GLM-4, and Qwen2.5, to evaluate their behavior under both single-iteration and iterative code repair settings. Our results show that structured feedback, particularly in the form of test feedback, leads to the highest repair success rates, while unstructured feedback proves significantly less effective. Iterative feedback further enhances repair performance, though the marginal benefit diminishes after two or three rounds. Moreover, prompt structure is shown to be critical: incorporating docstrings, contextual information, and explicit guidelines substantially improves outcomes, whereas persona-based, chain-of-thought, and few-shot prompting strategies offer limited benefits in single-iteration scenarios. This work introduces a robust benchmark and delivers practical insights to advance the understanding and development of feedback-driven code repair using LLMs.
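For concreteness, the abstract's finding on prompt structure could translate into a single-turn repair prompt along the lines sketched below. This is an illustrative template only; the field names and wording are assumptions, not FeedbackEval's actual prompt schema.

```python
# Illustrative single-turn repair prompt assembling the components the study
# found most helpful: docstring, contextual information, and explicit repair
# guidelines. All field names here are hypothetical.

def build_repair_prompt(docstring: str, context: str, buggy_code: str, feedback: str) -> str:
    return "\n\n".join([
        f"Function docstring:\n{docstring}",
        f"Relevant context (imports, helper functions):\n{context}",
        f"Buggy implementation:\n{buggy_code}",
        f"Feedback (e.g., failing test output or error trace):\n{feedback}",
        # Explicit repair instructions; persona, chain-of-thought, or few-shot
        # additions reportedly gave limited benefit in single-iteration settings.
        "Repair the implementation so it satisfies the docstring and the feedback. "
        "Return only the corrected code.",
    ])
```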