FeedbackEval: A Benchmark for Evaluating Large Language Models in Feedback-Driven Code Repair Tasks

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the ability of large language models (LLMs) to comprehend and leverage feedback in feedback-driven code repair. It introduces FeedbackEval, a systematic benchmark covering both single-turn and multi-turn repair scenarios, together with a structured framework for quantifying feedback effectiveness. A comprehensive empirical evaluation of five state-of-the-art models (GPT-4o, Claude-3.5, Gemini-1.5, GLM-4, and Qwen2.5) under multi-dimensional prompt engineering establishes three principal findings: (1) structured feedback, especially test-oriented feedback, substantially improves repair success rates; (2) marginal gains diminish markedly after two to three iterative repair rounds; and (3) prompt structures incorporating docstrings and explicit repair instructions yield the best performance, while conventional few-shot and chain-of-thought strategies offer limited benefit.

📝 Abstract
Code repair is a fundamental task in software development, facilitating efficient bug resolution and software maintenance. Although large language models (LLMs) have demonstrated considerable potential in automated code repair, their ability to comprehend and effectively leverage diverse types of feedback remains insufficiently understood. To bridge this gap, we introduce FeedbackEval, a systematic benchmark for evaluating LLMs' feedback comprehension and performance in code repair tasks. We conduct a comprehensive empirical study on five state-of-the-art LLMs, including GPT-4o, Claude-3.5, Gemini-1.5, GLM-4, and Qwen2.5, to evaluate their behavior under both single-iteration and iterative code repair settings. Our results show that structured feedback, particularly in the form of test feedback, leads to the highest repair success rates, while unstructured feedback proves significantly less effective. Iterative feedback further enhances repair performance, though the marginal benefit diminishes after two or three rounds. Moreover, prompt structure is shown to be critical: incorporating docstrings, contextual information, and explicit guidelines substantially improves outcomes, whereas persona-based, chain-of-thought, and few-shot prompting strategies offer limited benefits in single-iteration scenarios. This work introduces a robust benchmark and delivers practical insights to advance the understanding and development of feedback-driven code repair using LLMs.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' feedback comprehension in code repair
Assessing effectiveness of structured vs unstructured feedback
Analyzing impact of prompt structure on repair outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

FeedbackEval benchmark evaluates LLMs in code repair
Structured test feedback boosts repair success rates
Prompt structure with docstrings enhances repair outcomes
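The iterative repair setting the benchmark evaluates can be sketched as a simple loop: run the candidate code against tests, format the failures as structured test feedback, and ask the model to repair. The sketch below is illustrative only; `call_llm` stands in for any chat-model API, and the prompt format is an assumption, not the benchmark's exact template.

```python
# Minimal sketch of a feedback-driven repair loop (assumed structure,
# not FeedbackEval's actual implementation).

def run_tests(code: str, tests: list) -> list:
    """Execute the candidate code and return messages for failing tests."""
    namespace = {}
    try:
        exec(code, namespace)
    except Exception as e:
        return [f"Code failed to load: {e}"]
    failures = []
    for expr, expected in tests:
        try:
            result = eval(expr, namespace)
            if result != expected:
                failures.append(f"{expr} returned {result!r}, expected {expected!r}")
        except Exception as e:
            failures.append(f"{expr} raised {type(e).__name__}: {e}")
    return failures

def build_prompt(code: str, failures: list) -> str:
    """Structured test feedback: the feedback type the paper finds most effective."""
    feedback = "\n".join(f"- {f}" for f in failures)
    return (
        "Fix the following function so that all tests pass.\n\n"
        f"```python\n{code}\n```\n\nFailing tests:\n{feedback}"
    )

def repair_loop(code: str, tests: list, call_llm, max_rounds: int = 3):
    """Iterate repair rounds; the paper reports diminishing returns after 2-3."""
    for round_no in range(max_rounds):
        failures = run_tests(code, tests)
        if not failures:
            return code, round_no  # repaired before this round
        code = call_llm(build_prompt(code, failures))
    return code, max_rounds

# Toy demo with a mock "model" that returns a fixed implementation.
buggy = "def add(a, b):\n    return a - b\n"
tests = [("add(2, 3)", 5), ("add(0, 0)", 0)]
mock_llm = lambda prompt: "def add(a, b):\n    return a + b\n"
fixed, rounds = repair_loop(buggy, tests, mock_llm)
print(rounds)  # rounds of feedback consumed before the tests passed
```

The single-turn setting studied in the paper corresponds to `max_rounds=1`; the multi-turn setting simply raises the round budget and feeds each round's test failures back into the next prompt.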
Dekun Dai
School of Software Engineering, Sun Yat-sen University, Zhuhai, China
MingWei Liu
School of Software Engineering, Sun Yat-sen University, Zhuhai, China
Anji Li
Sun Yat-sen University
AI4SE · Software Testing
Jialun Cao
The Hong Kong University of Science and Technology
SE for AI · AI for SE
Yanlin Wang
School of Software Engineering, Sun Yat-sen University, Zhuhai, China
Chong Wang
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Xin Peng
East China University of Science and Technology
Artificial Intelligence · Machine Learning · Complex Process Modeling
Zibin Zheng
IEEE Fellow, Highly Cited Researcher, Sun Yat-sen University, China
Blockchain · Smart Contract · Services Computing · Software Reliability