Large Language Model Critics for Execution-Free Evaluation of Code Changes

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for evaluating code changes—such as build status or log analysis—are too sparse to support fine-grained quality assessment in multi-step, agent-based software engineering. This paper introduces an execution-free, reference-aware LLM-based critic framework for step-level assessment of both the semantics and executability of repo-level code changes. Using the gold test patch as a reference, the critics predict executability at each edit location (F1 = 91.6%); aggregating these predictions recovers the build status in 84.8% of SWE-bench instances, outperforming other reference-free and reference-aware LLM critics by 38.9 to 72.5 percentage points. The framework also enables comparison of patches produced by different agentic workflows, and the accompanying library is open-sourced for plug-and-play use with other workflows and benchmarks.

📝 Abstract
Large language models (LLMs) offer a promising way forward for automating software engineering tasks, such as bug fixes and feature additions, via multi-step LLM-based agentic workflows. However, existing metrics for evaluating such workflows—mainly build status and occasionally log analysis—are too sparse and limited to provide the information needed to assess the quality of changes made. In this work, we design LLM-based critics to derive well-structured and rigorous intermediate/step-level, execution-free evaluation proxies for repo-level code changes. Importantly, we assume access to the gold test patch for the problem (i.e., reference-aware) to assess both the semantics and executability of generated patches. With the gold test patch as a reference, we predict the executability of all edit locations with an F1 score of 91.6%, and by aggregating these predictions, we can predict the build status in 84.8% of the instances in SWE-bench. In particular, such an execution-focused LLM critic outperforms other reference-free and reference-aware LLM critics by 38.9% to 72.5%. Moreover, we demonstrate the usefulness of such a reference-aware framework in comparing patches generated by different agentic workflows. Finally, we open-source the library developed for this project, which supports reuse with other agentic workflows and other benchmarks. The source code is available at https://github.com/amazon-science/code-agent-eval.
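The aggregation step described in the abstract—per-edit-location executability verdicts rolled up into a build-status prediction—can be sketched as follows. This is a minimal illustration, not the paper's actual API: the `LocationVerdict` type, its field names, and the all-locations-must-pass aggregation rule are assumptions for exposition.

```python
from dataclasses import dataclass

@dataclass
class LocationVerdict:
    """One critic verdict per edited location in a patch.
    Field names are illustrative, not taken from the paper's library."""
    file: str
    line: int
    executable: bool  # the LLM critic's executability prediction

def predict_build_status(verdicts: list[LocationVerdict]) -> bool:
    """Aggregate per-location verdicts into a build-status prediction.
    One plausible rule: the build is predicted to pass only if every
    edited location is judged executable."""
    return all(v.executable for v in verdicts)

verdicts = [
    LocationVerdict("src/parser.py", 42, True),
    LocationVerdict("src/parser.py", 97, False),
]
print(predict_build_status(verdicts))  # False: one location is judged non-executable
```

Under this conservative rule, a single non-executable edit location predicts a failed build; other aggregations (e.g., majority vote) are equally possible and the paper's exact scheme may differ.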
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Software Engineering
Code Modification Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code Evaluation
AI Code Reviewer
SWE-bench Benchmark
Aashish Yadavally
Assistant Professor, University of Central Florida
AI4SEArtificial IntelligenceSoftware Engineering
Hoan Nguyen
AWS AI Labs, Santa Clara, USA
Laurent Callot
AWS AI Labs, Santa Clara, USA
Gauthier Guinet
AWS AI Labs, Santa Clara, USA