Towards more realistic evaluation of LLM-based code generation: an experimental study and beyond

📅 2024-06-11
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing LLM-based repository-level code generation evaluations ignore software evolution, leading to future-context leakage and omission of critical historical context—severely inflating model performance estimates. Method: We propose the first evolution-aware evaluation paradigm, introducing HumanEvo—a benchmark with dependency-level annotations—and an automated, execution-driven evaluation toolkit. Our methodology integrates dynamic repository snapshot sampling, dependency graph analysis, and comparative experiments across seven mainstream LLMs, with correctness validated via executable output. Contribution/Results: Empirical evaluation reveals that conventional benchmarks overestimate LLM accuracy by 10.0%–61.1%. We publicly release the HumanEvo dataset, evaluation toolkit, and full reproducibility package, establishing a new standard and foundational infrastructure for code generation research in evolutionary software contexts.

📝 Abstract
To evaluate the code generation capabilities of Large Language Models (LLMs) in complex real-world software development scenarios, many evaluation approaches have been developed. They typically leverage contextual code from the latest version of a project to facilitate LLMs in accurately generating the desired function. However, such evaluation approaches fail to consider the dynamic evolution of software projects over time, which we refer to as the evolving-ignored situation, leading to issues of future context leakage and useful context missing. This in turn results in inaccurate evaluation of LLMs' performance. In this paper, we conduct an empirical study to deeply understand LLMs' code generation performance within settings that reflect the evolving nature of software development. To achieve this, we first construct an evolving-aware repository-level code generation dataset, namely HumanEvo, equipped with an automated execution-based evaluation tool. Second, we manually categorize HumanEvo according to dependency levels to more comprehensively analyze the models' performance in generating functions with different dependency levels. Third, we conduct extensive experiments on HumanEvo with seven representative and diverse LLMs to verify the effectiveness of the proposed benchmark. We obtain many important findings through our experimental study. For example, we find that previous evolving-ignored evaluation approaches lead to inflated performance of the LLMs, ranging from 10.0% to 61.1%. Based on the findings, we give actionable suggestions for more realistic evaluation of LLMs on code generation. We also build a shared evolving-aware code generation toolbox to facilitate future research. The replication package, including source code, datasets, and appendix, is available at https://github.com/DeepSoftwareAnalytics/EvoEval.
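The evolving-aware evaluation the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual toolkit: the function names, the first-parent snapshot rule, and the pytest invocation are all assumptions. The core idea is only that each task is evaluated against the repository state that existed *before* the target function was introduced, so the model never sees future context, and correctness is judged by executing the project's own tests.

```python
import subprocess
from pathlib import Path


def snapshot_ref(introducing_commit: str) -> str:
    """Evolving-aware rule (assumed here): evaluate against the repository
    state just before the target function was introduced, i.e. the first
    parent of the commit that added it, to prevent future-context leakage."""
    return f"{introducing_commit}^"


def checkout_snapshot(repo: Path, introducing_commit: str) -> None:
    """Materialize the historical snapshot with git (detached HEAD)."""
    subprocess.run(
        ["git", "-C", str(repo), "checkout", "--detach",
         snapshot_ref(introducing_commit)],
        check=True,
    )


def evaluate_candidate(repo: Path, target_file: str, generated_code: str) -> bool:
    """Insert the model's generated function into the snapshot and judge
    correctness by whether the project's test suite passes (execution-based
    evaluation; the pytest entry point is an assumption)."""
    (repo / target_file).write_text(generated_code)
    result = subprocess.run(["python", "-m", "pytest", "-q"], cwd=repo)
    return result.returncode == 0
```

In this sketch, the snapshot step is what distinguishes an evolving-aware benchmark from one built on the latest project version: the context handed to the LLM contains only code that actually existed at generation time.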
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' code generation in evolving software projects
Construct evolution-aware dataset for realistic LLM evaluation
Analyze LLM performance across different dependency levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructed evolution-aware repository-level code dataset
Developed automated execution-based evaluation tool
Categorized dataset by dependency levels for analysis
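The dependency-level categorization listed above might look roughly like the sketch below. The labels (self-contained, intra-file, intra-project) and the AST-based heuristic are illustrative assumptions; the paper describes a manual categorization, not this automated rule.

```python
import ast


def dependency_level(func_source: str,
                     file_names: set[str],
                     project_names: set[str]) -> str:
    """Classify a target function by what it depends on (illustrative
    taxonomy, not necessarily the paper's): 'intra-project' if it references
    names defined in other project modules, 'intra-file' if it only uses
    names defined elsewhere in its own file, 'self-contained' otherwise."""
    tree = ast.parse(func_source)
    referenced = {node.id for node in ast.walk(tree)
                  if isinstance(node, ast.Name)}
    if referenced & project_names:
        return "intra-project"
    if referenced & file_names:
        return "intra-file"
    return "self-contained"
```

Grouping benchmark tasks this way lets an evaluation report pass rates separately per level, since functions that reach into other modules are typically much harder for LLMs to generate correctly than self-contained ones.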