ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks inadequately assess LLM agents’ project-level programming capabilities from the user’s perspective—lacking automation, interpretability, and realistic interaction modeling. Method: We introduce the first automated, user-interaction-simulating benchmark for project-level code generation, supporting multi-granularity inputs (e.g., natural-language requirements, code skeletons). It jointly evaluates engineering competence via execution-based functional correctness verification and structural fidelity via code similarity metrics (CodeBLEU, AST matching). To ensure high-quality, diverse test cases, we propose an LLM-synthesized data construction paradigm enhanced by human validation. Contribution/Results: Experiments identify system engineering proficiency, cross-file global understanding, and holistic analytical reasoning as critical bottlenecks hindering real-world deployment of LLM agents. The benchmark is open-sourced, providing a reproducible, diagnosable, and standardized evaluation framework for programming agent research and development.

📝 Abstract
Recently, LLM agents have made rapid progress in improving their programming capabilities. However, existing benchmarks cannot automatically evaluate project-level code generation from the user's perspective, and their results offer little insight into why an agent's generated code succeeds or fails. We therefore introduce ProjectEval, a new benchmark that automates the evaluation of LLM agents on project-level code generation by simulating user interaction. ProjectEval is constructed by LLMs with human review and provides inputs at three levels of granularity, from natural-language requirements to code skeletons. It evaluates generated projects both by executing them under simulated user interaction and by measuring code similarity with established objective metrics. Through ProjectEval, we find that systematic engineering of project code, overall understanding of the project, and comprehensive analysis capability are the keys for LLM agents to complete practical projects. Our findings and benchmark provide valuable insights for developing more effective programming agents that can be deployed in real-world production.
Problem

Research questions and friction points this paper is trying to address.

Lack of automated evaluation from the user's perspective
Insufficient explainability of LLM agents' code generation results
Need for systematic, project-level code evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates user interaction for automated evaluation
Uses multi-level inputs for comprehensive assessment
Combines execution simulation with code similarity metrics
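The combination in the last bullet — an execution-based pass rate plus a structural similarity score — can be sketched as follows. This is a minimal illustration, not ProjectEval's actual implementation: the `ast_node_similarity` function below is a simplified stand-in for the AST-match component of CodeBLEU (it compares node-type counts rather than matched subtrees), and the `pass_rate` helper assumes test outcomes have already been collected from simulated user interactions.

```python
import ast
from collections import Counter

def ast_node_similarity(ref_code: str, gen_code: str) -> float:
    """Jaccard-style overlap of AST node-type counts.

    A simplified stand-in for the AST-match term in CodeBLEU:
    identical program structure scores 1.0 even when identifier
    names differ, since only node types are counted.
    """
    def node_counts(src: str) -> Counter:
        return Counter(type(n).__name__ for n in ast.walk(ast.parse(src)))
    a, b = node_counts(ref_code), node_counts(gen_code)
    union = sum((a | b).values())
    return sum((a & b).values()) / union if union else 1.0

def pass_rate(test_results: list[bool]) -> float:
    """Fraction of simulated user-interaction test cases that passed."""
    return sum(test_results) / len(test_results) if test_results else 0.0

# Hypothetical example: same structure, different parameter names.
ref = "def add(a, b):\n    return a + b\n"
gen = "def add(x, y):\n    return x + y\n"
report = {
    "pass_rate": pass_rate([True, True, False]),  # 2 of 3 cases pass
    "ast_similarity": ast_node_similarity(ref, gen),  # 1.0: identical ASTs
}
```

Reporting the two numbers separately, rather than folding them into one score, preserves the diagnosability the summary emphasizes: a high similarity with a low pass rate points to superficially plausible but non-functional code, while the reverse suggests a working but structurally divergent solution.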