Automatically Benchmarking LLM Code Agents through Agent-Driven Annotation and Evaluation

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code agent benchmarks suffer from prohibitively high annotation costs and over-reliance on unit tests as the sole metric. Method: The authors propose an agent-driven benchmark construction pipeline targeting real-world, project-level coding tasks, introducing the "Agent-as-a-Judge" paradigm and the PRDBench benchmark to enable low-cost, high-fidelity, and scalable automated assessment. The approach combines human supervision with agent collaboration to generate realistic tasks, and evaluates submissions holistically against structured requirements documents, reference implementations, and multi-dimensional metrics, including functionality, robustness, maintainability, and efficiency. Contribution/Results: PRDBench covers 50 authentic Python projects across 20 domains. Experiments demonstrate its effectiveness in discriminating among both code-generation and evaluation agents, establishing a new standard for project-level code agent assessment.

📝 Abstract
Recent advances in code agents have enabled automated software development at the project level, supported by large language models (LLMs) and widely adopted tools. However, existing benchmarks for code agent evaluation face two major limitations: high annotation cost and expertise requirements, and rigid evaluation metrics that rely primarily on unit tests. To address these challenges, we propose an agent-driven benchmark construction pipeline that leverages human supervision to efficiently generate diverse and challenging project-level tasks. Based on this approach, we introduce PRDBench, a novel benchmark comprising 50 real-world Python projects across 20 domains, each with structured Product Requirement Document (PRD) requirements, comprehensive evaluation criteria, and reference implementations. PRDBench features rich data sources, high task complexity, and flexible metrics. We further employ an Agent-as-a-Judge paradigm to score agent outputs, enabling the evaluation of various test types beyond unit tests. Extensive experiments on PRDBench demonstrate its effectiveness in assessing the capabilities of both code agents and evaluation agents, providing a scalable and robust framework for annotation and evaluation.
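The Agent-as-a-Judge scoring described above, where submissions are judged against PRD criteria along several dimensions rather than by unit tests alone, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the criterion names and weights come from the summary's listed dimensions, while `judge_agent` is a hypothetical stand-in for an LLM judge call.

```python
# Hedged sketch of Agent-as-a-Judge scoring; all names and weights are
# illustrative, not taken from the paper's actual system.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str        # an evaluation dimension from the PRD, e.g. "functionality"
    weight: float    # relative importance in the aggregate score

def judge_agent(criterion: Criterion, submission: str, reference: str) -> float:
    """Stand-in for an LLM judge returning a score in [0, 1] for one criterion.
    A real judge would prompt an LLM with the PRD, the submission, and the
    reference implementation; here a toy heuristic keeps the sketch runnable."""
    return 1.0 if criterion.name in submission else 0.5

def score_submission(criteria: list[Criterion], submission: str, reference: str) -> float:
    """Weighted aggregate of per-criterion judge scores."""
    total_weight = sum(c.weight for c in criteria)
    return sum(c.weight * judge_agent(c, submission, reference) for c in criteria) / total_weight

criteria = [
    Criterion("functionality", 0.4),
    Criterion("robustness", 0.2),
    Criterion("maintainability", 0.2),
    Criterion("efficiency", 0.2),
]
print(score_submission(criteria, "functionality implemented", "reference code"))  # → 0.7
```

The weighted aggregate is one plausible way to combine dimensions; the paper's flexible metrics could equally report per-dimension scores separately.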
Problem

Research questions and friction points this paper is trying to address.

Automated benchmark construction for code agents using human supervision
Evaluating project-level coding tasks beyond rigid unit test metrics
Assessing LLM code agents through flexible Agent-as-a-Judge paradigm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-driven pipeline generates benchmark tasks
PRDBench includes 50 real-world Python projects
Agent-as-a-Judge paradigm scores diverse test types
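The agent-driven annotation pipeline with human supervision listed above can be sketched as a propose-then-review loop. Everything here is a hypothetical illustration, assuming an annotation agent drafts PRD-style tasks and a human supervisor accepts or rejects them; the function names are not from the paper.

```python
# Hypothetical sketch of agent-driven task generation under human supervision;
# function names and the acceptance check are illustrative assumptions.
def propose_task(domain: str) -> dict:
    """Stand-in for an annotation agent drafting a project-level task with a
    PRD and evaluation criteria for the given domain."""
    return {"domain": domain, "prd": f"Build a {domain} tool", "criteria": ["functionality"]}

def human_review(task: dict) -> bool:
    """Stand-in for the human supervisor's accept/revise decision; here a
    trivial completeness check."""
    return bool(task["prd"] and task["criteria"])

domains = ["web scraping", "data analysis"]
benchmark = [t for t in (propose_task(d) for d in domains) if human_review(t)]
print(len(benchmark))  # → 2 accepted tasks
```

Keeping the human in the loop only at the review step is what makes this construction low-cost relative to fully manual annotation.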