🤖 AI Summary
This work addresses the lack of systematic evaluation of AI programming agents in end-to-end software project development, as existing benchmarks predominantly focus on problem-level code repair. To bridge this gap, we introduce ProjDevBench—the first multidimensional benchmark specifically designed to assess full-cycle software development capabilities. It comprises 20 tasks spanning eight categories and integrates both automated Online Judge testing and large language model–assisted code review to holistically evaluate agents on system architecture design, functional correctness, and iterative refinement. Experiments on six state-of-the-art LLM-based coding agents reveal an overall acceptance rate of only 27.38%, exposing significant deficiencies in complex system design, time complexity optimization, and resource management. These findings underscore the role of ProjDevBench in addressing the current gap in comprehensive, end-to-end AI programming evaluation.
📝 Abstract
Recent coding agents can generate complete codebases from simple prompts, yet existing evaluations focus on issue-level bug fixing and lag behind the shift toward end-to-end development. We introduce ProjDevBench, an end-to-end benchmark that provides project requirements to coding agents and evaluates the resulting repositories. Combining Online Judge (OJ) testing with LLM-assisted code review, the benchmark evaluates agents on (1) system architecture design, (2) functional correctness, and (3) iterative solution refinement. We curate 20 programming problems across 8 categories, covering both concept-oriented tasks and real-world application scenarios, and evaluate six coding agents built on different LLM backends. Our evaluation reports an overall acceptance rate of 27.38%: agents handle basic functionality and data structures but struggle with complex system design, time complexity optimization, and resource management. Our benchmark is available at https://github.com/zsworld6/projdevbench.